Updates from: 05/04/2022 01:08:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 03/18/2022 Last updated : 05/03/2022
To enable number matching in the Azure AD portal, complete the following steps:
>[!NOTE] >[Least privilege role in Azure Active Directory - Multi-factor Authentication](https://docs.microsoft.com/azure/active-directory/roles/delegate-by-task#multi-factor-authentication)
+Number matching is not supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
## Next steps
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
The Microsoft Azure Management application includes multiple services.
For more information on how to set up a sample policy for Microsoft Azure Management, see [Conditional Access: Require MFA for Azure management](howto-conditional-access-policy-azure-management.md).
-For Azure Government, you should target the Azure Government Cloud Management API application.
+>[!NOTE]
+>For Azure Government, you should target the Azure Government Cloud Management API application.
### Other applications
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
The following options are available to exclude when creating a Conditional Acces
- Directory roles - Allows administrators to select specific Azure AD directory roles used to determine assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. - Users and groups
- - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups.
+ - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
### Preventing administrator lockout
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Most users have a normal behavior that can be tracked, when they fall outside of this norm it could be risky to allow them to just sign in. You may want to block that user or maybe just ask them to perform multi-factor authentication to prove that they are really who they say they are.
-A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
There are two locations where this policy may be configured, Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to utilize other Conditional Access attributes in the policy.
+The sign-in risk-based policy protects users from registering MFA in risky sessions. For example, if users aren't registered for MFA, their risky sign-ins are blocked and presented with the AADSTS53004 error.
+ ## Template deployment Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
The ID element identifies which property on the source provides the value for th
| User | preferreddatalocation | Preferred Data Location | | User | proxyaddresses | Proxy Addresses | | User | usertype | User Type |
+| User | telephonenumber| Business Phones / Office Phones |
| application, resource, audience | displayname | Display Name | | application, resource, audience | objectid | ObjectID | | application, resource, audience | tags | Service Principal Tag |
Based on the method chosen, a set of inputs and outputs is expected. Define the
| User | userprincipalname|User Principal Name| | User | onpremisessamaccountname|On Premises Sam Account Name| | User | employeeid|Employee ID|
+| User | telephonenumber| Business Phones / Office Phones |
| User | extensionattribute1 | Extension Attribute 1 | | User | extensionattribute2 | Extension Attribute 2 | | User | extensionattribute3 | Extension Attribute 3 |
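For reference, the attribute above would typically be emitted through a claims-mapping policy definition. The following is a minimal sketch only; the `SamlClaimType` and `JwtClaimType` values are illustrative placeholders, not values taken from this article.

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      {
        "Source": "user",
        "ID": "telephonenumber",
        "SamlClaimType": "http://schemas.contoso.com/identity/claims/businessphone",
        "JwtClaimType": "business_phones"
      }
    ]
  }
}
```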
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
Previously updated : 09/28/2021 Last updated : 05/02/2022
If you can't safely constrain your test app in your production tenant, you can c
If you don't already have a dedicated test tenant, you can create one for free using the Microsoft 365 Developer Program or manually create one yourself. #### Join the Microsoft 365 Developer Program (recommended)
-The [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program) is free and can have test user accounts and sample data packs automatically added to the tenant.
+The [Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program) is free and can have test user accounts and sample data packs automatically added to the tenant.
1. Click on the **Join now** button on the screen. 2. Sign in with a new Microsoft Account or use an existing (work) account you already have. 3. On the sign-up page select your region, enter a company name and accept the terms and conditions of the program before you click **Next**.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
The AADLoginForWindows extension must install successfully in order for the VM t
1. The Device State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES`. > [!NOTE]
- > Azure AD join activity is captured in Event viewer under the `User Device Registration\Admin` log.
+ > Azure AD join activity is captured in Event viewer under the `User Device Registration\Admin` log at `Event Viewer (local)\Applications` and `Services Logs\Windows\Microsoft\User Device Registration\Admin`.
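To review those entries from an elevated PowerShell prompt, a sketch like the following could be used; the channel name is assumed to correspond to the Event Viewer path above.

```powershell
# List the most recent device registration / Azure AD join events
Get-WinEvent -LogName "Microsoft-Windows-User Device Registration/Admin" -MaxEvents 10 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
```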
If the AADLoginForWindows extension fails with certain error code, you can perform the following steps:
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Then Contoso adds the Fabrikam organization and configures the following **Organ
- Allow all Contoso users and groups to have outbound access to Fabrikam using B2B direct connect. - Allow Contoso B2B direct connect users to have outbound access to all Fabrikam applications.
-For this scenario to work, Fabrikam also needs to allow B2B direct connect with Contoso by configuring these same cross-tenant access settings for Contoso and for their own users and applications. Contoso users who manage Teams shared channels in your organizations will be able to add Fabrikam users by searching for their full Fabrikam email addresses.
+For this scenario to work, Fabrikam also needs to allow B2B direct connect with Contoso by configuring these same cross-tenant access settings for Contoso and for their own users and applications. When configuration is complete, Contoso users who manage Teams shared channels will be able to add Fabrikam users by searching for their full Fabrikam email addresses.
### Example 2: Enable B2B direct connect with Fabrikam's Marketing group only
Starting from the example above, Contoso could also choose to allow only the Fab
- Allow all Contoso users and groups to have outbound access to Fabrikam using B2B direct connect. - Allow Contoso B2B direct connect users to have outbound access to all Fabrikam applications.
-Fabrikam will also need to configure their outbound cross-tenant access settings so that their Marketing group is allowed to collaborate with Contoso through B2B direct connect. Contoso users who manage Teams shared channels in your organizations will be able to add only Fabrikam Marketing group users by searching for their full Fabrikam email addresses.
+Fabrikam will also need to configure their outbound cross-tenant access settings so that their Marketing group is allowed to collaborate with Contoso through B2B direct connect. When configuration is complete, Contoso users who manage Teams shared channels will be able to add only Fabrikam Marketing group users by searching for their full Fabrikam email addresses.
## Authentication
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
na Previously updated : 12/17/2021 Last updated : 05/02/2022
The type of a user. Examples include `member`, `guest`, or `external`.
This attribute describes the type of cross-tenant access used by the actor to access the resource. Possible values are: -- `none`-- `b2bCollaboration`-- `b2bDirectConnect`-- `microsoftSupport`-- `serviceProvider`-- `unknownFutureValue`
+- `none` - A sign-in event that did not cross an Azure AD tenant's boundaries.
+- `b2bCollaboration`- A cross tenant sign-in performed by a guest user using B2B Collaboration.
+- `b2bDirectConnect` - A cross-tenant sign-in performed by a B2B direct connect user.
+- `microsoftSupport`- A cross tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant.
+- `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant.
+- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](https://docs.microsoft.com/graph/best-practices-concept).
If the sign-in didn't pass the boundaries of a tenant, the value is `none`.
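If sign-in logs are streamed to a Log Analytics workspace, a query along these lines could surface the attribute. This is a sketch only; the `CrossTenantAccessType` column name is an assumption based on the attribute described above.

```kusto
// Cross-tenant sign-ins performed through B2B collaboration (column name assumed)
SigninLogs
| where CrossTenantAccessType == "b2bCollaboration"
| project TimeGenerated, UserPrincipalName, AppDisplayName, CrossTenantAccessType
```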
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Previously updated : 10/08/2021 Last updated : 05/03/2022 #Customer intent: As an administrator, I am trying to learn how to use the Request Service API and integrate it into my business application.
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Previously updated : 10/08/2021 Last updated : 05/03/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Now that you have a new credential, you're going to gather some information abou
The sample application is available in .NET, and the code is maintained in a GitHub repository. Download the sample code from [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet), or clone the repository to your local machine:
-```bash
+```
git clone https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet.git ```
The following JSON demonstrates a complete *appsettings.json* file:
Now you're ready to issue your first verified credential expert card by running the sample application.
-1. From Visual Studio Code, run the *Verifiable_credentials_DotNet* project. Or, from the command shell, run the following commands:
+1. From Visual Studio Code, run the *Verifiable_credentials_DotNet* project. Or, from your operating system's command line, run:
- ```bash
+ ```
cd active-directory-verifiable-credentials-dotnet/1-asp-net-core-api-idtokenhint dotnet build "AspNetCoreVerifiableCredentials.csproj" -c Debug -o .\\bin\\Debug\\netcoreapp3. dotnet run ```
-1. In another terminal, run the following command. This command runs [ngrok](https://ngrok.com/) to set up a URL on 3000, and make it publicly available on the internet.
+1. In another command prompt window, run the following command. This command runs [ngrok](https://ngrok.com/) to set up a URL on 5000, and make it publicly available on the internet.
- ```bash
+ ```
ngrok http 5000 ```
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
Last updated 03/17/2021
# Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service
-Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based a user's identity or group membership.
+Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based on a user's identity or group membership.
This article shows you how to control access using Kubernetes RBAC in an AKS cluster based on Azure AD group membership. Example groups and users are created in Azure AD, then Roles and RoleBindings are created in the AKS cluster to grant the appropriate permissions to create and view resources.
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-static-ip.md
Next, create a public IP address with the *static* allocation method using the [
-> [!NOTE]
-> The above commands create an IP address that will be deleted if you delete your AKS cluster. Alternatively, you can create an IP address in a different resource group which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as *Network Contributor*. For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
+The above commands create an IP address that will be deleted if you delete your AKS cluster.
+
+Alternatively, you can create an IP address in a different resource group which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following:
+
+* The cluster identity used by the AKS cluster has delegated permissions to the resource group, such as *Network Contributor*.
+* Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>"` parameter. Replace `<RESOURCE_GROUP>` with the name of the resource group where the IP address resides.
+
+For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
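As an illustration of the bullets above, a minimal Azure CLI sketch could look like the following. The resource and cluster names are placeholders, and a system-assigned cluster identity is assumed.

```azurecli-interactive
# Create the static public IP in a resource group that outlives the AKS cluster
az network public-ip create \
    --resource-group myNetworkResourceGroup \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static

# Grant the cluster identity Network Contributor on that resource group
CLIENT_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query identity.principalId -o tsv)
RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv)
az role assignment create --assignee "$CLIENT_ID" --role "Network Contributor" --scope "$RG_SCOPE"
```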
Now deploy the *nginx-ingress* chart with Helm. For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
Update the following script with the **IP address** of your ingress controller and a **unique name** that you would like to use for the FQDN prefix.
-> [!IMPORTANT]
-> You must update replace `<STATIC_IP>` and `<DNS_LABEL>` with your own IP address and unique name when running the command. The DNS_LABEL must be unique within the Azure region.
+Replace `<STATIC_IP>` and `<DNS_LABEL>` with your own IP address and unique name when running the command. The DNS_LABEL must be unique within the Azure region.
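A sketch of such a script using the Azure CLI might look like the following; the values are placeholders you replace before running it.

```azurecli-interactive
# Public IP address of your ingress controller and the DNS label to assign
STATIC_IP="<STATIC_IP>"
DNS_LABEL="<DNS_LABEL>"

# Look up the resource ID of the public IP, then attach the DNS label to it
PUBLIC_IP_ID=$(az network public-ip list --query "[?ipAddress=='$STATIC_IP'].id" -o tsv)
az network public-ip update --ids "$PUBLIC_IP_ID" --dns-name "$DNS_LABEL"
```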
### [Azure CLI](#tab/azure-cli)
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub) - uses Dapr runtime to publish a message to a Publish/Subscribe topic. - [Trigger output binding](api-management-dapr-policies.md#bind) - uses Dapr runtime to invoke an external system via output binding.
-## [Graph QL validation policy](graphql-validation-policies.md)
+## [GraphQL validation policy](graphql-validation-policies.md)
- [Validate GraphQL request](graphql-validation-policies.md#validate-graphql-request) - Validates and authorizes a request to a GraphQL API. ## [Transformation policies](api-management-transformation-policies.md)
For more information about working with policies, see:
+ [Tutorial: Transform and protect your API](transform-api.md) + [Set or edit policies](set-edit-policies.md)
-+ [Policy samples](./policies/index.md)
++ [Policy samples](./policies/index.md)
app-service App Service Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-plan-manage.md
You can move an app to another App Service plan, as long as the source plan and
> [!NOTE] > Azure deploys each new App Service plan into a deployment unit, internally called a webspace. Each region can have many webspaces, but your app can only move between plans that are created in the same webspace. An App Service Environment is an isolated webspace, so apps can be moved between plans in the same App Service Environment, but not between plans in different App Service Environments. >
-> You can't specify the webspace you want when creating a plan, but it's possible to ensure that a plan is created in the same webspace as an existing plan. In brief, all plans created with the same resource group and region combination are deployed into the same webspace. For example, if you created a plan in resource group A and region B, then any plan you subsequently create in resource group A and region B is deployed into the same webspace. Note that plans can't move webspaces after they're created, so you can't move a plan into "the same webspace" as another plan by moving it to another resource group.
+> You can't specify the webspace you want when creating a plan, but it's possible to ensure that a plan is created in the same webspace as an existing plan. In brief, all plans created with the same resource group, region, and operating system combination are deployed into the same webspace. For example, if you created a plan in resource group A and region B, then any plan you subsequently create in resource group A and region B is deployed into the same webspace. Note that plans can't move webspaces after they're created, so you can't move a plan into "the same webspace" as another plan by moving it to another resource group.
> 1. In the [Azure portal](https://portal.azure.com), search for and select **App services** and select the app that you want to move.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
Once the certificate is added to your App Service app or [function app](../azure-functions/index.yml), you can [secure a custom DNS name with it](configure-ssl-bindings.md) or [use it in your application code](configure-ssl-certificate-in-code.md). > [!NOTE]
-> A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group and region combination (internally called a *webspace*). This makes the certificate accessible to other apps in the same resource group and region combination.
+> A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group, region and operating system combination (internally called a *webspace*). This makes the certificate accessible to other apps in the same resource group and region combination.
The following table lists the options you have for adding certificates in App Service:
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. Previously updated : 03/09/2022 Last updated : 05/03/2022 # App Service, Functions, and Logic Apps on Azure Arc (Preview)
The following public preview limitations apply to App Service Kubernetes environ
| Supported Azure regions | East US, West Europe | | Cluster networking requirement | Must support `LoadBalancer` service type | | Cluster storage requirement | Must have cluster attached storage class available for use by the extension to support deployment and build of code-based apps where applicable |
-| Feature: Networking | [Not available (rely on cluster networking)](#are-networking-features-supported) |
+| Feature: Networking | [Not available (rely on cluster networking)](#are-all-networking-features-supported) |
| Feature: Managed identities | [Not available](#are-managed-identities-supported) | | Feature: Key vault references | Not available (depends on managed identities) | | Feature: Pull images from ACR with managed identity | Not available (depends on managed identities) |
Only one Kubernetes environment resource can be created in a custom location. In
- [Which built-in application stacks are supported?](#which-built-in-application-stacks-are-supported) - [Are all app deployment types supported?](#are-all-app-deployment-types-supported) - [Which App Service features are supported?](#which-app-service-features-are-supported)-- [Are networking features supported?](#are-networking-features-supported)
+- [Are all networking features supported?](#are-all-networking-features-supported)
- [Are managed identities supported?](#are-managed-identities-supported) - [Are there any scaling limits?](#are-there-any-scaling-limits) - [What logs are collected?](#what-logs-are-collected)
FTP deployment is not supported. Currently `az webapp up` is also not supported.
During the preview period, certain App Service features are being validated. When they're supported, their left navigation options in the Azure portal will be activated. Features that are not yet supported remain grayed out.
-### Are networking features supported?
+### Are all networking features supported?
-No. Networking features such as hybrid connections, Virtual Network integration, or IP restrictions, are not supported. Networking should be handled directly in the networking rules in the Kubernetes cluster itself.
+No. Networking features such as hybrid connections or Virtual Network integration aren't supported. [Access restriction](app-service-ip-restrictions.md) support was added in April 2022. Networking should be handled directly in the networking rules in the Kubernetes cluster itself.
### Are managed identities supported?
If your extension was in the stable version and auto-upgrade-minor-version is se
az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.12.2 ```
+### Application services extension v 0.13.0 (April 2022)
+
+- Added support for Application Insights codeless integration for Node.js applications
+- Added support for [Access Restrictions](app-service-ip-restrictions.md) via CLI
+- More details provided when extension fails to install, to assist with troubleshooting issues
+
+If your extension was in the stable version and auto-upgrade-minor-version is set to true, the extension upgrades automatically. To manually upgrade the extension to the latest version, you can run the command:
+
+```azurecli-interactive
+ az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.13.0
+```
+### Application services extension v 0.13.1 (April 2022)
+
+- Update to resolve upgrade failures seen during auto upgrade of clusters to v 0.13.0
+
+If your extension was in the stable version and auto-upgrade-minor-version is set to true, the extension upgrades automatically. To manually upgrade the extension to the latest version, you can run the command:
+
+```azurecli-interactive
+ az k8s-extension update --cluster-type connectedClusters -c <clustername> -g <resource group> -n <extension name> --release-train stable --version 0.13.1
+```
+ ## Next steps [Create an App Service Kubernetes environment (Preview)](manage-create-arc-environment.md)
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
# Language support for Form Recognizer
- This table lists the written languages supported by each Form Recognizer service.
+This article covers the supported languages for text and field **extraction (by feature)** and **[detection (Read only)](#detected-languages-read-api)**. Both groups are mutually exclusive.
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD024 -->
The following lists include the currently GA languages in for the v2.1 version a
To use the preview languages, refer to the [v3.0 REST API migration guide](/rest/api/medi).
-### Handwritten languages (preview and GA)
+### Handwritten text (preview and GA)
The following table lists the supported languages for extracting handwritten texts.
The following table lists the supported languages for extracting handwritten tex
|German (preview) |`de`|Spanish (preview) |`es`| |Italian (preview) |`it`|
-### Print languages (preview)
+### Print text (preview)
This section lists the supported languages for extracting printed texts in the latest preview.
This section lists the supported languages for extracting printed texts in the l
|Kurukh (Devanagari) | `kru`|Welsh | `cy` |Kyrgyz (Cyrillic) | `ky`
-### Print languages (GA)
+### Print text (GA)
This section lists the supported languages for extracting printed texts in the latest GA version.
Language| Locale code |
## Detected languages: Read API
-The [Read API](concept-read.md) supports language detection for the following languages:
+The [Read API](concept-read.md) supports detecting the following languages in your documents. This list may include languages not currently supported for text extraction.
> [!NOTE] > **Language detection** >
-> Form Recognizer read model can _detect_ a wide range of languages, variants, dialects, and some regional/cultural languages and return a language code.
->
-> This section lists the languages that can be detected using the Read API. To determine if text can also be _extracted_ for a given language, see [handwritten](#handwritten-languages-preview-and-ga), [print preview](#print-languages-preview), and [print GA](#print-languages-ga) language extraction lists (above).
+> Form Recognizer read model can _detect_ possible presence of languages and returns language codes for detected languages. To determine if text can also be
+> extracted for a given language, see previous sections.
+ | Language | Code | |||
attestation Author Sign Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/author-sign-policy.md
+ # How to author an attestation policy Attestation policy is a file uploaded to Microsoft Azure Attestation. Azure Attestation offers the flexibility to upload a policy in an attestation-specific policy format. Alternatively, an encoded version of the policy, in JSON Web Signature, can also be uploaded. The policy administrator is responsible for writing the attestation policy. In most attestation scenarios, the relying party acts as the policy administrator. The client making the attestation call sends attestation evidence, which the service parses and converts into incoming claims (set of properties, value). The service then processes the claims, based on what is defined in the policy, and returns the computed result.
After creating a policy file, to upload a policy in JWS format, follow the below
## Next steps - [Set up Azure Attestation using PowerShell](quickstart-powershell.md) - [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation)
+- [Learn more about policy versions](policy-version-1-0.md)
attestation Claim Rule Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-rule-functions.md
+
+ Title: Azure Attestation claim rule functions
+description: Claim rule concepts for Azure Attestation Policy.
++++ Last updated : 04/05/2022++++
+# Claim Rule functions (supported in policy version 1.2+)
+
+The attestation policy language is updated to enable querying of the incoming evidence in a JSON format. This section explains all the improvements made to the policy language.
+
+One new operator and six functions are introduced in this policy version.
+
+## Functions
+
+Attestation evidence, which was earlier processed to generate specific incoming claims, is now made available to the policy writer in the form of a JSON input. The policy language is also updated to use functions to extract the information. As specified in [Azure Attestation claim rule grammar](claim-rule-grammar.md), the language supports three value types and an implicit assignment operator.
+
+- Boolean
+- Integer
+- String
+- Claim property access
+
+The new function call expression can now be used to process the incoming claims set. The function call expression can be used with a claim property and is structured as:
+
+```JSON
+value=FunctionName((Expression (, Expression)*)?)
+```
+
+The function call expression starts with the name of the function, followed by parentheses containing zero or more arguments separated by comma. Since the arguments of the function are expressions, the language allows specifying a function call as an argument to another function. The arguments are evaluated from left to right before the evaluation of the function begins.
+
+The following functions are implemented in the policy language:
+
+## JmesPath function
+
+JmesPath is a query language for JSON. It specifies several operators and functions that can be used to search data in a JSON document. The search always returns a valid JSON as output. The JmesPath function is structured as:
+
+```JSON
+JmesPath(Expression, Expression)
+```
+
+### Argument constraints
+
+The function requires exactly two arguments. The first argument must evaluate to non-empty JSON as a string. The second argument must evaluate to a non-empty, valid JmesPath query string.
+
+### Evaluation
+
+Evaluates the JmesPath expression represented by the second argument, against JSON data represented by the first argument and returns the result JSON as a string.
+
+### Effect of read-only arguments on result
+
+The result JSON string is read-only if either of the input arguments are retrieved from read-only claims.
+
+### Usage example
+
+#### Literal arguments (Boolean, Integer, String)
+
+Consider the following rule:
+
+```JSON
+=>add(type="JmesPathResult", value=JmesPath("{\"foo\": \"bar\"}", "foo"));
+```
+
+During the evaluation of this rule, the JmesPath function is evaluated. The evaluation boils down to evaluating JmesPath query *foo* against the JSON *{ "foo" : "bar" }.* The result of this search operation is a JSON value *"bar".* So, the evaluation of the rule adds a new claim with type *"JmesPathResult"* and string value *"bar"* to the incoming claims set.
+
+Notice that the backslash character is used to escape the double-quote character within the literal string representing the JSON data.
+
+#### Arguments constructed from claims
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: {type="JsonData", value="{\"values\": [0,1,2,3,4]}"}
+Claim2: {type="JmesPathQuery", value="values[2]"}
+```
+
+A JmesPath query can be written as following:
+
+```JSON
+c1:[type=="JsonData"] && c2:[type=="JmesPathQuery"] =>
+add(type="JmesPathResult", value=JmesPath(c1.value, c2.value));
+```
+
+The evaluation of the JmesPath function boils down to evaluating the JmesPath query *values[2]* against the JSON *{"values":[0,1,2,3,4]}*. The evaluation of the rule therefore adds a new claim with type *"JmesPathResult"* and string value *"2"* to the incoming claims set, which is updated as:
+
+```JSON
+Claim1: {type="JsonData", value="{\"values\": [0,1,2,3,4]}"}
+Claim2: {type="JmesPathQuery", value="values[2]"}
+Claim3: {type="JmesPathResult", value="2"}
+```
+
+## JsonToClaimValue function
+
+JSON specifies six types of values:
+
+- Number (can be integer or a fraction)
+- Boolean
+- String
+- Object
+- Array
+- Null
+
+The policy language only supports four types of claim values:
+
+- Integer
+- Boolean
+- String
+- Set
+
+In the policy language, a JSON value is represented as a string claim whose value is equal to the string representation of the JSON value. The JsonToClaimValue function is used to convert JSON values to claim values that the policy language supports. The function is structured as:
+
+```JSON
+JsonToClaimValue(Expression)
+```
+
+### Argument constraints
+
+The function requires exactly one argument, which must evaluate to a valid JSON as a string.
+
+### Evaluation
+
+Here is how each type of the JSON values are converted to a claim value:
+
+- **Number**: If the number is an integer, the function returns a claim value with same integer value. If the number is a fraction, an error is generated.
+- **Boolean**: The function returns a claim value with the same Boolean value.
+- **String**: The function returns a claim value with the same string value.
+- **Object**: The function does not support JSON objects. If the argument is a JSON object, an error is generated.
+- **Array**: The function only supports arrays of primitive (number, Boolean, string, null) types. Such an array is converted to a set, containing claim with the same type but with values created by converting the JSON values from the array. If the argument to the function is an array of non-primitive (object, array) types, an error is generated.
+- **Null**: If the input is a JSON null, the function returns an empty claim value. If such a claim value is used to construct a claim, the claim is an empty claim. If a rule attempts to add or issue an empty claim, no claim is added to the incoming or the outgoing claims set. In other words, a rule attempting to add or issue an empty claim results in a no-op.
+
+### Effect of read-only arguments on result
+
+The resulting claim value or values are read-only if the input argument is read-only.
+
+### Usage example
+
+#### JSON Number/Boolean/String
+
+Assume the following claims are available in the incoming claim set:
+
+```JSON
+Claim1: { type="JsonIntegerData", value="100" }
+Claim2: { type="JsonBooleanData", value="true" }
+Claim3: { type="JsonStringData", value="abc" }
+```
+
+Evaluating rule:
+
+```JSON
+c:[type=="JsonIntegerData"] => add(type="IntegerResult", value=JsonToClaimValue(c.value));
+c:[type=="JsonBooleanData"] => add(type="BooleanResult", value=JsonToClaimValue(c.value));
+c:[type=="JsonStringData"] => add(type="StringResult", value=JsonToClaimValue(c.value));
+```
+
+Updated Incoming claims set:
+
+```JSON
+Claim1: { type = "JsonIntegerData", value="100" }
+Claim2: { type = "IntegerResult", value="100" }
+Claim3: { type = "JsonBooleanData", value="true" }
+Claim4: { type = "BooleanResult", value="true" }
+Claim5: { type = "JsonStringData", value="abc" }
+Claim6: { type = "StringResult", value="abc" }
+```
+
+#### JSON Array
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: { type="JsonData", value="[0, \"abc\", true]" }
+```
+
+Evaluating rule:
+
+```JSON
+c:[type=="JsonData"] => add(type="Result", value=JsonToClaimValue(c.value));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: { type="JsonData", value="[0, \"abc\", true]" }
+Claim2: { type="Result", value=0 }
+Claim3: { type="Result", value="abc" }
+Claim4: { type="Result", value=true}
+```
+
+Note that the type in the claims is the same and only the value differs. If multiple entries exist with the same value in the array, multiple claims will be created.
+
+#### JSON Null
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: { type="JsonData", value="null" }
+```
+
+Evaluating rule:
+
+```JSON
+c:[type=="JsonData"] => add(type="Result", value=JsonToClaimValue(c.value));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: { type="JsonData", value="null" }
+```
+
+The rule attempts to add a claim with type *Result* and an empty value. Since that's not allowed, no claim is created, and the incoming claims set remains unchanged.
+
+## IsSubsetOf function
+
+This function is used to check if a set of claims is subset of another set of claims. The function is structured as:
+
+```JSON
+IsSubsetOf(Expression, Expression)
+```
+
+### Argument constraints
+
+This function requires exactly two arguments. Both arguments can be sets of any cardinality. The sets in policy language are inherently heterogeneous, so there is no restriction on which type of values can be present in the argument sets.
+
+### Evaluation
+
+The function checks if the set represented by the first argument is a subset of the set represented by the second argument. If so, it returns true, otherwise it returns false.
+
+### Effect of read-only arguments on result
+
+Since the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
+
+### Usage example
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: { type="Subset", value="abc" }
+Claim2: { type="Subset", value=100 }
+Claim3: { type="Superset", value=true }
+Claim4: { type="Superset", value="abc" }
+Claim5: { type="Superset", value=100 }
+```
+
+Evaluating rule:
+
+```JSON
+c1:[type == "Subset"] && c2:[type=="Superset"] => add(type="IsSubset", value=IsSubsetOf(c1.value, c2.value));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: { type="Subset", value="abc" }
+Claim2: { type="Subset", value=100 }
+Claim3: { type="Superset", value=true }
+Claim4: { type="Superset", value="abc" }
+Claim5: { type="Superset", value=100 }
+Claim6: { type="IsSubset", value=true }
+```
+
+## AppendString function
+This function is used to append two string values. The function is structured as:
+
+```JSON
+AppendString(Expression, Expression)
+```
+
+### Argument constraints
+
+This function requires exactly two arguments. Both arguments must evaluate to string values. Empty string arguments are allowed.
+
+### Evaluation
+
+This function appends the string value of the second argument to the string value of the first argument and returns the result string value.
+
+### Effect of read-only arguments on result
+
+The result string value is considered to be read-only if either of the arguments are retrieved from read-only claims.
+
+### Usage example
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: { type="String1", value="abc" }
+Claim2: { type="String2", value="xyz" }
+```
+
+Evaluating rule:
+
+```JSON
+c1:[type=="String1"] && c2:[type=="String2"] => add(type="Result", value=AppendString(c1.value, c2.value));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: { type="String1", value="abc" }
+Claim2: { type="String2", value="xyz" }
+Claim3: { type="Result", value="abcxyz" }
+```
+
+## NegateBool function
+This function is used to negate a Boolean claim value. The function is structured as:
+
+```JSON
+NegateBool(Expression)
+```
+
+### Argument constraints
+
+The function requires exactly one argument, which must evaluate to a Boolean value.
+
+### Evaluation
+
+This function negates the Boolean value represented by the argument and returns the negated value.
+
+### Effect of read-only arguments on result
+
+The resultant Boolean value is considered to be read-only if the argument is retrieved from a read-only claim.
+
+### Usage example
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: { type="Input", value=true }
+```
+
+Evaluating rule:
+
+```JSON
+c:[type=="Input"] => add(type="Result", value=NegateBool(c.value));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: { type="Input", value=true }
+Claim2: { type="Result", value=false }
+```
+
+## ContainsOnlyValue function
+
+This function is used to check if a set of claims only contains a specific claim value. The function is structured as:
+
+```JSON
+ContainsOnlyValue(Expression, Expression)
+```
+
+### Argument constraints
+
+This function requires exactly two arguments. The first argument can evaluate to a heterogeneous set of any cardinality. The second argument must evaluate to a single value of any type (Boolean, string, integer) supported by the policy language.
+
+### Evaluation
+
+The function returns true if the set represented by the first argument is not empty and only contains the value represented by the second argument. The function returns false if the set represented by the first argument is empty or contains any value other than the value represented by the second argument.
+
+### Effect of read-only arguments on result
+
+Since the function simply creates and returns a Boolean value, the returned claim value is always non-read-only.
+
+### Usage example
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: {type="Set", value=100}
+Claim2: {type="Set", value=101}
+```
+
+Evaluating rule:
+
+```JSON
+c:[type=="Set"] => add(type="Result", value=ContainsOnlyValue(c.value, 100));
+```
+
+Updated incoming claims set:
+
+```JSON
+Claim1: {type="Set", value=100}
+Claim2: {type="Set", value=101}
+Claim3: {type="Result", value=false}
+```
+
+## Not Condition Operator
+
+The rules in the policy language start with an optional list of conditions that act as filtering criteria on the incoming claims set. The conditions can be used to identify if a claim is present in the incoming claims set. But there was no way of checking if a claim was absent. So, a new operator (!) was introduced that could be applied to the individual conditions in the conditions list. This operator changes the evaluation behavior of the condition from checking the presence of a claim to checking the absence of a claim.
+
+### Usage example
+
+Assume the following claims are available in the incoming claims set:
+
+```JSON
+Claim1: {type="Claim1", value=100}
+Claim2: {type="Claim2", value=200}
+```
+
+Evaluating rule:
+
+```JSON
+![type=="Claim3"] => add(type="Claim3", value=300)
+```
+
+This rule effectively translates to: *If a claim with type "Claim3" is not present in the incoming claims set, add a new claim with type "Claim3" and value 300 to the incoming claims set.*
+
+Updated incoming claims set:
+
+```JSON
+Claim1: {type="Claim1", value=100}
+Claim2: {type="Claim2", value=200}
+Claim3: {type="Claim3", value=300}
+```
+
+### Sample Policy using policy version 1.2
+
+Windows [implements measured boot](/windows/security/information-protection/secure-the-windows-10-boot-process). Combined with attestation, these measurements can be used securely and reliably to detect and protect against vulnerable and malicious boot components, and they can now be used to build the attestation policy.
+
+```
+version=1.2;
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+
+c:[type=="efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);
+
+};
+```
attestation Claim Rule Grammar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-rule-grammar.md
The set of actions that are allowed in a policy are described below.
## Next steps - [How to author and sign an attestation policy](author-sign-policy.md)-- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)-
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md
c:[type=="x-ms-sgx-mrsigner"] => issue(type="<custom-name>", value=c.value);
}; ```+ For more information on the incoming claims generated by Azure Attestation, see [claim sets](./claim-sets.md). Incoming claims can be used by policy authors to define authorization rules in a custom policy.
-Issuance rules section is not mandatory. This section can be used by the users to have additional outgoing claims generated in the attestation token with custom names. For more information on the outgoing claims generated by the service in attestation token, see [claim sets](./claim-sets.md).
+Issuance rules section isn't mandatory. This section can be used by the users to have additional outgoing claims generated in the attestation token with custom names. For more information on the outgoing claims generated by the service in attestation token, see [claim sets](./claim-sets.md).
## Default policy for an SGX enclave
issuancerules {
}; ```
-Claims used in default policy are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names. For more information on the recommended claim names, see [claim sets](./claim-sets.md).
+Claims used in default policy are considered deprecated but are fully supported and will continue to be included in the future. It's recommended to use the non-deprecated claim names. For more information on the recommended claim names, see [claim sets](./claim-sets.md).
+
+## Sample policy for TPM using Policy version 1.0
+
+```
+version=1.0;
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules
+{
+[type=="aikValidated", value==true]&&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="vbsEnabled", value==true] &&
+[type=="notWinPE", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
+};
+```
+
+This is a simple TPM attestation policy that can be used to verify minimal aspects of the boot.
+
+## Sample policy for TPM using Policy version 1.2
+
+```
+version=1.2;
+
+configurationrules{
+ => issueproperty(type="required_pcr_mask", value=131070);
+ => issueproperty(type="require_valid_aik_cert", value=false);
+};
+
+authorizationrules {
+c:[type == "tpmVersion", issuer=="AttestationService", value==2] => permit();
+};
+
+issuancerules{
+
+c:[type == "aikValidated", issuer=="AttestationService"] =>issue(type="aikValidated", value=c.value);
+
+// SecureBoot enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+c:[type == "efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "SecureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="SecureBootEnabled", issuer=="AttestationPolicy"] => issue(type="SecureBootEnabled", value=false);
+
+// Retrieve bool properties Code integrity
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));
+c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));
+![type=="CodeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=false);
+
+// Bitlocker Boot Status, The first non zero measurement or zero.
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="BitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK | @[? Value != `0`].Value | @[0]")));
+[type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=true);
+![type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=false);
+
+// Elam Driver (windows defender) Loaded
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`")));
+[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=true);
+![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=false);
+
+};
+
+```
+
+The policy uses the TPM version to restrict attestation calls. The issuancerules section looks at various properties measured during boot.
## Sample custom policy to support multiple SGX enclaves
eyJhbGciOiJub25lIn0.eyJBdHRlc3RhdGlvblBvbGljeSI6ICJkbVZ5YzJsdmJqMGdNUzR3TzJGMWRH
eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZd0RRWUpLb1pJaHZjTkFRRUxCUUF3SWpFZ01CNEdBMVVFQXhNWFFYUjBaWE4wWVhScGIyNURaWEowYVdacFkyRjBaVEF3SGhjTk1qQXhNVEl6TVRneU1EVXpXaGNOTWpFeE1USXpNVGd5TURVeldqQWlNU0F3SGdZRFZRUURFeGRCZEhSbGMzUmhkR2x2YmtObGNuUnBabWxqWVhSbE1EQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUpyRWVNSlo3UE01VUJFbThoaUNLRGh6YVA2Y2xYdkhmd0RIUXJ5L3V0L3lHMUFuMGJ3MVU2blNvUEVtY2FyMEc1WmYxTUR4alZOdEF5QjZJWThKLzhaQUd4eFFnVVZsd1dHVmtFelpGWEJVQTdpN1B0NURWQTRWNlx1MDAyQkJnanhTZTBCWVpGYmhOcU5zdHhraUNybjYwVTYwQUU1WFx1MDAyQkE1M1JvZjFUUkNyTXNLbDRQVDRQeXAzUUtNVVlDaW9GU3d6TkFQaU8vTy9cdTAwMkJIcWJIMXprU0taUXh6bm5WUGVyYUFyMXNNWkptRHlyUU8vUFlMTHByMXFxSUY2SmJsbjZEenIzcG5uMXk0Wi9OTzJpdFBxMk5Nalx1MDAyQnE2N1FDblNXOC9xYlpuV3ZTNXh2S1F6QVR5VXFaOG1PSnNtSThUU05rLzBMMlBpeS9NQnlpeDdmMTYxQ2tjRm1LU3kwQ0F3RUFBYU1RTUE0d0RBWURWUjBUQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZ1ZKVWRCaXRud3ZNdDdvci9UMlo4dEtCUUZsejFVcVVSRlRUTTBBcjY2YWx2Y2l4VWJZR3gxVHlTSk5pbm9XSUJROU9QamdMa1dQMkVRRUtvUnhxN1NidGxqNWE1RUQ2VjRyOHRsejRISjY0N3MyM2V0blJFa2o5dE9Gb3ZNZjhOdFNVeDNGTnBhRUdabDJMUlZHd3dcdTAwMkJsVThQd0gzL2IzUmVCZHRhQTdrZmFWNVx1MDAyQml4ZWRjZFN5S1F1VkFUbXZNSTcxM1A4VlBsNk1XbXNNSnRrVjNYVi9ZTUVzUVx1MDAyQkdZcU1yN2tLWGwxM3lldUVmVTJWVkVRc1ovMXRnb29iZVZLaVFcdTAwMkJUcWIwdTJOZHNcdTAwMkJLamRIdmFNYngyUjh6TDNZdTdpR0pRZnd1aU1tdUxSQlJwSUFxTWxRRktLNmRYOXF6Nk9iT01zUjlpczZ6UDZDdmxGcEV6bzVGUT09Il19.eyJBdHRlc3RhdGlvblBvbGljeSI6ImRtVnljMmx2YmoweExqQTdZWFYwYUc5eWFYcGhkR2x2Ym5KMWJHVnpJSHRqT2x0MGVYQmxQVDBpSkdsekxXUmxZblZuWjJGaWJHVWlYU0FtSmlCYmRtRnNkV1U5UFhSeWRXVmRJRDAtSUdSbGJua29LVHM5UGlCd1pYSnRhWFFvS1R0OU8ybHpjM1ZoYm1ObGNuVnNaWE1nZXlBZ0lDQmpPbHQwZVhCbFBUMGlKR2x6TFdSbFluVm5aMkZpYkdVaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKT2IzUkVaV0oxWjJkaFlteGxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdJQ0FnSUdNNlczUjVjR1U5UFNJa2FYTXRaR1ZpZFdkbllXSnNaU0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbWx6TFdSbFluVm5aMkZpYkdVaUxDQjJZV3gxWlQxakxuWmhiSFZsS1RzZ0lDQWdZenBiZEhsd1pUMDlJaVJ6WjNndGJYSnphV2R1WlhJaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKelozZ3RiWEp6YVdkdVpYSWlMQ0IyWVd4MVpUMWpMblpoYkhWbEtUc2dJQ0FnWXpwYmRIbHdaVDA5SWlSelozZ3RiWEpsYm1Oc1lYWmxJbDBnUFQ0Z2FYTnpkV1VvZEhsd1pUMGljMmQ0TFcxeVpXNWpiR0YyWlNJc0lIWmhiSFZsUFdNdWRtRnNkV1VwT3lBZ0lDQmpPbHQwZVhCbFBUMGlKSEJ5YjJSMVkzUXRhV1FpWFNBOVBpQnBjM04xWlNoMGVYQmxQU0p3Y205a2RXTjBMV2xrSWl3Z2RtRnNkV1U5WXk1MllXeDFaU2s3SUNBZ0lHTTZXM1I1Y0dVOVBTSWtjM1p1SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzNadUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN0lDQWdJR002VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.c0l-xqGDFQ8_kCiQ0_vvmDQYG_u544CYmoiucPNxd9MU8ZXT69UD59UgSuya2yl241NoVXA_0LaMEB2re0JnTbPD_dliJn96HnIOqnxXxRh7rKbu65ECUOMWPXbyKQMZ0I3Wjhgt_XyyhfEiQGfJfGzA95-wm6yWqrmW7dMI7JkczG9ideztnr0bsw5NRsIWBXOjVy7Bg66qooTnODS_OqeQ4iaNsN-xjMElHABUxXhpBt2htbhemDU1X41o8clQgG84aEHCgkE07pR-7IL_Fn2gWuPVC66yxAp00W1ib2L-96q78D9J52HPdeDCSFio2RL7r5lOtz8YkQnjacb6xA ```
-</br>
- ## Next steps - [How to author and sign an attestation policy](author-sign-policy.md)
attestation Policy Version 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-0.md
+
+ Title: Azure Attestation policy version 1.0
+description: policy version 1.0.
++++ Last updated : 04/05/2022++++
+# Attestation Policy Version 1.0
+
+Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
+This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy; however, the supported grammar and processing are broadly the same.
+
+## Policy version 1.0
+
+The minimum version of the policy supported by the service is version 1.0.
++
+The attestation service flow is as follows:
+- The platform sends the attestation evidence in the attest call to the attestation service.
+- The attestation service parses the evidence and creates a list of claims that is then used in the attestation evaluation. These claims are logically categorized as incoming claims sets.
+- The uploaded attestation policy is used to evaluate the evidence over the rules authored in the attestation policy.
+
+For Policy version 1.0:
+
+The policy has three segments, as seen above:
+
+- **version**: The version is the version number of the grammar that is followed.
+- **authorizationrules**: A collection of claim rules that will be checked first, to determine if attestation should proceed to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules**: A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order they are defined and are also optional. These rules can be used to add to the outgoing claims set and the response token; they cannot be used to fail attestation.
+
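+For illustration only, a minimal version 1.0 policy that exercises all three segments might look like the following sketch. The deny and issue rules are placeholders: the deny rule shows how authorizationrules can fail attestation, and the issue rule shows how issuancerules add a claim to the result.
+
+```
+version=1.0;
+
+authorizationrules {
+// Placeholder: fail attestation outright if the AIK certificate was not validated
+[type=="aikValidated", value==false] => deny();
+// Otherwise allow evaluation to continue to the issuancerules
+ => permit();
+};
+
+issuancerules
+{
+// Placeholder: add a claim to the attestation result
+[type=="secureBootEnabled", value==true] => issue(type="SecureBootAttested", value=true);
+};
+```
+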
+The following claims are supported by policy version 1.0 as part of the incoming claim set.
+
+### TPM attestation
+
+Claims to be used by policy authors to define authorization rules in a TPM attestation policy:
+
+- **aikValidated**: Boolean value containing information if the Attestation Identity Key (AIK) cert has been validated or not
+- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))
+- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version
+- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled
+- **iommuEnabled**: Boolean value to indicate if the input-output memory management unit (IOMMU) is enabled
+- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled
+- **notSafeMode**: Boolean value to indicate if Windows is not running in safe mode
+- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode
+- **vbsEnabled**: Boolean value indicating if VBS is enabled
+- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
+
+### VBS attestation
+
+In addition to the TPM attestation policy claims, the following claims can be used by policy authors to define authorization rules in a VBS attestation policy.
+
+- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author ID. The author identifier of the primary module for the enclave
+- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave image ID. The image identifier of the primary module for the enclave
+- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave owner ID. The identifier of the owner for the enclave
+- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave family ID. The family identifier of the primary module for the enclave
+- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
+- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
+- **enclaveFlags**: Integer value containing flags that describe the runtime policy for the enclave
+
+## Sample policies for various attestation types
+
+Sample policy for TPM:
+
+```
+version=1.0;
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+[type=="aikValidated", value==true]&&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
+};
+```
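+
+Only a TPM sample is shown above. As a sketch for VBS attestation (for illustration only; the enclave identity value is a placeholder you would replace with your expected Base64Url encoded author ID):
+
+```
+version=1.0;
+
+authorizationrules {
+[type=="vbsReportPresent", value==true] &&
+[type=="enclaveAuthorId", value=="<expected-enclave-author-id>"] => permit();
+};
+
+issuancerules
+{
+[type=="vbsEnabled", value==true] => issue(type="VbsPlatformAttested", value=true);
+};
+```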
attestation Policy Version 1 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-1.md
+
+ Title: Azure Attestation policy version 1.1
+description: policy version 1.1.
++++ Last updated : 04/05/2022++++
+# Attestation Policy Version 1.1
+
+Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
+This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy; however, the supported grammar and processing are broadly the same.
+
+## Policy version 1.1
++
+The attestation flow is as follows:
+- The platform sends the attestation evidence in the attest call to the attestation service.
+- The attestation service parses the evidence and creates a list of claims that is then used during rule evaluation. The claims are logically categorized as incoming claims sets.
+- The attestation policy uploaded by the owner of the attestation service instance is then used to evaluate and issue claims to the response.
+- During the evaluation, configuration rules can be used to indicate to the policy evaluation engine how to handle certain claims.
+
+For policy version 1.1, the policy has four segments:
+
+- **version**: The version is the version number of the grammar that is followed.
+- **configurationrules**: During policy evaluation, sometimes it may be required to control the behavior of the policy engine itself. This is where configuration rules can be used to indicate to the policy evaluation engine how to handle some claims in the evaluation.
+- **authorizationrules**: A collection of claim rules that will be checked first, to determine if attestation should proceed to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules**: A collection of claim rules that will be evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the defined order and are optional. These rules can also add claims to the outgoing claim set and the response token; however, they cannot be used to fail attestation.
+
+The following **configurationrules** are available to the policy author.
+
+| Attestation Type | ConfigurationRule Property Name | Type | Default Value | Description |
+| -- | -- | -- | -- |-- |
+| TPM, VBS | require_valid_aik_cert | Bool | true | Indicates whether a valid AIK certificate is required. Only applied when TPM data is present.|
+| TPM, VBS | required_pcr_mask | Int | 0xFFFFFF | The bitmask for PCR indices that must be included in the TPM quote. Bit 0 represents PCR 0, bit 1 represents PCR 1, and so on. |
+
+The following claims are supported as part of the incoming claim set.
+
+### TPM attestation
+
+Claims to be used by policy authors to define authorization rules in a TPM attestation policy:
+
+- **aikValidated**: Boolean value containing information if the Attestation Identity Key (AIK) cert has been validated or not
+- **aikPubHash**: String containing the base64(SHA256(AIK public key in DER format))
+- **tpmVersion**: Integer value containing the Trusted Platform Module (TPM) major version
+- **secureBootEnabled**: Boolean value to indicate if secure boot is enabled
+- **iommuEnabled**: Boolean value to indicate if the input-output memory management unit (IOMMU) is enabled
+- **bootDebuggingDisabled**: Boolean value to indicate if boot debugging is disabled
+- **notSafeMode**: Boolean value to indicate if Windows is not running in safe mode
+- **notWinPE**: Boolean value indicating if Windows is not running in WinPE mode
+- **vbsEnabled**: Boolean value indicating if VBS is enabled
+- **vbsReportPresent**: Boolean value indicating if VBS enclave report is available
+
+### VBS attestation
+
+In addition to the TPM attestation policy claims, the following claims can be used by policy authors to define authorization rules in a VBS attestation policy.
+
+- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author ID. The author identifier of the primary module for the enclave
+- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave image ID. The image identifier of the primary module for the enclave
+- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave owner ID. The identifier of the owner for the enclave
+- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave family ID. The family identifier of the primary module for the enclave
+- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave
+- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave
+- **enclaveFlags**: Integer value containing flags that describe the runtime policy for the enclave
+
+## Sample policies for various attestation types
+
+Sample policy for TPM:
+
+```
+version=1.1;
+
+configurationrules{
+=> issueproperty(type = "required_pcr_mask", value = 15);
+=> issueproperty(type = "require_valid_aik_cert", value = false);
+};
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+[type=="aikValidated", value==true]&&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
+
+[type=="aikValidated", value==false]&&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=false);
+};
+```
+The required_pcr_mask value of 15 (0x0000000F) restricts the evaluation of PCR matches to PCRs 0-3 only.
+Setting require_valid_aik_cert to false indicates that a valid AIK certificate is not a requirement; aikValidated is instead evaluated in the issuancerules to determine the PlatformAttested state.
attestation Policy Version 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md
+
+ Title: Azure Attestation policy version 1.2
+description: policy version 1.2
++++ Last updated : 04/05/2022++++
+# Attestation Policy Version 1.2
+
+Instance owners can use the Attestation policy to define what needs to be validated during the attestation flow.
+This article introduces the workings of the attestation service and the policy engine. Each attestation type has its own attestation policy; however, the supported grammar and processing are broadly the same.
+
+## Policy Version 1.2
++
+The attestation flow is as follows:
+- The platform sends the attestation evidence in the attest call to the attestation service.
+- The attestation service parses the evidence and creates a list of claims that is then used in the attestation evaluation. The evidence is also parsed and maintained in JSON format, which is used to provide a broader set of measurements to the policy writer. These claims are logically categorized as incoming claims sets.
+- The attestation policy uploaded by the owner of the attestation service instance is then used to evaluate and issue claims to the response. The policy writer can now use JmesPath based queries to search in the evidence to create their own claims and subsequent claim rules. During the evaluation, configuration rules can also be used to indicate to the policy evaluation engine how to handle certain claims.
+
+Policy version 1.2 has four segments:
+
+- **version:** The version is the version number of the grammar.
+- **configurationrules:** During policy evaluation, sometimes it may be required to control the behavior of the policy engine itself. Configuration rules can be used to indicate to the policy evaluation engine how to handle some claims in the evaluation.
+- **authorizationrules:** A collection of claim rules that will be checked first, to determine if attestation should continue to issuancerules. This section should be used to filter out calls that don't require the issuancerules to be applied. No claims can be issued from this section to the response token. These rules can be used to fail attestation.
+- **issuancerules:** A collection of claim rules that are evaluated to add information to the attestation result as defined in the policy. The claim rules apply in the order they're defined and are optional. These rules can add claims to the outgoing claim set and the response token, but they can't be used to fail attestation.
+
+The following **configurationrules** are available to the policy author.
+
+| Attestation Type | ConfigurationRule Property Name | Type | Default Value | Description |
+| -- | -- | -- | -- |-- |
+| TPM, VBS | require_valid_aik_cert | Bool | true | Indicates whether a valid AIK certificate is required. Only applied when TPM data is present.|
+| TPM, VBS | required_pcr_mask | Int | 0xFFFFFF | The bitmask for PCR indices that must be included in the TPM quote. Bit 0 represents PCR 0, bit 1 represents PCR 1, and so on. |
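+
+Configuration rules in version 1.2 use the same `issueproperty` syntax shown for version 1.1. As a sketch (values chosen for illustration only), the following restricts PCR evaluation to PCRs 0-3 and waives the AIK certificate requirement:
+
+```
+configurationrules {
+// 15 = 0x0000000F, so only bits 0-3 (PCRs 0-3) are evaluated
+=> issueproperty(type="required_pcr_mask", value=15);
+=> issueproperty(type="require_valid_aik_cert", value=false);
+};
+```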
+
+## List of claims supported as part of the incoming claims
+
+Policy Version 1.2 also introduces functions to the policy grammar. Read more about the functions [here](claim-rule-functions.md). With the introduction of JmesPath-based functions, incoming claims can be generated as needed by the attestation policy author.
+
+Some of the key rules that can be used to generate claims are listed below.
+
+|Feature |Brief Description |Policy Rule |
+|--|-|--|
+| Secure Boot |Device boots using only software that is trusted by the (OEM): Msft | `c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); \![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` |
+| Code Integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it is loaded into memory| `// Retrieve bool propertiesc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));\![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` |
+|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));\![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);Nonzero means enabled.` |
+| Early launch Antimalware | ELAM protects against loading unsigned/malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` |
+| Boot Debugging |Allows the user to connect to a boot debugger. Can be used to bypass Secure Boot and other boot protections. | `// Boot debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` |
+| Kernel Debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less VSM protected resources). | `// Kernel Debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` |
+|Data Execution Prevention Policy | Data Execution Prevention (DEP) Policy is a set of hardware and software technologies that perform additional checks on memory to help prevent malicious code from running on a system. | `// DEP Policyc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));\![type=="depPolicy"] => issue(type="depPolicy", value=0);` |
+| Test and Flight Signing | Enables the user to run test signed code. | `// Test Signing< c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING"))); c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` |
+| Virtual Security Mode (VSM/VBS) | VBS uses the Windows hypervisor to create this virtual secure mode that is used to protect vital system and operating system resources, credentials, etc. | `// VSM enabled c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` |
+| HVCI | Hyper Visor based Code integrity is a feature that validates the integrity of a system file each time it is loaded into memory.| `// HVCIc:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` |
+| IOMMU (Input Output Memory Management Unit) | Input Output Memory Management Unit (IOMMU) translates virtual to physical memory addresses for Direct Memory Access (DMA) capable device peripherals. Protects sensitive memory regions. | `// IOMMUc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` |
+| PCR Value evaluation | PCRs contain measurement(s) of components that are made during the boot. These can be used to verify the components against golden/known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` |
+| Boot Manager Version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN// Find the first EV_SEPARATOR in PCR 12, 13, Or 14c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Safe Mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe modec:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` |
+| Win PE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that is used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows Setup. | `// Win PEc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` |
+| CI Policy | Hash of Code Integrity policy that is controlling the security of the boot environment | `// CI Policyc :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`|
+| Secure Boot Configuration Policy Hash | SBCPHash is the fingerprint of the Custom Secure Boot Configuration Policy (SBCP) that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policyc:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` |
+| Boot Application SVN | The version of the Boot Manager that is running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` "); // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` |
+| Boot Revision List | Boot Revision List used to Direct the device to an enterprise honeypot, to further monitor the device's activities. | `// Boot Rev List Info c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` |
+
+## Sample policies for TPM attestation using version 1.2
+
+```
+version=1.2;
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+
+// Verify if secureboot is enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+
+c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false);
+
+// Verify if the Windows Defender ELAM driver is loaded.
+// The boolProperties claim is first derived from the boot event log so the following rule can use it.
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`")));
+[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => add(type="WindowsDefenderElamDriverLoaded", value=true);
+![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => add(type="WindowsDefenderElamDriverLoaded", value=false);
+
+[type=="WindowsDefenderElamDriverLoaded", value==true] &&
+[type=="secureBootEnabled", value==true] => issue("PlatformAttested", value=true);
+
+};
+```
attestation Tpm Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-concepts.md
+
+ Title: TPM attestation overview for Azure
+description: TPM Attestation overview
++++ Last updated : 04/05/2022++++
+# Trusted Platform Module (TPM) Attestation
+
+Devices with a TPM can rely on attestation to prove that boot integrity isn't compromised and to use measured boot to detect early boot feature states. A growing number of device types, bootloaders, and boot stack attacks require an attestation solution to evolve accordingly. The attested state of a device is determined by the attestation policy used to verify the contents of the platform evidence. This article provides an overview of TPM attestation and the capabilities supported by Microsoft Azure Attestation (MAA).
+
+## Overview
+
+TPM attestation starts from validating the TPM itself all the way up to the point where a relying party can validate the boot flow.
+
+In general, TPM attestation is based on the following pillars:
+
+### Validate TPM authenticity
+
+Validate the authenticity of the TPM itself.
+
+- Every TPM ships with a unique asymmetric key, called the Endorsement Key (EK), burned by the manufacturer. We refer to the public portion of this key as EKPub and the associated private key as EKPriv. Some TPM chips also have an EK certificate that is issued by the manufacturer for the EKPub. We refer to this cert as EKCert.
+- A CA establishes trust in the TPM either via EKPub or EKCert.
+- A device proves to the CA that the key for which the certificate is being requested is cryptographically bound to the EKPub and that the TPM owns the EKpriv.
+- The CA issues a certificate with a special issuance policy to denote that the key is now attested to be protected by a TPM.
+
+### Validate the measurements made during the boot
+
+Validate the measurements made during the boot using the Azure Attestation service.
+
+- As part of Trusted and Measured boot, every step of the boot is validated and measured into the TPM. Different events are measured for different platforms. More information about the measured boot process in Windows can be found [here](/windows/security/information-protection/secure-the-windows-10-boot-process).
+- At boot, an Attestation Identity Key is generated which is used to provide a cryptographic proof to the attestation service that the TPM in use has been issued a cert after EK validation was performed.
+- Relying parties can perform an attestation against the Azure Attestation service, which can be used to validate measurements made during the boot process.
+- A relying party can then rely on the attestation statement to gate access to resources or other actions.
+
+![Conceptual device attestation flow](./media/device-tpm-attestation-flow.png)
+
+Conceptually, TPM attestation can be visualized as above, where the relying party uses the Azure Attestation service to verify the integrity of the platform and detect any violation of promises, providing the confidence to run workloads or grant access to resources.
+
+## Protection from malicious boot attacks
+
+Mature attack techniques aim to infect the boot chain, because a boot compromise gives the attacker access to system resources while letting it hide from anti-malware software. Trusted boot acts as the first line of defense, and attestation extends that capability to relying parties. Most attackers attempt to bypass Secure Boot or load an unwanted binary during the boot process.
+
+Remote Attestation lets the relying parties verify the whole boot chain for any violation of promises. Consider the Secure Boot evaluation by the attestation service that validates the values of the secure variables measured by UEFI.
+
+Measured boot instrumentation ensures that the cryptographically bound measurements can't be changed once they are made and that only a trusted component can make the measurement. Hence, validating the secure variables is sufficient to confirm that Secure Boot is enabled.
+
+Azure Attestation additionally signs the report to ensure that the integrity of the attestation is maintained, protecting against man-in-the-middle attacks.
+
+A simple policy like the following can be used.
+
+```
+version=1.0;
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+[type=="aikValidated", value==true]&&
+[type=="secureBootEnabled", value==true] => issue(type="PlatformAttested", value=true);
+};
+
+```
+
+Sometimes it's not sufficient to verify only a single component of the boot. Verifying complementary features like Code Integrity (or HVCI) and System Guard Secure Launch also adds to the protection profile of a device. Moreover, the ability to peer into the boot and evaluate any violations is needed to gain confidence in a platform.
+
+Consider one such policy that takes advantage of policy version 1.2 to verify details about Secure Boot, HVCI, and System Guard Secure Launch, and that also verifies that an unwanted driver (malicious.sys) isn't loaded during boot.
+
+```
+version=1.2;
+
+authorizationrules {
+ => permit();
+};
++
+issuancerules
+{
+
+// Verify if secureboot is enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false);
+
+// HVCI
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 19)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY | @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));
+c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));
+![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);
+
+// System Guard Secure Launch
+
+// Validating unwanted(malicious.sys) driver is not loaded
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 13 || PcrIndex == 19 || PcrIndex == 20)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == true && (equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\malicious.sys') || equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\wd\malicious.sys'))] | @ != null")));
+![type=="MaliciousDriverLoaded", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=false);
+
+};
+```
+
+## Next steps
+
+- [Device Health Attestation on Windows and interacting with Azure Attestation](/windows/client-management/mdm/healthattestation-csp#windows-11-device-health-attestation)
+- [Learn more about the Claim Rule Grammar](claim-rule-grammar.md)
+- [Attestation policy claim rule functions](claim-rule-functions.md)
azure-app-configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview.md
Title: What is Azure App Configuration? description: Read an overview of the Azure App Configuration service. Understand why you would want to use App Configuration, and learn how you can use it.--++ Previously updated : 03/30/2022 Last updated : 04/19/2022 # What is Azure App Configuration?
App Configuration complements [Azure Key Vault](https://azure.microsoft.com/serv
## Use App Configuration
-The easiest way to add an App Configuration store to your application is through a client library provided by Microsoft. The following methods are available to connect with your application, depending on your chosen language and framework
+The easiest way to add an App Configuration store to your application is through a client library provided by Microsoft. The following methods are available to connect with your application, depending on your chosen language and framework.
-| Programming language and framework | How to connect |
-|||
-| .NET Core and ASP.NET Core | App Configuration provider for .NET Core |
-| .NET Framework and ASP.NET | App Configuration builder for .NET |
-| Java Spring | App Configuration client for Spring Cloud |
-| Others | App Configuration REST API |
+|Programming language and framework | How to connect | Quickstart |
+|--|||
+| .NET Core | App Configuration [provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) for .NET Core | .NET Core [quickstart](./quickstart-dotnet-core-app.md) |
+| ASP.NET Core | App Configuration [provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration) for .NET Core | ASP.NET Core [quickstart](./quickstart-aspnet-core-app.md) |
+| .NET Framework and ASP.NET | App Configuration [builder](https://go.microsoft.com/fwlink/?linkid=2074663) for .NET | .NET Framework [quickstart](./quickstart-dotnet-app.md) |
+| Java Spring | App Configuration [provider](https://go.microsoft.com/fwlink/?linkid=2180917) for Spring Cloud | Java Spring [quickstart](./quickstart-java-spring-app.md) |
+| JavaScript/Node.js | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103664) for JavaScript | JavaScript/Node.js [quickstart](./quickstart-javascript.md) |
+| Python | App Configuration [client](https://go.microsoft.com/fwlink/?linkid=2103727) for Python | Python [quickstart](./quickstart-python.md) |
+| Other | App Configuration [REST API](/rest/api/appconfiguration/) | None |
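+
+As a brief, hedged illustration (the store name, resource group, key, and value below are placeholders), you can create a store and add a key-value with the Azure CLI before connecting through one of the libraries above:
+
+```azurecli-interactive
+# Create an App Configuration store
+az appconfig create --name MyAppConfigStore --resource-group MyResourceGroup --location eastus
+
+# Add a key-value that an application can then read through a provider or client library
+az appconfig kv set --name MyAppConfigStore --key "TestApp:Settings:Message" --value "Hello from App Configuration" --yes
+```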
## Next steps
-* [ASP.NET Core quickstart](./quickstart-aspnet-core-app.md)
-* [.NET Core quickstart](./quickstart-dotnet-core-app.md)
-* [.NET Framework quickstart](./quickstart-dotnet-app.md)
-* [Azure Functions quickstart](./quickstart-azure-functions-csharp.md)
-* [Java Spring quickstart](./quickstart-java-spring-app.md)
-* [ASP.NET Core feature flag quickstart](./quickstart-feature-flag-aspnet-core.md)
-* [Spring Boot feature flag quickstart](./quickstart-feature-flag-spring-boot.md)
+> [!div class="nextstepaction"]
+> [Best practices](howto-best-practices.md)
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 1/24/2022 Last updated : 5/3/2022 -- # GitOps in Azure
Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes c
> * `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure. > * Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, assure that your clusters connect with Azure within 48 hours.
+## GitOps with Private Link
+
+If you've added support for private link to an Azure Arc-enabled Kubernetes cluster, then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you will need to provision these endpoints behind your firewall or list them on your firewall so that the Flux Source controller can successfully reach them.
+
+For more information on private link scopes in Azure Arc, refer to [this document](../servers/private-link-security.md#create-a-private-link-scope).
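+
+As a hedged sketch (assuming the `k8s-configuration` Azure CLI extension is installed; the cluster, repository URL, and names below are placeholders), a Flux configuration that pulls manifests from a privately reachable Git repository can be created like this:
+
+```azurecli-interactive
+az k8s-configuration flux create \
+  --resource-group MyResourceGroup \
+  --cluster-name MyArcCluster \
+  --cluster-type connectedClusters \
+  --name cluster-config \
+  --namespace cluster-config \
+  --url https://git.internal.contoso.com/ops/manifests \
+  --branch main \
+  --kustomization name=infra path=./infra prune=true
+```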
+ ## Data residency The Azure GitOps service (Azure Kubernetes Configuration Management) stores/processes customer data. By default, customer data is replicated to the paired region. For the regions Singapore, East Asia, and Brazil South, all customer data is stored and processed in the region.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Previously updated : 04/13/2022 Last updated : 05/03/2022 description: "This article provides an overview of Azure Arc-enabled Kubernetes." keywords: "Kubernetes, Arc, Azure, containers"
keywords: "Kubernetes, Arc, Azure, containers"
Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running in your on-premises data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc. When you connect a Kubernetes cluster to Azure Arc, it will:+
-Azure Arc-enabled Kubernetes supports industry-standard SSL to secure data in transit. For the connected clusters, data at rest is stored encrypted in an Azure Cosmos DB database to ensure confidentiality.
+Azure Arc-enabled Kubernetes supports industry-standard SSL to secure data in transit. For the connected clusters, cluster extensions, and custom locations, data at rest is stored encrypted in an Azure Cosmos DB database to ensure confidentiality.
-Azure Arc-enabled Kubernetes supports the following scenarios for connected clusters:
+Azure Arc-enabled Kubernetes supports the following scenarios for connected clusters:
* [Connect Kubernetes](quickstart-connect-cluster.md) running outside of Azure for inventory, grouping, and tagging.
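+
+As a brief, hedged illustration of that connect scenario (the `connectedk8s` Azure CLI extension must be installed, and the cluster and resource group names below are placeholders):
+
+```azurecli-interactive
+az connectedk8s connect --name MyCluster --resource-group MyResourceGroup
+```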
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster Previously updated : 04/07/2022 Last updated : 05/02/2022
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI](https://smi-spec.io/) APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. [Read more](https://docs.openservicemesh.io/#features) on the service mesh scenarios enabled by Open Service Mesh.
-### Support limitations for Azure Arc-enabled Open Service Mesh
+## Installation options and requirements
-- Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster.-- Support is available for Azure Arc-enabled Open Service Mesh version v1.0.0-1 and above. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.-- The following Kubernetes distributions are currently supported:
- - AKS Engine
- - AKS on HCI
- - Cluster API Azure
- - Google Kubernetes Engine
- - Canonical Kubernetes Distribution
- - Rancher Kubernetes Engine
- - OpenShift Kubernetes Distribution
- - Amazon Elastic Kubernetes Service
- - VMware Tanzu Kubernetes Grid
-- Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available with [limited support](#monitoring-application-using-azure-monitor-and-applications-insights).-
+Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure CLI, an ARM template, or a built-in Azure policy.
### Prerequisites - Ensure you have met all the common prerequisites for cluster extensions listed [here](extensions.md#prerequisites). - Use az k8s-extension CLI version >= v1.0.4
-## Basic installation
+### Current support limitations
-Arc-enabled Open Service Mesh can be deployed through Azure portal, an ARM template, a built-in Azure policy, and CLI.
+- Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster.
+- Support is available for Azure Arc-enabled Open Service Mesh version v1.0.0-1 and above. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.
+- The following Kubernetes distributions are currently supported:
+ - AKS Engine
+ - AKS on HCI
+ - Cluster API Azure
+ - Google Kubernetes Engine
+ - Canonical Kubernetes Distribution
+ - Rancher Kubernetes Engine
+ - OpenShift Kubernetes Distribution
+ - Amazon Elastic Kubernetes Service
+ - VMware Tanzu Kubernetes Grid
+- Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available [in preview with limited support](#monitoring-application-using-azure-monitor-and-applications-insights-preview).
+
+## Basic installation using Azure portal
-### Basic installation using Azure portal
To deploy using Azure portal, once you have an Arc connected cluster, go to the cluster's **Open Service Mesh** section.
-[ ![Open Service Mesh located under Settings for Arc enabled Kubernetes cluster](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg#lightbox)
+[![Open Service Mesh located under Settings for Arc enabled Kubernetes cluster](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg#lightbox)
Simply select the **Install extension** button to deploy the latest version of the extension. Alternatively, you can use the CLI experience captured below. For at-scale onboarding, read further in this article about deployment using [ARM template](#install-azure-arc-enabled-osm-using-arm-template) and using [Azure Policy](#install-azure-arc-enabled-osm-using-built-in-policy).
-### Basic installation using Azure CLI
+## Basic installation using Azure CLI
-The following steps assume that you already have a cluster with a supported Kubernetes distribution connected to Azure Arc.
-Ensure that your KUBECONFIG environment variable points to the kubeconfig of the Arc-enabled Kubernetes cluster.
+The following steps assume that you already have a cluster with a supported Kubernetes distribution connected to Azure Arc. Ensure that your KUBECONFIG environment variable points to the kubeconfig of the Arc-enabled Kubernetes cluster.
Set the environment variables:
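+As a hedged sketch of this step and the install command that follows it (placeholder values; this installs the latest default version of the extension):
+
+```azurecli-interactive
+export CLUSTER_NAME=<arc-cluster-name>
+export RESOURCE_GROUP=<resource-group-name>
+
+az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+```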
You should see output similar to the example below. It may take 3-5 minutes for
Next, [validate your installation](#validate-installation). ## Custom installations+ The following sections describe certain custom installations of Azure Arc-enabled OSM. Custom installations require setting values of OSM in a JSON file and passing them into the `k8s-extension create` CLI command as described below. ### Install OSM on an OpenShift cluster 1. Copy and save the following contents into a JSON file. If you have already created a configuration settings file, please add the following line to the existing file to preserve your previous changes.
+
```json { "osm.osm.enablePrivilegedInitContainer": "true"
values of OSM by in a JSON file and passing them into `k8s-extension create` CLI
2. [Install OSM with custom values](#setting-values-during-osm-installation). 3. Add the privileged [security context constraint](https://docs.openshift.com/container-platform/4.7/authentication/managing-security-context-constraints.html) to each service account for the applications in the mesh.+ ```azurecli-interactive oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace> ```
It may take 3-5 minutes for the actual OSM helm chart to get deployed to the clu
To ensure that the privileged init container setting is not reverted to the default, pass in the `"osm.osm.enablePrivilegedInitContainer" : "true"` configuration setting to all subsequent az k8s-extension create commands. ### Enable High Availability features on installation+ OSM's control plane components are built with High Availability and Fault Tolerance in mind. This section describes how to enable Horizontal Pod Autoscaling (HPA) and Pod Disruption Budget (PDB) during installation. Read more on the design considerations of High Availability on OSM [here](https://docs.openservicemesh.io/docs/guides/ha_scale/high_availability/). #### Horizontal Pod Autoscaling (HPA)+ HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target memory utilization (%) defined by the user. To enable HPA and set applicable values on OSM control plane pods during installation, create or append to your existing JSON settings file as below, repeating the key/value pairs for each control plane pod
append to your existing JSON settings file as below, repeating the key/value pai
Now, [install OSM with custom values](#setting-values-during-osm-installation). #### Pod Disruption Budget (PDB)+ In order to prevent disruptions during planned outages, control plane pods `osm-controller` and `osm-injector` have a PDB that ensures there is always at least 1 pod corresponding to each control plane application. To enable PDB, create or append to your existing JSON settings file as follows for each desired control plane pod (`osmController`, `injector`):+ ```json { "osm.osm.<control_plane_pod>.enablePodDisruptionBudget" : "true"
To enable PDB, create or append to your existing JSON settings file as follows f
Now, [install OSM with custom values](#setting-values-during-osm-installation). ### Install OSM with cert-manager for certificate management+ [cert-manager](https://cert-manager.io/) is a provider that can be used for issuing signed certificates to OSM without the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://docs.openservicemesh.io/docs/guides/certificates/) and [demo](https://docs.openservicemesh.io/docs/demos/cert-manager_integration/) to learn more.
also include and update the subsequent `certmanager.issuer` lines.
Now, [install OSM with custom values](#setting-values-during-osm-installation). ### Install OSM with Contour for ingress+ OSM provides multiple options to expose mesh services externally using ingress. OSM can use [Contour](https://projectcontour.io/), which works with the ingress controller installed outside the mesh and provisioned with a certificate to participate in the mesh. Refer to [OSM's ingress documentation](https://docs.openservicemesh.io/docs/guides/traffic_management/ingress/#1-using-contour-ingress-controller-and-gateway)
and [demo](https://docs.openservicemesh.io/docs/demos/ingress_contour/) to learn
> [!NOTE] > Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`. To set required values for configuring Contour during OSM installation, append the following to your JSON settings file:+ ```json { "osm.osm.osmNamespace" : "arc-osm-system",
To set required values for configuring Contour during OSM installation, append t
Now, [install OSM with custom values](#setting-values-during-osm-installation). ### Setting values during OSM installation+ Any values that need to be set during OSM installation need to be saved to a single JSON file and passed in through the Azure CLI install command. Once you have created a JSON file with applicable values as described in above custom installation sections, set the file path as an environment variable:+ ```azurecli-interactive export SETTINGS_FILE=<json-file-path> ``` Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the+ `--configuration-settings-file` flag: ```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm --configuration-settings-file $SETTINGS_FILE
After connecting your cluster to Azure Arc, create a JSON file with the followin
Set the environment variables:

```azurecli-interactive
export TEMPLATE_FILE_NAME=<template-file-path>
export DEPLOYMENT_NAME=<desired-deployment-name>
```

Run the command below to install the OSM extension using the az CLI:

```azurecli-interactive
az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME
```
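As a rough sketch only, a minimal template for deploying the extension to a connected cluster might look like the following; the resource type, API version, and property names are assumptions and should be verified against the linked deployment guidance:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.KubernetesConfiguration/extensions",
      "apiVersion": "2021-09-01",
      "name": "osm",
      "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('clusterName'))]",
      "properties": {
        "extensionType": "Microsoft.openservicemesh",
        "autoUpgradeMinorVersion": true
      }
    }
  ]
}
```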
You should see a JSON output similar to the output below:
"version": "x.y.z" } ```+ ## OSM controller configuration+ OSM deploys a MeshConfig resource `osm-mesh-config` as a part of its control plane in arc-osm-system namespace. The purpose of this MeshConfig is to provide the mesh owner/operator the ability to update some of the mesh configurations based on their needs. to view the default values, use the following command. ```azurecli-interactive kubectl describe meshconfig osm-mesh-config -n arc-osm-system ```+ The output would show the default values: ```azurecli-interactive
Outbound IP Range Exclusion List:
Outbound Port Exclusion List:
```

Refer to the [Config API reference](https://docs.openservicemesh.io/docs/api_reference/config/v1alpha1/) for more information.

Notice that `spec.traffic.enablePermissiveTrafficPolicyMode` is set to `true`. When OSM is in permissive traffic policy mode, [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.

`osm-mesh-config` can also be viewed on Azure portal by selecting **Edit configuration** in the cluster's Open Service Mesh section.
-[ ![Edit configuration button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg#lightbox)
+[![Edit configuration button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg#lightbox)
### Making changes to OSM controller configuration
The MeshConfig "osm-mesh-config" is invalid: spec.traffic.enableEgress: Invalid
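For reference, changes of the kind that produce the validation error above are applied with `kubectl patch`; the setting and value in this sketch are illustrative only:

```azurecli-interactive
kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":true}}}' --type=merge
```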
Alternatively, to edit `osm-mesh-config` in Azure portal, select **Edit configuration** in the cluster's Open Service Mesh section.
-[ ![Edit configuration button in the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg#lightbox)
+[![Edit configuration button in the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg#lightbox)
## Using Azure Arc-enabled OSM
```azurecli-interactive
osm namespace add <namespace_name>
```

Namespaces can be onboarded from Azure portal as well by selecting **+Add** in the cluster's Open Service Mesh section.
-[ ![+Add button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg#lightbox)
+[![+Add button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg#lightbox)
More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services).
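Whichever method you use, you can sanity-check that a namespace was onboarded by inspecting its labels; this sketch assumes OSM applies its usual mesh labels (for example, `openservicemesh.io/monitored-by`) to onboarded namespaces:

```azurecli-interactive
kubectl get namespace <namespace_name> --show-labels
```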
You can start with a [sample application](https://docs.openservicemesh.io/docs/g
> [!NOTE]
> If you are using sample applications, ensure that their versions match the version of the OSM extension installed on your cluster. For example, if you are using v1.0.0 of the OSM extension, use the bookstore manifest from the release-v1.0 branch of the OSM upstream repository.

### Configuring your own Jaeger, Prometheus and Grafana instances

The OSM extension does not install add-ons like [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/), [Grafana](https://grafana.com/docs/grafana/latest/installation/) and [Flagger](https://docs.flagger.app/) so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, refer to the following documentation:
-> [!NOTE]
-> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system` when making changes to `osm-mesh-config`.
- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/guides/observability/tracing/#byo-bring-your-own) - [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/guides/observability/metrics/#prometheus) - [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/guides/observability/metrics/#grafana) - [OSM Progressive Delivery with Flagger](https://docs.flagger.app/tutorials/osm-progressive-delivery)
-## Monitoring application using Azure Monitor and Applications Insights
+> [!NOTE]
+> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system` when making changes to `osm-mesh-config`.
+
+## Monitoring applications using Azure Monitor and Application Insights (preview)
+
+Both Azure Monitor and Azure Application Insights help you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Azure Arc-enabled Open Service Mesh will have deep integrations into both of these Azure services, and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics.
-Both Azure Monitor and Azure Application Insights help you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
-Azure Arc-enabled Open Service Mesh will have deep integrations into both of these Azure services, and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. Follow the steps below to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics.
+Follow the steps below to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics.
1. Follow the guidance available [here](#onboard-namespaces-to-the-service-mesh) to ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. 2. Expose the Prometheus endpoints for application namespaces.
- ```azurecli-interactive
- osm metrics enable --namespace <namespace1>
- osm metrics enable --namespace <namespace2>
- ```
+
+ ```azurecli-interactive
+ osm metrics enable --namespace <namespace1>
+ osm metrics enable --namespace <namespace2>
+ ```
3. Install the Azure Monitor extension using the guidance available [here](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
``` 5. Run the following kubectl command+ ```azurecli-interactive kubectl apply -f container-azm-ms-osmconfig.yaml ```
InsightsMetrics
```

### Navigating the OSM dashboard

1. Access your Arc connected Kubernetes cluster using this [link](https://aka.ms/azmon/osmux).
2. Go to Azure Monitor and navigate to the Reports tab to access the OSM workbook.
3. Select the time-range & namespace to scope your services.
-[ ![OSM workbook](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg#lightbox)
+[![OSM workbook](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg#lightbox)
#### Requests tab
When you use the az k8s-extension command to delete the OSM extension, the arc-o
> [!NOTE]
> Use the az k8s-extension CLI to uninstall OSM components managed by Arc. Using the OSM CLI to uninstall is not supported by Arc and can result in undesirable behavior.

## Troubleshooting

Refer to the troubleshooting guide [available here](troubleshooting.md#azure-arc-enabled-open-service-mesh).
## Frequently asked questions

### Is the extension of Azure Arc-enabled OSM zone redundant?
-Yes, all components of Azure Arc-enabled OSM are deployed on availability zones and are hence zone redundant.
+Yes, all components of Azure Arc-enabled OSM are deployed on availability zones and are hence zone redundant.
## Next steps
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Title: App settings reference for Azure Functions description: Reference documentation for the Azure Functions app settings or environment variables. Previously updated : 07/27/2021 Last updated : 04/27/2022 # App settings reference for Azure Functions
Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) p
## WEBSITE\_CONTENTSHARE
-The file path to the function app code and configuration in an event-driven scaling plan on Windows. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string that begins with the function app name. See [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+The file path to the function app code and configuration in event-driven scaling plans. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string generated by the runtime that begins with the function app name. See [Create a function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
|Key|Sample value|
|---|---|
|WEBSITE_CONTENTSHARE|`functionapp091999e2`|
-Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. When you change the setting, ensure the value is lowercased. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+This setting is used for Consumption and Premium plan apps on both Windows and Linux. It's not used for Dedicated plan apps, which aren't dynamically scaled by Functions.
-When using an Azure Resource Manager template to create a function app during deployment, don't include WEBSITE_CONTENTSHARE in the template. This slot setting is generated during deployment. To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
+Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+
+The following considerations apply when using an Azure Resource Manager (ARM) template to create a function app during deployment:
+
++ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. This is the recommended approach for an ARM template deployment.
++ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
++ Don't make `WEBSITE_CONTENTSHARE` a slot setting.
++ When you specify `WEBSITE_CONTENTSHARE`, the value must follow [this guidance for share names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#share-names).
+
+To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
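When you do need to set the share explicitly, the setting is one more entry in the site's `appSettings` array. The snippet below is a sketch; `functionAppName` is a hypothetical parameter, and the value must be lowercase and unique for each app and slot:

```json
{
  "name": "WEBSITE_CONTENTSHARE",
  "value": "[toLower(parameters('functionAppName'))]"
}
```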
## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 03/30/2022 Last updated : 04/28/2022 # Configure Azure to connect ITSM tools using Secure Webhook
-This article provides information about how to configure the Azure in order to use "Secure Webhook".
-In order to use "Secure Webhook", follow these steps:
-
-1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory)
-1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
-1. [Create a Secure Webhook action group.](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group)
-1. Configure your partner environment.
- Secure Webhook supports connections with the following ITSM tools:
- * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md)
- * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
-
+This article describes the required Azure configurations for using Secure Webhook.
## Register with Azure Active Directory Follow these steps to register the application with Azure AD:
Follow these steps to register the application with Azure AD:
## Define service principal
-The Action Group service is a first party application therefore it has permission to acquire authentication tokens from your AAD application in order to authentication with Service now.
+The Action group service is a first party application, and has permission to acquire authentication tokens from your Azure AD application in order to authenticate with ServiceNow.
As an optional step, you can define an application role in the created app's manifest, which can allow you to further restrict access so that only certain applications with that specific role can send messages. This role then has to be assigned to the Action Group service principal (requires tenant admin privileges). This step can be done through the same [PowerShell commands](../alerts/action-groups.md#secure-webhook-powershell-script).
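As an illustration of that optional step, an application role is declared in the `appRoles` section of the app manifest; the role name, description, and GUID below are placeholders rather than values required by this article:

```json
"appRoles": [
    {
        "allowedMemberTypes": [ "Application" ],
        "description": "Allows the Action Group service to post alert payloads to the webhook.",
        "displayName": "ActionGroupsSecureWebhook",
        "id": "<generate-a-new-guid>",
        "isEnabled": true,
        "value": "ActionGroupsSecureWebhook"
    }
]
```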
To add a webhook to an action, follow these instructions for Secure Webhook:
![Screenshot that shows a Secure Webhook action.](media/itsm-connector-secure-webhook-connections-azure-configuration/secure-webhook.png) ## Configure the ITSM tool environment
+Secure Webhook supports connections with the following ITSM tools:
+ * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md)
+ * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
-The configuration contains two steps:
-
+To configure the ITSM tool environment:
1. Get the URI for the secure Webhook definition.
-2. Definitions according to the flow of the ITSM tool.
-
+2. Create definitions based on ITSM tool flow.
## Next steps * [ServiceNow Secure Webhook Configuration](./itsmc-secure-webhook-connections-servicenow.md)
azure-monitor Itsmc Connections Scsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-scsm.md
- Title: Connect SCSM with IT Service Management Connector
-description: This article provides information about how to SCSM with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
- Previously updated : 2/23/2022---
-# Connect System Center Service Manager with IT Service Management Connector
-
-This article provides information about how to configure the connection between your System Center Service Manager instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items.
-
-The following sections provide details about how to connect your System Center Service Manager product to ITSMC in Azure.
-
-## Prerequisites
-
-Ensure the following prerequisites are met:
--- ITSMC installed. More information: [Adding the IT Service Management Connector Solution](./itsmc-definition.md).-- The Service Manager Web application (Web app) is deployed and configured. Information on Web app is [here](#create-and-deploy-service-manager-web-app-service).-- Hybrid connection created and configured. More information: [Configure the hybrid Connection](#configure-the-hybrid-connection).-- Supported versions of Service -- User role: [Advanced operator](/previous-versions/system-center/service-manager-2010-sp1/ff461054(v=technet.10)).-- Today the alerts that are sent from Azure Monitor can create in System Center Service Manager Incidents.-
-> [!NOTE]
-> - ITSM Connector can only connect to cloud-based ServiceNow instances. On-premises ServiceNow instances are currently not supported.
-> - In order to use custom [templates](./itsmc-definition.md#define-a-template) as a part of the actions the parameter "ProjectionType" in the SCSM template should be mapped to "IncidentManagement!System.WorkItem.Incident.ProjectionType"
-
-## Connection procedure
-
-Use the following procedure to connect your System Center Service Manager instance to ITSMC:
-
-1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)**
-
-2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**.
-
- ![New connection](media/itsmc-connections-scsm/add-new-itsm-connection.png)
-
-3. At the top of the right pane, click **Add**.
-
-4. Provide the information as described in the following table, and click **OK** to create the connection.
-
-> [!NOTE]
-> All these parameters are mandatory.
-
-| **Field** | **Description** |
-| | |
-| **Connection Name** | Type a name for the System Center Service Manager instance that you want to connect with ITSMC. You use this name later when you configure work items in this instance/ view detailed log analytics. |
-| **Partner type** | Select **System Center Service Manager**. |
-| **Server URL** | Type the URL of the Service Manager Web app. More information about Service Manager Web app is [here](#create-and-deploy-service-manager-web-app-service).
-| **Client ID** | Type the client ID that you generated (using the automatic script) for authenticating the Web app. More information about the automated script is [here.](./itsmc-service-manager-script.md)|
-| **Client Secret** | Type the client secret, generated for this ID. |
-| **Sync Data** | Select the Service Manager work items that you want to sync through ITSMC. These work items are imported into Log Analytics. **Options:** Incidents, Change Requests.|
-| **Data Sync Scope** | Type the number of past days that you want the data from. **Maximum limit**: 120 days. |
-| **Create new configuration item in ITSM solution** | Select this option if you want to create the configuration items in the ITSM product. When selected, Log Analytics creates the affected CIs as configuration items (in case of non-existing CIs) in the supported ITSM system. **Default**: disabled. |
-
-![Service manager connection](media/itsmc-connections-scsm/service-manager-connection.png)
-
-**When successfully connected, and synced**:
--- Selected work items from Service Manager are imported into Azure **Log Analytics.** You can view the summary of these work items on the IT Service Management Connector tile.--- You can create incidents from Log Analytics alerts or from log records, or from Azure alerts in this Service Manager instance.-
-Learn more: [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
-
-## Create and deploy Service Manager web app service
-
-To connect the on-premises Service Manager with ITSMC in Azure, Microsoft has created a Service Manager Web app on the GitHub.
-
-To set up the ITSM Web app for your Service Manager, do the following:
--- **Deploy the Web app** ΓÇô Deploy the Web app, set the properties, and authenticate with Azure AD. You can deploy the web app by using the [automated script](./itsmc-service-manager-script.md) that Microsoft has provided you.-- **Configure the hybrid connection** - [Configure this connection](#configure-the-hybrid-connection), manually.-
-### Deploy the web app
-Use the automated [script](./itsmc-service-manager-script.md) to deploy the Web app, set the properties, and authenticate with Azure AD.
-
-Run the script by providing the following required details:
--- Azure subscription details-- Resource group name-- Location-- Service Manager server details (server name, domain, user name, and password)-- Site name prefix for your Web app-- ServiceBus Namespace.-
-The script creates the Web app using the name that you specified (along with few additional strings to make it unique). It generates the **Web app URL**, **client ID** and **client secret**.
-
-Save the values, you use them when you create a connection with ITSMC.
-
-**Check the Web app installation**
-
-1. Go to **Azure portal** > **Resources**.
-2. Select the Web app, click **Settings** > **Application Settings**.
-3. Confirm the information about the Service Manager instance that you provided at the time of deploying the app through the script.
-
-## Configure the hybrid connection
-
-Use the following procedure to configure the hybrid connection that connects the Service Manager instance with ITSMC in Azure.
-
-1. Find the Service Manager Web app, under **Azure Resources**.
-2. Click **Settings** > **Networking**.
-3. Under **Hybrid Connections**, click **Configure your hybrid connection endpoints**.
-
- ![Hybrid connection networking](media/itsmc-connections-scsm/itsmc-hybrid-connection-networking-and-end-points.png)
-4. In the **Hybrid Connections** blade, click **Add hybrid connection**.
-
- ![Hybrid connection add](media/itsmc-connections-scsm/itsmc-new-hybrid-connection-add.png)
-
-5. In the **Add Hybrid Connections** blade, click **Create new hybrid Connection**.
-
- ![New Hybrid connection](media/itsmc-connections-scsm/itsmc-create-new-hybrid-connection.png)
-
-6. Type the following values:
-
- - **EndPoint Name**: Specify a name for the new Hybrid connection.
- - **EndPoint Host**: FQDN of the Service Manager management server.
- - **EndPoint Port**: Type 5724
- - **Servicebus namespace**: Use an existing servicebus namespace or create a new one.
- - **Location**: select the location.
- - **Name**: Specify a name to the servicebus if you are creating it.
-
- ![Hybrid connection values](media/itsmc-connections-scsm/itsmc-new-hybrid-connection-values.png)
-6. Click **OK** to close the **Create hybrid connection** blade and start creating the hybrid connection.
-
- Once the Hybrid connection is created, it is displayed under the blade.
-
-7. After the hybrid connection is created, select the connection and click **Add selected hybrid connection**.
-
- ![New hybrid connection](media/itsmc-connections-scsm/itsmc-new-hybrid-connection-added.png)
-
-### Configure the listener setup
-
-Use the following procedure to configure the listener setup for the hybrid connection.
-
-1. In the **Hybrid Connections** blade, click **Download the Connection Manager** and install it on the machine where System Center Service Manager instance is running.
-
- Once the installation is complete, **Hybrid Connection Manager UI** option is available under **Start** menu.
-
-2. Click **Hybrid Connection Manager UI** , you will be prompted for your Azure credentials.
-
-3. Login with your Azure credentials and select your subscription where the Hybrid connection was created.
-
-4. Click **Save**.
-
-Your hybrid connection is successfully connected.
-
-![successful hybrid connection](media/itsmc-connections-scsm/itsmc-hybrid-connection-listener-set-up-successful.png)
-> [!NOTE]
->
-> After the hybrid connection is created, verify and test the connection by visiting the deployed Service Manager Web app. Ensure the connection is successful before you try to connect to ITSMC in Azure.
-
-The following sample image shows the details of a successful connection:
-
-![Hybrid connection test](media/itsmc-connections-scsm/itsmc-hybrid-connection-test.png)
-
-## Next steps
-
-* [ITSM Connector Overview](itsmc-overview.md)
-* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
As a part of setting up OAuth, we recommend:
## Install the user app and create the user role
-Use the following procedure to install the Service Now user app and create the integration user role for it. You'll use these credentials to make the ServiceNow connection in Azure.
+Use the following procedure to install the ServiceNow user app and create the integration user role for it. You'll use these credentials to make the ServiceNow connection in Azure.
> [!NOTE] > ITSMC supports only the official user app for Microsoft Log Analytics integration that's downloaded from the ServiceNow store. ITSMC does not support any code ingestion on the ServiceNow side or any application that's not part of the official ServiceNow solution.
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md
Title: IT Service Management Connector in Azure Monitor
-description: This article provides information about how to connect your ITSM products/services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
+description: This article provides information about how to connect your ITSM products or services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items.
Last updated 2/23/2022 # Connect ITSM products/services with IT Service Management Connector
-This article provides information about how to configure the connection between your ITSM product/service and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. For more information about ITSMC, see [Overview](./itsmc-overview.md).
+This article provides information about how to configure the connection between your ITSM product or service and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. For more information about ITSMC, see [Overview](./itsmc-overview.md).
To set up your ITSM environment: 1. Connect to your ITSM.
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
When the ITSMC resource is deployed, a notification appears at the upper-right c
## Create an ITSM connection
-After you've installed ITSMC, you must prep your ITSM tool to allow the connection from ITSMC. Based on the ITSM product that you're connecting to, select one of the following links for instructions:
--- [ServiceNow](./itsmc-connections-servicenow.md)-- [System Center Service Manager](./itsmc-connections-scsm.md)
+After you've installed ITSMC, follow these steps to create the ITSM connection.
After you've prepped your ITSM tool, complete these steps to create a connection:
+1. [Configure ServiceNow](./itsmc-connections-servicenow.md) to allow the connection from ITSMC.
1. In **All resources**, look for **ServiceDesk(*your workspace name*)**: ![Screenshot that shows recent resources in the Azure portal.](media/itsmc-definition/create-new-connection-from-resource.png)
Action groups provide a modular and reusable way to trigger actions for your Azu
### Define a template
-Certain work item types can use templates that you define in the ITSM tool. Using templates, you can define fields that will be automatically populated using fixed values for an action group. You can define which template you want to use as a part of the definition of an action group. Find information about how to create templates [here](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
+Certain work item types can use templates that you define in ServiceNow. Using templates, you can define fields that will be automatically populated using constant values defined in ServiceNow (not values from the payload). The templates are synced with Azure, and you can define which template you want to use as a part of the definition of an action group. Find information about how to create templates [here](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html).
To create an action group:
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management integration description: This article provides an overview of the ways you can integrate with an IT Service Management product. Previously updated : 3/30/2022 Last updated : 04/28/2022 - # IT Service Management (ITSM) Integration :::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::
Azure Monitor supports connections with the following ITSM tools:
- ServiceNow ITSM or ITOM - BMC
- >[!NOTE]
- > As of March 1, 2022, System Center ITSM integrations with Azure alerts is no longer enabled for new customers. New System Center ITSM Connections are not supported.
- > Existing ITSM connections are supported.
For information about legal terms and the privacy policy, see [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9). ## ITSM Integration Workflow Depending on your integration, start connecting to your ITSM with these steps:-- For Service Now ITOM events and BMC Helix use the Secure webhook action:
- 1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory)
- 1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
- 1. [Create a Secure Webhook action group.](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group)
+
+- For Service Now ITOM events or BMC Helix use the Secure webhook action:
+
+ 1. [Register your app with Azure AD](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory).
+ 1. [Define a Service principal](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal).
+ 1. [Create a Secure Webhook action group](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group).
1. Configure your partner environment. Secure Export supports connections with the following ITSM tools: - [ServiceNow ITOM](./itsmc-secure-webhook-connections-servicenow.md)
- - [BMC Helix](./itsmc-secure-webhook-connections-bmc.md).
+ - [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
-- For Service Now ITSM, use the ITSM action:
+- For ServiceNow ITSM, use the ITSM action:
- 1. Connect to your ITSM.
- - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
- - For SCSM, see [the System Center Service Manager connection instructions](./itsmc-connections-scsm.md).
- 1. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, we recommend listing the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.)
- 1. [Configure your Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector)
- 1. [Configure the Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
- 1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#define-a-template)
+ 1. Connect to your ITSM. See [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
+ 1. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, we recommend listing the whole public IP range of Azure region where their LogAnalytics workspace belongs. [See details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.
+ 1. [Configure your Azure ITSM Solution and create the ITSM connection](./itsmc-definition.md#add-it-service-management-connector).
+ 1. [Configure Action Group to leverage ITSM connector](./itsmc-definition.md#define-a-template).
## Next steps-
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
+- [ServiceNow connection instructions](./itsmc-connections-servicenow.md).
azure-monitor Itsmc Service Manager Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-service-manager-script.md
- Title: Create web app for Service Management Connector
-description: Create a Service Manager Web app using an automated script to connect with IT Service Management Connector in Azure, and centrally monitor and manage the ITSM work items.
- Previously updated : 2/23/2022----
-# Create Service Manager Web app using the automated script
-
-Use the following script to create the Web app for your Service Manager instance. More information about Service Manager connection is here: [Service Manager Web app](./itsmc-connections-scsm.md)
-
-Run the script by providing the following required details:
--- Azure subscription details-- Resource group name-- Location-- Service Manager server details (server name, domain, username, and password)-- Site name prefix for your Web app-- ServiceBus Namespace.-
-The script will create the Web app using the name that you specified (along with few additional strings to make it unique). It generates the **Web app URL**, **client ID**, and **client secret**.
-
-Save these values, you will need these values when you create a connection with IT Service Management Connector.
--
-## Prerequisites
-
- Windows Management Framework 5.0 or above.
- Windows 10 has 5.1 by default. You can download the framework from [here](https://www.microsoft.com/download/details.aspx?id=50395):
-
-Use the following script:
-
-```powershell
-####################################
-# User Configuration Section Begins
-####################################
-
-# Subscription name in Azure account. Check in Azure Portal.
-$azureSubscriptionName = ""
-
-# Resource group name for resource deployment. Could be an existing resource group or a new one to be created.
-$resourceGroupName = ""
-
-# Location for existing resource group or new resource group deployment
-################################### List of available regions #################################################
-# centralus,eastasia,southeastasia,eastus,eastus2,westus,westus2,northcentralus,southcentralus,westcentralus,
-# northeurope,westeurope,japaneast,japanwest,brazilsouth,australiasoutheast,australiaeast,westindia,southindia,
-# centralindia,canadacentral,canadaeast,uksouth,ukwest.
-###############################################################################################################
-$location = ""
-
-# Service Manager Authentication Settings
-$serverName = ""
-$domain = ""
-$username = ""
-$password = ""
--
-# Azure site Name Prefix. Default is "smoc". It can be configured to any desired value.
-$siteNamePrefix = ""
-
-# Service Bus namespace. Please provide an already existing service bus namespace.
-# If it doesn't exist, a new one will be created with name $siteName + "sbn" which can also be later reused for any other hybrid connections.
-$serviceName = ""
-
-##################################
-# User Configuration Section Ends
-##################################
-
-################
-# Installations
-################
-
-# Allowing the execution of the script for current user.
-Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser -Force
-
-Write-Host "Checking for required modules..."
-if(!(Get-PackageProvider -Name NuGet))
-{
- Write-Host "Installing NuGet Package Provider..."
- Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Scope CurrentUser -Force -WarningAction SilentlyContinue
-}
-$module = Get-Module -ListAvailable -Name Az
-
-if(!$module -or ($module[0].Version.Major -lt 1))
-{
- Write-Host "Installing Az Module..."
- try
- {
- # In case of Win 10 Anniversary update
- Install-Module Az -MinimumVersion 1.0 -Scope CurrentUser -Force -WarningAction SilentlyContinue -AllowClobber
- }
- catch
- {
- Install-Module Az -MinimumVersion 1.0 -Scope CurrentUser -Force -WarningAction SilentlyContinue
- }
-
-}
-
-Write-Host "Requirement check complete!!"
-
-#############
-# Parameters
-#############
-
-$errorActionPreference = "Stop"
-
-$templateUri = "https://raw.githubusercontent.com/Azure/SMOMSConnector/master/azuredeploy.json"
-
-if(!$siteNamePrefix)
-{
- $siteNamePrefix = "smoc"
-}
-
-Connect-AzAccount
-
-$context = Set-AzContext -SubscriptionName $azureSubscriptionName -WarningAction SilentlyContinue
-
-$resourceProvider = Get-AzResourceProvider -ProviderNamespace Microsoft.Web
-
-if(!$resourceProvider -or $resourceProvider[0].RegistrationState -ne "Registered")
-{
- try
- {
- Write-Host "Registering Microsoft.Web Resource Provider"
- Register-AzResourceProvider -ProviderNamespace Microsoft.Web
- }
- catch
- {
- Write-Host "Failed to Register Microsoft.Web Resource Provider. Please register it in Azure Portal."
- exit
- }
-}
-do
-{
- $rand = Get-Random -Maximum 32000
-
- $siteName = $siteNamePrefix + $rand
-
- $resource = Get-AzResource -Name $siteName -ResourceType Microsoft.Web/sites
-
-}while($resource)
-
-$azureSite = "https://"+$siteName+".azurewebsites.net"
-
-##############
-# MAIN Begins
-##############
-
-# Web App Deployment
-####################
-
-$tenant = $context.Tenant.Id
-if(!$tenant)
-{
- #For backward compatibility with older versions
- $tenant = $context.Tenant.TenantId
-}
-try
-{
- Get-AzResourceGroup -Name $resourceGroupName
-}
-catch
-{
- New-AzResourceGroup -Location $location -Name $resourceGroupName
-}
-
-Write-Output "Web App Deployment in progress...."
-
-New-AzResourceGroupDeployment -TemplateUri $templateUri -siteName $siteName -ResourceGroupName $resourceGroupName
-
-Write-Output "Web App Deployed successfully!!"
-
-# AAD Authentication
-####################
-
-Add-Type -AssemblyName System.Web
-
-$secret = [System.Web.Security.Membership]::GeneratePassword(30,2).ToString()
-$clientSecret = $secret | ConvertTo-SecureString -AsPlainText -Force
--
-try
-{
-
- Write-Host "Creating AzureAD application..."
-
- $adApp = New-AzADApplication -DisplayName $siteName -HomePage $azureSite -IdentifierUris $azureSite -Password $clientSecret
-
- Write-Host "AzureAD application created successfully!!"
-}
-catch
-{
- # Delete the deployed web app if Azure AD application fails
- Remove-AzResource -ResourceGroupName $resourceGroupName -ResourceName $siteName -ResourceType Microsoft.Web/sites -Force
-
- Write-Host "Failure occurred in Azure AD application....Try again!!"
-
- exit
-
-}
-
-$clientId = $adApp.ApplicationId
-
-$servicePrincipal = New-AzADServicePrincipal -ApplicationId $clientId -Role Contributor
-
-# Web App Configuration
-#######################
-try
-{
-
- Write-Host "Configuring deployed Web-App..."
- $webApp = Get-AzWebAppSlot -ResourceGroupName $resourceGroupName -Name $siteName -Slot production -WarningAction SilentlyContinue
-
- $appSettingList = $webApp.SiteConfig.AppSettings
-
- $appSettings = @{}
- ForEach ($item in $appSettingList) {
- $appSettings[$item.Name] = $item.Value
- }
-
- $appSettings['ida:Tenant'] = $tenant
- $appSettings['ida:Audience'] = $azureSite
- $appSettings['ida:ServerName'] = $serverName
- $appSettings['ida:Domain'] = $domain
- $appSettings['ida:Username'] = $userName
- $appSettings['ida:WhitelistedClientId'] = $clientId
-
- $connStrings = @{}
- $kvp = @{"Type"="Custom"; "Value"=$password}
- $connStrings['ida:Password'] = $kvp
-
- Set-AzWebAppSlot -ResourceGroupName $resourceGroupName -Name $siteName -AppSettings $appSettings -ConnectionStrings $connStrings -Slot production -WarningAction SilentlyContinue
-
-}
-catch
-{
- Write-Host "Web App configuration failed. Please ensure all values are provided in Service Manager Authentication Settings in User Configuration Section"
-
- # Delete the AzureRm AD Application if configuration fails
- Remove-AzADApplication -ObjectId $adApp.ObjectId -Force
-
- # Delete the deployed web app if configuration fails
- Remove-AzResource -ResourceGroupName $resourceGroupName -ResourceName $siteName -ResourceType Microsoft.Web/sites -Force
-
- exit
-}
--
-# Relay Namespace
-###################
-
-if(!$serviceName)
-{
- $serviceName = $siteName + "sbn"
-}
-
-$resourceProvider = Get-AzResourceProvider -ProviderNamespace Microsoft.Relay
-
-if(!$resourceProvider -or $resourceProvider[0].RegistrationState -ne "Registered")
-{
- try
- {
- Write-Host "Registering Microsoft.Relay Resource Provider"
- Register-AzResourceProvider -ProviderNamespace Microsoft.Relay
- }
- catch
- {
- Write-Host "Failed to Register Microsoft.Relay Resource Provider. Please register it in Azure Portal."
- }
-}
-
-$resource = Get-AzResource -Name $serviceName -ResourceType Microsoft.Relay/namespaces
-
-if(!$resource)
-{
- $serviceName = $siteName + "sbn"
- $properties = @{
- "sku" = @{
- "name"= "Standard"
- "tier"= "Standard"
- "capacity"= 1
- }
- }
- try
- {
- Write-Host "Creating Service Bus namespace..."
- New-AzResource -ResourceName $serviceName -Location $location -PropertyObject $properties -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Relay/namespaces -ApiVersion 2016-07-01 -Force
- }
- catch
- {
- $err = $TRUE
- Write-Host "Creation of Service Bus Namespace failed...Please create it manually from Azure Portal.`n"
- }
-
-}
-
-Write-Host "Note: Please Configure Hybrid connection in the Networking section of the web application in Azure Portal to link to the on-premises system.`n"
-Write-Host "App Details"
-Write-Host "============"
-Write-Host "App Name:" $siteName
-Write-Host "Client Id:" $clientId
-Write-Host "Client Secret:" $secret
-Write-Host "URI:" $azureSite
-if(!$err)
-{
- Write-Host "ServiceBus Namespace:" $serviceName
-}
-```
-
-## Troubleshoot Service Manager web app deployment
--- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.-- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.-- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.-
-## Next steps
-[Configure the Hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
azure-monitor Itsmc Synced Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-synced-data.md
# Data synced from your ITSM product
-Incidents and change requests are synced from your ITSM tool to your Log Analytics workspace, based on the connection's configuration (using "Sync Data" field):
- >[!NOTE]
- > As of March 1, 2022, System Center ITSM integrations with Azure alerts is no longer enabled for new customers. New System Center ITSM Connections are not supported.
- > Existing ITSM connections are supported.
+Incidents and change requests are synced from [ServiceNow](./itsmc-connections-servicenow.md) to your Log Analytics workspace, based on the connection's configuration using the "Sync Data" field:
## Synced data This section shows some examples of data gathered by ITSMC.
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
The following sections identify common symptoms, possible causes, and resolution
### Work items are not created
-**Cause**: There can be several reasons for this symptom:
+**Cause**: There can be several reasons for this:
* Code was modified on the ServiceNow side. * Permissions are misconfigured.
The following sections identify common symptoms, possible causes, and resolution
### Sync connection
-**Cause**: There can be several reasons for this symptom:
+**Cause**: There can be several reasons for this:
* Templates are not shown as a part of the action definition dropdown and an error message is shown: "Can't retrieve the template configuration, see the connector logs for more information." * Values are not shown in the dropdowns of the default fields as a part of the action definition and an error message is shown: "No values found for the following fields: \<field names\>."
The following sections identify common symptoms, possible causes, and resolution
* [Sync the connector](itsmc-resync-servicenow.md). * Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md)
-### Configuration Item is showing blank in incidents received from Service Now
-**Cause**: There can be several reasons for this symptom:
-* Only Log alerts supports configurtaion item, the alert can be from other type
-* The search results must have column Computer or Resource in order to have the configuration item
-* The values in the configurtaion item fied does not match to an entry in the CMDB
+### Configuration Item is blank in incidents received from ServiceNow
+**Cause**: There can be several reasons for this:
+* Only log alerts support the configuration item, but the alert is of another type
+* To contain the configuration item, the search results must include the **Computer** or **Resource** column
+* The values in the configuration item field do not match an entry in the CMDB
**Resolution**:
* Check whether it is a log alert. If it is not, the configuration item is not supported.
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
This article walks you through enabling Application Insights monitoring using th
## Enable Application Insights
-There are two ways to enable application monitoring for Azure virtual machines and Azure virtual machine scale sets hosted applications:
-
-### Auto-instrumentation via Application Insights Agent
-
-* This method is the easiest to enable, and no advanced configuration is required. It is often referred to as "runtime" monitoring.
-
-* For Azure virtual machines and Azure virtual machine scale sets we recommend at a minimum enabling this level of monitoring. After that, based on your specific scenario, you can evaluate whether manual instrumentation is needed.
+Auto-instrumentation is easy to enable with no advanced configuration required.
> [!NOTE]
-> Auto-instrumentation is currently only available for ASP.NET, ASP.NET Core IIS-hosted applications and Java. Use an SDK to instrument Node.js and Python applications hosted on an Azure virtual machines and virtual machine scale sets.
--
-#### ASP.NET / ASP.NET Core
+> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets.
- * The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
-
-#### Java
- * For Java, **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), with a multitude of [additional configurations](./java-standalone-config.md)
+### [.NET](#tab/net)
-### Code-based via SDK
-
-#### ASP.NET / ASP.NET Core
- * For .NET apps, this approach is much more customizable, but it requires [adding a dependency on the Application Insights SDK NuGet packages](./asp-net.md). This method, also means you have to manage the updates to the latest version of the packages yourself.
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
- * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+### [Java](#tab/Java)
- > [!NOTE]
- > For .NET apps only - if both agent based monitoring and manual SDK based instrumentation is detected only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this check out the [troubleshooting section](#troubleshooting) below.
+For Java, the **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies, are [auto-collected](./java-in-process-agent.md#autocollected-requests), with a multitude of [other configurations](./java-standalone-config.md).
-#### Java
-
-If you need additional custom telemetry for Java applications, see what [is available](./java-in-process-agent.md#custom-telemetry), add [custom dimensions](./java-standalone-config.md#custom-dimensions), or use [telemetry processors](./java-standalone-telemetry-processors.md).
-
-#### Node.js
+### [Node.js](#tab/nodejs)
To instrument your Node.js application, use the [SDK](./nodejs.md).
-#### Python
+### [Python](#tab/python)
To monitor Python apps, use the [SDK](./opencensus-python.md). ++ ## Manage Application Insights Agent for .NET applications on Azure virtual machines using PowerShell > [!NOTE]
Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<my
# Location : southcentralus # ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions/ApplicationMonitoring ```
-You may also view installed extensions in the [Azure virtual machine blade](../../virtual-machines/extensions/overview.md) in the Portal.
+You may also view installed extensions in the [Azure virtual machine section](../../virtual-machines/extensions/overview.md) in the Portal.
> [!NOTE] > Verify installation by clicking on Live Metrics Stream within the Application Insights Resource associated with the connection string you used to deploy the Application Insights Agent Extension. If you are sending data from multiple Virtual Machines, select the target Azure virtual machines under Server Name. It may take up to a minute for data to begin flowing.
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
### 2.8.44 -- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1-redfield.
+- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1 - red field.
- Enabled SQL query collection.-- Enabled support for Azure Active Directory (AAD) authentication.
+- Enabled support for Azure Active Directory authentication.
### 2.8.42 -- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1-redfield.
+- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1 - red field.
### 2.8.41
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
def function_1(parent_tracer=None):
## Telemetry correlation in .NET
+Correlation is handled by default when onboarding an app. No special actions are required.
+
+* [Application Insights for ASP.NET Core applications](asp-net-core.md#application-insights-for-aspnet-core-applications)
+* [Configure Application Insights for your ASP.NET website](asp-net.md#configure-application-insights-for-your-aspnet-website)
+* [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md#application-insights-for-worker-service-applications-non-http-applications)
+ .NET runtime supports distributed tracing with the help of [Activity](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/ActivityUserGuide.md) and [DiagnosticSource](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md). The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to collect and correlate telemetry.
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-azure-functions.md
+
+ Title: Profile Azure Functions app with Application Insights Profiler
+description: Enable Application Insights Profiler for Azure Functions app.
+++
+ms.contributor: charles.weininger
+ Last updated : 05/03/2022++
+# Profile live Azure Functions app with Application Insights
+
+In this article, you'll use the Azure portal to:
+- View the current app settings for your Functions app.
+- Add two new app settings to enable Profiler on the Functions app.
+- Navigate to the Profiler for your Functions app to view data.
+
+> [!NOTE]
+> You can enable the Application Insights Profiler for Azure Functions apps on the **App Service** plan.
+
+## Prerequisites
+
+- [An Azure Functions app](../../azure-functions/functions-create-function-app-portal.md). Verify your Functions app is on the **App Service** plan.
+
+ :::image type="content" source="./media/profiler-azure-functions/choose-plan.png" alt-text="Screenshot of where to select App Service plan from drop-down in Functions app creation.":::
++
+- Linked to [an Application Insights resource](./create-new-resource.md). Make note of the instrumentation key.
+
+## App settings for enabling Profiler
+
+|App Setting | Value |
+||-|
+|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
+|DiagnosticServices_EXTENSION_VERSION | ~3 |
+
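The portal steps that follow are the documented path. As an alternative sketch, the same two settings can also be added with the Azure CLI, assuming you have the CLI installed and sufficient permissions on the app:

```azurecli-interactive
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
  --settings "APPINSIGHTS_PROFILERFEATURE_VERSION=1.0.0" "DiagnosticServices_EXTENSION_VERSION=~3"
```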
+## Add app settings to your Azure Functions app
+
+From your Functions app overview page in the Azure portal:
+
+1. Under **Settings**, select **Configuration**.
+
+ :::image type="content" source="./media/profiler-azure-functions/configuration-menu.png" alt-text="Screenshot of selecting Configuration from under the Settings section of the left side menu.":::
+
+1. In the **Application settings** tab, verify the `APPINSIGHTS_INSTRUMENTATIONKEY` setting is included in the settings list.
+
+ :::image type="content" source="./media/profiler-azure-functions/appinsights-key.png" alt-text="Screenshot showing the App Insights Instrumentation Key setting in the list.":::
+
+1. Select **New application setting**.
+
+ :::image type="content" source="./media/profiler-azure-functions/new-setting-button.png" alt-text="Screenshot outlining the new application setting button.":::
+
+1. Copy the **App Setting** and its **Value** from the [table above](#app-settings-for-enabling-profiler) and paste into the corresponding fields.
+
+ :::image type="content" source="./media/profiler-azure-functions/app-setting-1.png" alt-text="Screenshot adding the app insights profiler feature version setting.":::
+
+ :::image type="content" source="./media/profiler-azure-functions/app-setting-2.png" alt-text="Screenshot adding the diagnostic services extension version setting.":::
+
+ Leave the **Deployment slot setting** blank for now.
+
+1. Click **OK**.
+
+1. Click **Save** in the top menu, then **Continue**.
+
+ :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration blade.":::
+
+ :::image type="content" source="./media/profiler-azure-functions/continue-button.png" alt-text="Screenshot outlining the continue button in the dialog after saving.":::
+
+The app settings now show up in the table:
+
+ :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration blade.":::
++
+## View the Profiler data for your Azure Functions app
+
+1. Under **Settings**, select **Application Insights (preview)** from the left menu.
+
+ :::image type="content" source="./media/profiler-azure-functions/app-insights-menu.png" alt-text="Screenshot showing application insights from the left menu of the Functions app.":::
+
+1. Select **View Application Insights data**.
+
+ :::image type="content" source="./media/profiler-azure-functions/view-app-insights-data.png" alt-text="Screenshot showing the button for viewing application insights data for the Functions app.":::
+
+1. On the App Insights page for your Functions app, select **Performance** from the left menu.
+
+ :::image type="content" source="./media/profiler-azure-functions/performance-menu.png" alt-text="Screenshot showing the performance link in the left menu of the app insights blade of the functions app.":::
+
+1. Select **Profiler** from the top menu of the Performance blade.
+
+ :::image type="content" source="./media/profiler-azure-functions/profiler-function-app.png" alt-text="Screenshot showing link to profiler for functions app.":::
++
+## Next steps
+
+- Set these values using [Azure Resource Manager Templates](./azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager), [Azure PowerShell](/powershell/module/az.websites/set-azwebapp), or the [Azure CLI](/cli/azure/webapp/config/appsettings).
+- Learn more about [Profiler settings](profiler-settings.md).
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
ms.contributor: cawa Previously updated : 03/21/2022 Last updated : 04/18/2022
Azure Monitor's Change Analysis is:
From your resource's overview page in the Azure portal, select **Diagnose and solve problems** from the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
-### Diagnose and solve problems tool for Web App
-
-> [!NOTE]
-> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
-
-1. Select **Availability and Performance**.
-
- :::image type="content" source="./media/change-analysis/availability-and-performance.png" alt-text="Screenshot of the Availability and Performance troubleshooting options":::
-
-2. Select **Application Changes (Preview)**. The feature is also available in **Application Crashes**.
-
- :::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Crashes button":::
-
- The link leads to Azure Monitor's Change Analysis UI scoped to the web app.
-
-3. Enable web app in-guest change tracking if you haven't already.
-
- :::image type="content" source="./media/change-analysis/enable-changeanalysis.png" alt-text="Screenshot of the Application Crashes options":::
-
-4. Toggle on **Change Analysis** status and select **Save**.
-
- :::image type="content" source="./media/change-analysis/change-analysis-on.png" alt-text="Screenshot of the Enable Change Analysis user interface":::
-
- - The tool displays all web apps under an App Service plan, which you can toggle on and off individually.
-
- :::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded":::
--
-You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
-- The change types over time.-- Details on those changes. -
-By default, the graph displays changes from within the past 24 hours help with immediate problems.
-- ### Diagnose and solve problems tool for Virtual Machines
-Change Analysis displays as an insight card in a your virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
+Change Analysis displays as an insight card in your virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
1. Within your virtual machine, select **Diagnose and solve problems** from the left menu. 1. Go to **Troubleshooting tools**.
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa Previously updated : 03/21/2022 Last updated : 04/18/2022
The Change Analysis service:
- Easily navigate through all resource changes. - Identify relevant changes in the troubleshooting or monitoring context.
+### Enable Change Analysis
+ You'll need to register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the tracked properties and proxied settings change data available. The `Microsoft.ChangeAnalysis` resource is automatically registered as you either: - Enter the Web App **Diagnose and Solve Problems** tool, or - Bring up the Change Analysis standalone tab.
-For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) section.
+### Enable Change Analysis for web app in-guest changes
+
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see the [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section.
+
+> [!NOTE]
+> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+1. Select **Availability and Performance**.
+
+ :::image type="content" source="./media/change-analysis/availability-and-performance.png" alt-text="Screenshot of the Availability and Performance troubleshooting options":::
+
+2. Select **Application Changes (Preview)**.
+
+ :::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Changes button":::
+
+ The link leads to Azure Monitor's Change Analysis UI scoped to the web app.
+
+3. Enable web app in-guest change tracking by either:
+
+ - Selecting **Enable Now** in the banner, or
+
+ :::image type="content" source="./media/change-analysis/enable-changeanalysis.png" alt-text="Screenshot of the Application Changes options from the banner":::
+
+ - Selecting **Configure** from the top menu.
+
+ :::image type="content" source="./media/change-analysis/configure-button.png" alt-text="Screenshot of the Application Changes options from the top menu":::
+
+4. Toggle on **Change Analysis** status and select **Save**.
+
+ :::image type="content" source="./media/change-analysis/change-analysis-on.png" alt-text="Screenshot of the Enable Change Analysis user interface":::
+
+ - The tool displays all web apps under an App Service plan, which you can toggle on and off individually.
-If you don't see changes within 30 minutes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+ :::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded":::
+You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
+- The change types over time.
+- Details on those changes.
-## Cost
-Azure Monitor's Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
-- Incur any billing cost to subscriptions. -- Have any performance impact for scanning Azure Resource properties changes.
+By default, the graph displays changes from within the past 24 hours to help with immediate problems.
-## Data retention
-Change Analysis provides 14 days of data retention.
-## Enable Change Analysis at scale for Web App in-guest file and environment variable changes
+### Enable Change Analysis at scale for Web App in-guest file and environment variable changes
If your subscription includes several web apps, enabling the service at the web app level would be inefficient. Instead, run the following script to enable all web apps in your subscription.
-### Pre-requisites
+#### Pre-requisites
PowerShell Az Module. Follow instructions at [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
-### Run the following script:
+#### Run the following script:
```PowerShell # Log in to your Azure subscription
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
The following columns have been added to *AzureActivity* in the updated schema:
- Claims_d - Properties_d
-## Activity log insights (Preview)
+## Activity log insights
Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
azure-monitor Activity Logs Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-logs-insights.md
- Title: Activity logs insights
-description: View the overview of Azure Activity logs of your resources
--- Previously updated : 04/14/2022--
-#Customer intent: As an IT administrator, I want to track changes to resource groups or specific resources in a subscription and to see which administrators or services make these changes.
-
-
-# Activity logs insights (Preview)
-Activity logs insights let you view information about changes to resources and resource groups in your Azure subscription. It uses information from the [Activity log](activity-log.md) to also present data about which users or services performed particular activities in the subscription. This includes which administrators deleted, updated or created resources, and whether the activities failed or succeeded. This article explains how to enable and use Activity log insights.
-
-## Enable Activity log insights
-The only requirement to enable Activity log insights is to [configure the Activity log to export to a Log Analytics workspace](activity-log.md#send-to-log-analytics-workspace). Pre-built [workbooks](../visualize/workbooks-overview.md) curate this data, which is stored in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table in the workspace.
--
-## View Activity logs insights - Resource group / Subscription level
-
-To view Activity logs insights on a resource group or a subscription level:
-
-1. In the Azure portal, select **Monitor** > **Workbooks**.
-1. Select **Activity Logs Insights** in the **Insights** section.
-
- :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox="media/activity-log/open-activity-log-insights-workbook.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a scale level":::
-
-1. At the top of the **Activity Logs Insights** page, select:
- 1. One or more subscriptions from the **Subscriptions** dropdown.
- 1. Resources and resource groups from the **CurrentResource** dropdown.
- 1. A time range for which to view data from the **TimeRange** dropdown.
-## View Activity logs insights on any Azure resource
-
->[!Note]
-> * Currently Applications Insights resources are not supported for this workbook.
-
-To view Activity logs insights on a resource level:
-
-1. In the Azure portal, go to your resource, select **Workbooks**.
-1. Select **Activity Logs Insights** in the **Activity Logs Insights** section.
-
- :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox= "media/activity-log/activity-log-resource-level.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a resource level":::
-
-1. At the top of the **Activity Logs Insights** page, select:
-
- 1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Logs Entries** shows the count of Activity log records in each [activity log category](./activity-log-schema.md#categories).
-
- :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Azure Activity Logs by Category Value":::
-
- * **Activity Logs by Status** shows the count of Activity log records in each status.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox= "media/activity-log/activity-logs-insights-status.png" alt-text="Azure Activity Logs by Status":::
-
- * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of Activity log records for each resource and resource provider.
-
- :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox= "media/activity-log/activity-logs-insights-resource.png" alt-text="Azure Activity Logs by Resource":::
-
-## Next steps
-Learn more about:
-* [Platform logs](./platform-logs-overview.md)
-* [Activity log event schema](activity-log-schema.md)
-* [Creating a diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Event
Analyze the amount of billable data collected from a virtual machine or set of virtual machines. The **Usage** table doesn't include information about data collected from virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because this is only for analytics of data trends. > [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
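For instance, a minimal sketch of a **Usage**-based summary (assuming the standard `Quantity` (MB), `IsBillable`, and `DataType` columns) looks like this:

```Kusto
// Sketch: billable volume per data type over the last day, using the lightweight Usage table
Usage
| where TimeGenerated > ago(24h)
| where IsBillable == true
| summarize BillableVolumeMB = sum(Quantity) by DataType
| sort by BillableVolumeMB desc
```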
**Billable data volume by computer**
find where TimeGenerated > ago(24h) project _IsBillable, Computer
Analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure. > [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
**Billable data volume by resource ID**
union (AppAvailabilityResults),
If you don't have excessive data from any particular source, you may have an excessive number of agents that are sending data. > [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
**Count of agent nodes that are sending a heartbeat each day in the last month**
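A minimal sketch of what such a query could look like (an illustration, not the article's exact query), assuming the standard **Heartbeat** table:

```Kusto
// Sketch: distinct agent nodes sending a heartbeat per day over the last month
Heartbeat
| where TimeGenerated > startofday(ago(31d))
| summarize NodeCount = dcount(Computer) by bin(TimeGenerated, 1d)
| render timechart
```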
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
description: This article describes how you can query against resources from mul
Previously updated : 06/30/2021 Last updated : 04/28/2022
-# Perform log queries in Azure Monitor that span across workspaces and apps
+# Create a log query across multiple workspaces and apps in Azure Monitor
Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data.
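As an illustration, a cross-resource query references other workspaces and apps with the `workspace()` and `app()` expressions. The names below are placeholders and the referenced tables are assumed to exist in those resources; this is a sketch rather than a query from the article:

```Kusto
// Sketch: combine Heartbeat records from the current workspace and another workspace
union Heartbeat, workspace("workspace-A").Heartbeat
| where TimeGenerated >= ago(1h)
| summarize AgentCount = dcount(Computer) by OSType

// Sketch: reference a classic Application Insights app from the same query editor
app("app-B").requests
| where timestamp > ago(1h)
| summarize count() by resultCode
```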
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
Title: "Log Analytics tutorial"
-description: Learn from this tutorial how to use features of Log Analytics in Azure Monitor to build and run a log query and analyze its results in the Azure portal.
+description: Learn how to use Log Analytics in Azure Monitor to build and run a log query and analyze its results in the Azure portal.
Last updated 06/28/2021 # Log Analytics tutorial
-Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide a variety of insights into your data.
+Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide various insights into your data.
This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You'll learn the following:
Open the [Log Analytics demo environment](https://portal.azure.com/#blade/Micros
You can view the scope in the upper-left corner of the screen. If you're using your own environment, you'll see an option to select a different scope. This option isn't available in the demo environment. ## View table information The left side of the screen includes the **Tables** tab, where you can inspect the tables that are available in the current scope. These tables are grouped by **Solution** by default, but you can change their grouping or filter them.
Expand the **Log Management** solution and locate the **AppRequests** table. You
Select the link below **Useful links** to go to the table reference that documents each table and its columns. Select **Preview data** to have a quick look at a few recent records in the table. This preview can be useful to ensure that this is the data that you're expecting before you run a query with it. ## Write a query Let's write a query by using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window. You can even get IntelliSense that will help complete the names of tables in the current scope and Kusto Query Language (KQL) commands.
You can see that we do have results. The number of records that the query has re
Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab on the left pane. This tab shows columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records that have that value. Select **200** under **ResultCode**, and then select **Apply & Run**. A **where** statement is added to the query with the value that you selected. The results now include only records with that value, so you can see that the record count is reduced. ### Time range
Let's change the time range of the query by selecting **Last 12 hours** from t
> [!NOTE] > Changing the time range using the **Time range** dropdown does not change the query in the query editor. ### Multiple query conditions Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name**, and then select **Apply & Run**. ## Analyze results In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns. Select the name of any column to sort the results by that column. Select the filter icon next to it to provide a filter condition. This is similar to adding a filter condition to the query itself, except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
-For example, set a filter on the **DurationMs** column to limit the records to those that took more than **100** milliseconds.
+For example, set a filter on the **DurationMs** column to limit the records to those that took more than **150** milliseconds.
:::image type="content" source="media/log-analytics-tutorial/query-results-filter.png" alt-text="Screenshot that shows a query results filter." lightbox="media/log-analytics-tutorial/query-results-filter.png":::
-Instead of filtering the results, you can group records by a particular column. Clear the filter that you just created and then turn on the **Group columns** toggle.
+### Search through query results
+Let's search through the query results using the search box at the top right of the results pane.
-Drag the **Url** column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
+Enter **Chicago** in the query results search box and select the arrows to find all instances of this string in your search results.
+
+### Reorganize and summarize data
+
+To better visualize your data, you can reorganize and summarize the data in the query results based on your needs.
+
+Select **Columns** to the right of the results pane to open the **Columns** sidebar.
+
+
+In the sidebar, you'll see a list of all available columns. Drag the **Url** column into the **Row Group** section. Results are now organized by that column, and you can collapse each group to help you with your analysis. This is similar to adding a filter condition to the query, but instead of refetching data from the server, you're processing the data your original query returned. When you run the query again, Log Analytics retrieves data based on your original query. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
+
+### Create a pivot table
+
+To analyze the performance of your pages, create a pivot table.
+
+In the **Columns** sidebar, select **Pivot Mode**.
+
+Select **Url** and **DurationMs** to show the total duration of all calls to each URL.
+
+To view the maximum call duration to each URL, select **sum(DurationMs)** > **max**.
++
+Now let's sort the results by longest maximum call duration by selecting the **max(DurationMs)** column in the results pane.
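The same breakdown can also be produced directly in KQL instead of through the results grid. A hedged sketch, assuming the workspace-based `AppRequests` table with its `Url` and `DurationMs` columns:

```Kusto
// Sketch: total and maximum request duration per URL, sorted by the slowest maximum
AppRequests
| summarize TotalDurationMs = sum(DurationMs), MaxDurationMs = max(DurationMs) by Url
| sort by MaxDurationMs desc
```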
+ ## Work with charts Let's look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query.
-Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have a variety of queries in multiple categories. If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
+Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have various queries in multiple categories. If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
Select the query called **Function Error rate** in the **Applications** category. This step adds the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are considered separate queries.
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
To add a new connection, select **Add** and select the Azure Monitor Private Lin
### Virtual networks access configuration - Managing access from outside of private links scopes The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs).
-If you set **Allow public network access for ingestion** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to the resource.
+If you set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to the resource.
-If you set **Allow public network access for queries** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET.
+If you set **Accept queries from public networks not connected through a Private Link Scope** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET.
## Use APIs and command line
The below screenshot shows endpoints mapped for an AMPLS with two workspaces in
- Learn about [private storage](private-storage.md) for Custom Logs and Customer managed keys (CMK) - Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)-- Learn about the new [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)
+- Learn about the new [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md
Last updated 1/5/2022
With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. An Azure Monitor Private Link connects a private endpoint to a set of Azure Monitor resources, defining the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope (AMPLS).
+> [!NOTE]
+> Azure Monitor Private Links are structured differently from Private Links to other services you may use. Instead of creating multiple Private Links, one for each resource the VNet connects to, Azure Monitor uses a single Private Link connection, from the VNet to an Azure Monitor Private Link Scope (AMPLS). AMPLS is the set of all Azure Monitor resources to which VNet connects through a Private Link.
+ ## Advantages
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
Optimized queries will:
You should give particular attention to queries that are used for recurrent and bursty usage such as dashboards, alerts, Logic Apps and Power BI. The impact of an ineffective query in these cases is substantial.
-Here is a detailed video walkthrough on optimizing queries.
+Here's a detailed video walkthrough on optimizing queries.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4NUH0]
-## Query performance pane
-After you run a query in Log Analytics, click the down arrow above the query results to view the query performance pane that shows the results of several performance indicators for the query. These performance indicators are each described in the following section.
+## Query Details pane
+After you run a query in Log Analytics, select **Query details** at the bottom right corner of the screen to open the **Query Details** side pane. This pane shows the results of several performance indicators for the query. These performance indicators are each described in the following section.
-![Query performance pane](media/query-optimization/query-performance-pane.png)
## Query performance indicators
Query processing time is spent on:
- Data retrieval ΓÇô retrieval of old data will consume more time than retrieval of recent data. - Data processing ΓÇô logic and evaluation of the data.
-Other than time spent in the query processing nodes, there is additional time that is spend by Azure Monitor Logs to: authenticate the user and verify that they are permitted to access this data, locate the data store, parse the query, and allocate the query processing nodes. This time is not included in the query total CPU time.
+In addition to the time spent in the query processing nodes, Azure Monitor Logs spends time in authenticating the user and verifying they're permitted to access this data, locating the data store, parsing the query, and allocating the query processing nodes. This time isn't included in the query total CPU time.
### Early filtering of records prior to using high CPU functions Some of the query commands and functions are heavy in their CPU consumption. This is especially true for commands that parse JSON and XML or extract complex regular expressions. Such parsing can happen explicitly via [parse_json()](/azure/kusto/query/parsejsonfunction) or [parse_xml()](/azure/kusto/query/parse-xmlfunction) functions or implicitly when referring to dynamic columns.
-These functions consume CPU in proportion to the number of rows they are processing. The most efficient optimization is to add where conditions early in the query that can filter out as many records as possible before the CPU intensive function is executed.
+These functions consume CPU in proportion to the number of rows they're processing. The most efficient optimization is to add where conditions early in the query that can filter out as many records as possible before the CPU intensive function is executed.
For example, the following queries produce exactly the same result but the second one is by far the most efficient as the [where](/azure/kusto/query/whereoperator) condition before parsing excludes many records:
SecurityEvent
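As an additional sketch of the same pattern (not the article's own example), the query below filters on a cheap column before touching a dynamic payload. It assumes a workspace-based Application Insights `AppTraces` table, and the `errorCode` property is hypothetical:

```Kusto
// Sketch: filter on SeverityLevel first so the dynamic Properties column is parsed only for surviving rows
AppTraces
| where TimeGenerated > ago(1h)
| where SeverityLevel >= 3                              // cheap filter applied before any JSON work
| extend ErrorCode = tostring(Properties.errorCode)     // hypothetical payload field
| summarize count() by ErrorCode
```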
While some aggregation commands like [max()](/azure/kusto/query/max-aggfunction), [sum()](/azure/kusto/query/sum-aggfunction), [count()](/azure/kusto/query/count-aggfunction), and [avg()](/azure/kusto/query/avg-aggfunction) have low CPU impact due to their logic, others are more complex and include heuristics and estimations that allow them to be executed efficiently. For example, [dcount()](/azure/kusto/query/dcount-aggfunction) uses the HyperLogLog algorithm to provide a close estimation of the distinct count of large sets of data without actually counting each value; the percentile functions make similar approximations using the nearest rank percentile algorithm. Several of the commands include optional parameters to reduce their impact. For example, the [makeset()](/azure/kusto/query/makeset-aggfunction) function has an optional parameter to define the maximum set size, which significantly affects the CPU and memory.
-[Join](/azure/kusto/query/joinoperator?pivots=azuremonitor) and [summarize](/azure/kusto/query/summarizeoperator) commands may cause high CPU utilization when they are processing a large set of data. Their complexity is directly related to the number of possible values, referred to as *cardinality*, of the columns that are using as the `by` in summarize or as the join attributes. For explanation and optimization of join and summarize, see their documentation articles and optimization tips.
+[Join](/azure/kusto/query/joinoperator?pivots=azuremonitor) and [summarize](/azure/kusto/query/summarizeoperator) commands may cause high CPU utilization when they're processing a large set of data. Their complexity is directly related to the number of possible values, referred to as *cardinality*, of the columns that are used as the `by` in summarize or as the join attributes. For an explanation and optimization of join and summarize, see their documentation articles and optimization tips.
For example, the following queries produce exactly the same result because **CounterPath** is always one-to-one mapped to **CounterName** and **ObjectName**. The second one is more efficient as the aggregation dimension is smaller:
Perf
by CounterPath ```
-CPU consumption might also be impacted by where conditions or extended columns that require intensive computing. All trivial string comparisons such as [equal ==](/azure/kusto/query/datatypes-string-operators) and [startswith](/azure/kusto/query/datatypes-string-operators) have roughly the same CPU impact while advanced text matches have more impact. Specifically, the [has](/azure/kusto/query/datatypes-string-operators) operator is more efficient that the [contains](/azure/kusto/query/datatypes-string-operators) operator. Due to string handling techniques, it is more efficient to look for strings that are longer than four characters than short strings.
+CPU consumption might also be impacted by where conditions or extended columns that require intensive computing. All trivial string comparisons such as [equal ==](/azure/kusto/query/datatypes-string-operators) and [startswith](/azure/kusto/query/datatypes-string-operators) have roughly the same CPU impact while advanced text matches have more impact. Specifically, the [has](/azure/kusto/query/datatypes-string-operators) operator is more efficient than the [contains](/azure/kusto/query/datatypes-string-operators) operator. Due to string handling techniques, it's more efficient to look for strings that are longer than four characters than short strings.
For example, the following queries produce similar results, depending on Computer naming policy, but the second one is more efficient:
Heartbeat
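An extra sketch of the idea (again, not the article's own example): `has` matches whole indexed terms, which is generally cheaper than the substring scan that `contains` performs:

```Kusto
// Sketch: cheaper term match ('prod' must appear as a whole term, for example in 'prod-web-01')
Heartbeat
| where TimeGenerated > ago(1h)
| where Computer has "prod"
| summarize count() by Computer

// Sketch: more expensive substring match
Heartbeat
| where TimeGenerated > ago(1h)
| where Computer contains "prod"
| summarize count() by Computer
```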
> This indicator presents only CPU from the immediate cluster. In multi-region query, it would represent only one of the regions. In multi-workspace query, it might not include all workspaces. ### Avoid full XML and JSON parsing when string parsing works
-Full parsing of an XML or JSON object may consume high CPU and memory resources. In many cases, when only one or two parameters are needed and the XML or JSON objects are simple, it is easier to parse them as strings using the [parse operator](/azure/kusto/query/parseoperator) or other [text parsing techniques](./parse-text.md). The performance boost will be more significant as the number of records in the XML or JSON object increases. It is essential when the number of records reaches tens of millions.
+Full parsing of an XML or JSON object may consume high CPU and memory resources. In many cases, when only one or two parameters are needed and the XML or JSON objects are simple, it's easier to parse them as strings using the [parse operator](/azure/kusto/query/parseoperator) or other [text parsing techniques](./parse-text.md). The performance boost will be more significant as the number of records in the XML or JSON object increases. It is essential when the number of records reaches tens of millions.
-For example, the following query will return exactly the same results as the queries above without performing full XML parsing. Note that it makes some assumptions on the XML file structure such as that FilePath element comes after FileHash and none of them has attributes.
+For example, the following query returns exactly the same results as the queries above without performing full XML parsing. The query makes some assumptions about the XML file structure, such as that the FilePath element comes after FileHash and none of them has attributes.
```Kusto //even more efficient
SecurityEvent
``` ### Avoid multiple scans of same source data using conditional aggregation functions and materialize function
-When a query has several sub-queries that are merged using join or union operators, each sub-query scans the entire source separately and then merge the results. This multiples the number of times data is scanned - critical factor in very large data sets.
+When a query has several subqueries that are merged using join or union operators, each subquery scans the entire source separately and then merges the results. This multiplies the number of times data is scanned - a critical factor in very large data sets.
-A technique to avoid this is by using the conditional aggregation functions. Most of the [aggregation functions](/azure/data-explorer/kusto/query/summarizeoperator#list-of-aggregation-functions) that are used in summary operator has a conditioned version that allow you to use a single summarize operator with multiple conditions.
+A technique to avoid this is to use conditional aggregation functions. Most of the [aggregation functions](/azure/data-explorer/kusto/query/summarizeoperator#list-of-aggregation-functions) that are used in the summarize operator have a conditional version that allows you to use a single summarize operator with multiple conditions.
For example, the following queries show the number of login events and the number of process execution events for each account. They return the same results, but the first scans the data twice, while the second scans it only once:
SecurityEvent
| summarize LoginCount = countif(EventID == 4624), ExecutionCount = countif(EventID == 4688), ExecutedProcesses = make_set_if(Process,EventID == 4688) by Account ```
-Another case where sub-queries are unnecessary is pre-filtering for [parse operator](/azure/data-explorer/kusto/query/parseoperator?pivots=azuremonitor) to make sure that it processes only records that match specific pattern. This is unnecessary as the parse operator and other similar operators return empty results when the pattern doesn't match. Here are two queries that return exactly the same results while the second query scan data only once. In the second query, each parse command is relevant only for its events. The extend operator afterwards shows how to refer to empty data situation.
+Another case where subqueries are unnecessary is pre-filtering for the [parse operator](/azure/data-explorer/kusto/query/parseoperator?pivots=azuremonitor) to make sure that it processes only records that match a specific pattern. This is unnecessary because the parse operator and other similar operators return empty results when the pattern doesn't match. Here are two queries that return exactly the same results, while the second query scans the data only once. In the second query, each parse command is relevant only for its events. The extend operator afterwards shows how to handle an empty data situation.
```Kusto //Scan SecurityEvent table twice
SecurityEvent
| distinct FilePath, CallerProcessName1 ```
-When the above doesn't allow to avoid using sub-queries, another technique is to hint to the query engine that there is a single source data used in each one of them using the [materialize() function](/azure/data-explorer/kusto/query/materializefunction?pivots=azuremonitor). This is useful when the source data is coming from a function that is used several times within the query. Materialize is effective when the output of the sub-query is much smaller than the input. The query engine will cache and reuse the output in all occurrences.
+When the above approaches don't make it possible to avoid subqueries, another technique is to hint to the query engine that a single source of data is used in each of them by using the [materialize() function](/azure/data-explorer/kusto/query/materializefunction?pivots=azuremonitor). This is useful when the source data comes from a function that is used several times within the query. Materialize is effective when the output of the subquery is much smaller than the input. The query engine caches and reuses the output in all occurrences.
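A minimal sketch of the materialize pattern, assuming the standard **Usage** table, where the cached subquery is reused both as the main source and inside `toscalar()`:

```Kusto
// Sketch: compute billable volume per data type once, then reuse it for the percentage calculation
let billableByType = materialize(
    Usage
    | where TimeGenerated > ago(1d) and IsBillable == true
    | summarize VolumeMB = sum(Quantity) by DataType);
billableByType
| extend PercentOfTotal = 100.0 * VolumeMB / toscalar(billableByType | summarize sum(VolumeMB))
| sort by PercentOfTotal desc
```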
Query with time span of more than 15 days is considered a query that consumes ex
The time range can be set using the time range selector in the Log Analytics screen as described in [Log query scope and time range in Azure Monitor Log Analytics](./scope.md#time-range). This is the recommended method as the selected time range is passed to the backend using the query metadata. An alternative method is to explicitly include a [where](/azure/kusto/query/whereoperator) condition on **TimeGenerated** in the query. You should use this method as it assures that the time span is fixed, even when the query is used from a different interface.
-You should ensure that all parts of the query have **TimeGenerated** filters. When a query has sub-queries fetching data from various tables or the same table, each has to include its own [where](/azure/kusto/query/whereoperator) condition.
+You should ensure that all parts of the query have **TimeGenerated** filters. When a query has subqueries fetching data from various tables or the same table, each has to include its own [where](/azure/kusto/query/whereoperator) condition.
-### Make sure all sub-queries have TimeGenerated filter
+### Make sure all subqueries have TimeGenerated filter
For example, in the following query, while the **Perf** table will be scanned only for the last day, the **Heartbeat** table will be scanned for all of its history, which might be up to two years:
by Computer
) on Computer ```
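A sketch of the corrected pattern (illustrative only, not the article's exact query), where every subquery carries its own **TimeGenerated** filter:

```Kusto
// Sketch: both the outer query and the joined subquery are limited to the last day
Perf
| where TimeGenerated > ago(1d)
| summarize avg(CounterValue) by Computer, CounterName
| join kind=inner (
    Heartbeat
    | where TimeGenerated > ago(1d)   // without this filter, Heartbeat would be scanned over its full retention
    | summarize arg_max(TimeGenerated, OSType) by Computer
) on Computer
```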
-Another example for this fault is when performing the time scope filtering just after a [union](/azure/kusto/query/unionoperator?pivots=azuremonitor) over several tables. When performing the union, each sub-query should be scoped. You can use [let](/azure/kusto/query/letstatement) statement to assure scoping consistency.
+Another example for this fault is when performing the time scope filtering just after a [union](/azure/kusto/query/unionoperator?pivots=azuremonitor) over several tables. When performing the union, each subquery should be scoped. You can use [let](/azure/kusto/query/letstatement) statement to assure scoping consistency.
For example, the following query will scan all the data in the *Heartbeat* and *Perf* tables, not just the last 1 day:
Heartbeat
### Time span measurement limitations
-The measurement is always larger than the actual time specified. For example, if the filter on the query is 7 days, the system might scan 7.5 or 8.1 days. This is because the system is partitioning the data into chunks in variable size. To assure that all relevant records are scanned, it scans the entire partition that might cover several hours and even more than a day.
+The measurement is always larger than the actual time specified. For example, if the filter on the query is 7 days, the system might scan 7.5 or 8.1 days. This is because the system is partitioning the data into chunks of variable sizes. To ensure that all relevant records are scanned, the system scans the entire partition, which might cover several hours and even more than a day.
-There are several cases where the system cannot provide an accurate measurement of the time range. This happens in most of the cases where the query's span less than a day or in multi-workspace queries.
+There are several cases where the system can't provide an accurate measurement of the time range. This happens in most cases where the query's span is less than a day or in multi-workspace queries.
> [!IMPORTANT] > This indicator presents only data processed in the immediate cluster. In multi-region query, it would represent only one of the regions. In multi-workspace query, it might not include all workspaces. ## Age of processed data
-Azure Data Explorer uses several storage tiers: in-memory, local SSD disks and much slower Azure Blobs. The newer the data, the higher is the chance that it is stored in a more performant tier with smaller latency, reducing the query duration and CPU. Other than the data itself, the system also has a cache for metadata. The older the data, the less chance its metadata will be in cache.
+Azure Data Explorer uses several storage tiers: in-memory, local SSD disks and much slower Azure Blobs. The newer the data, the higher the chance that it's stored in a more performant tier with lower latency, reducing the query duration and CPU. In addition to the data itself, the system also has a cache for metadata. The older the data, the less chance its metadata will be in the cache.
-Query that processes data than is more than 14 days old is considered a query that consumes excessive resources.
+A query that processes data that's more than 14 days old is considered a query that consumes excessive resources.
While some queries require the use of old data, there are cases where old data is used by mistake. This happens when queries are executed without providing a time range in their metadata and not all table references include a filter on the **TimeGenerated** column. In these cases, the system will scan all the data that is stored in that table. When the data retention is long, it can cover long time ranges and thus data that is as old as the data retention period. Such cases can be, for example: -- Not setting the time range in Log Analytics with a sub-query that isn't limited. See example above.
+- Not setting the time range in Log Analytics with a subquery that isn't limited. See example above.
- Using the API without the time range optional parameters. - Using a client that doesn't force a time range such as the Power BI connector.
-See examples and notes in the pervious section as they are also relevant in this case.
+See examples and notes in the previous section as they're also relevant in this case.
## Number of regions There are several situations where a single query might be executed across different regions: -- When several workspaces are explicitly listed, and they are located in different regions.
+- When several workspaces are explicitly listed, and they're located in different regions.
- When a resource-scoped query is fetching data and the data is stored in multiple workspaces that are located in different regions. Cross-region query execution requires the system to serialize and transfer in the backend large chunks of intermediate data that are usually much larger than the query final results. It also limits the system's ability to perform optimizations, heuristics, and utilize caches.
-If there is no real reason to scan all these regions, you should adjust the scope so it covers fewer regions. If the resource scope is minimized but still many regions are used, it might happen due to misconfiguration. For example, audit logs and diagnostic settings are sent to different workspaces in different regions or there are multiple diagnostic settings configurations.
+If there's no real reason to scan all these regions, you should adjust the scope so it covers fewer regions. If the resource scope is minimized but still many regions are used, it might happen due to misconfiguration. For example, audit logs and diagnostic settings are sent to different workspaces in different regions or there are multiple diagnostic settings configurations.
A query that spans more than 3 regions is considered a query that consumes excessive resources. A query that spans more than 6 regions is considered an abusive query and might be throttled.
Usage of multiple workspaces can result from:
Cross-region and cross-cluster execution of queries requires the system to serialize and transfer large chunks of intermediate data in the backend, which are usually much larger than the final query results. It also limits the system's ability to perform optimizations and heuristics and to utilize caches.
-Query that spans more than 5 workspace is considered a query that consumes excessive resources. Queries cannot span to to more than 100 workspaces.
+A query that spans more than 5 workspaces is considered a query that consumes excessive resources. Queries can't span more than 100 workspaces.
> [!IMPORTANT] > In some multi-workspace scenarios, the CPU and data measurements will not be accurate and will represent the measurement only to few of the workspaces.
Query that spans more than 5 workspace is considered a query that consumes exces
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries, and these clusters vary in scale, potentially reaching dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
-To efficiently execute a query, it is partitioned and distributed to compute nodes based on the data that is required for its processing. There are some situations where the system cannot do this efficiently. This can lead to a long duration of the query.
+To efficiently execute a query, it's partitioned and distributed to compute nodes based on the data that is required for its processing. There are some situations where the system can't do this efficiently. This can lead to a long duration of the query.
Query behaviors that can reduce parallelism include: - Use of serialization and window functions such as the [serialize operator](/azure/kusto/query/serializeoperator), [next()](/azure/kusto/query/nextfunction), [prev()](/azure/kusto/query/prevfunction), and the [row](/azure/kusto/query/rowcumsumfunction) functions. Time series and user analytics functions can be used in some of these cases. Inefficient serialization may also happen if the following operators are used anywhere other than at the end of the query: [range](/azure/kusto/query/rangeoperator), [sort](/azure/kusto/query/sortoperator), [order](/azure/kusto/query/orderoperator), [top](/azure/kusto/query/topoperator), [top-hitters](/azure/kusto/query/tophittersoperator), [getschema](/azure/kusto/query/getschemaoperator). - Usage of the [dcount()](/azure/kusto/query/dcount-aggfunction) aggregation function forces the system to have a central copy of the distinct values. When the scale of data is high, consider using the dcount function's optional parameters for reduced accuracy. - In many cases, the [join](/azure/kusto/query/joinoperator?pivots=azuremonitor) operator lowers overall parallelism. Examine shuffle join as an alternative when performance is problematic.-- In resource-scope queries, the pre-execution Kubernetes RBAC or Azure RBAC checks may linger in situations where there is very large number of Azure role assignments. This may lead to longer checks that would result in lower parallelism. For example, a query is executed on a subscription where there are thousands of resources and each resource has many role assignments in the resource level, not on the subscription or resource group.-- If a query is processing small chunks of data, its parallelism will be low as the system will not spread it across many compute nodes.
+- In resource-scope queries, the pre-execution Kubernetes RBAC or Azure RBAC checks may linger in situations where there's very large number of Azure role assignments. This may lead to longer checks that would result in lower parallelism. For example, a query is executed on a subscription where there are thousands of resources and each resource has many role assignments in the resource level, not on the subscription or resource group.
+- If a query is processing small chunks of data, its parallelism will be low as the system won't spread it across many compute nodes.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Just a few examples of what you can do with Azure Monitor include:
- Support operations at scale with [smart alerts](alerts/alerts-smartgroups-overview.md) and [automated actions](alerts/alerts-action-rules.md). - Create visualizations with Azure [dashboards](visualize/tutorial-logs-dashboards.md) and [workbooks](visualize/workbooks-overview.md). - Collect data from [monitored resources](./monitor-reference.md) using [Azure Monitor Metrics](./essentials/data-platform-metrics.md).
+- Investigate change data for routine monitoring or for triaging incidents using [Change Analysis](./change/change-analysis.md).
[!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)]
Azure Monitor uses a version of the [Kusto query language](/azure/kusto/query/)
![Diagram shows Logs data flowing into Log Analytics for analysis.](media/overview/logs.png)
+Change Analysis not only alerts you to live site issues, outages, component failures, or other change data, but also provides insights into those application changes, increases observability, and reduces the mean time to repair (MTTR). You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by navigating to the Change Analysis service via the Azure portal. For web app in-guest changes, you can enable Change Analysis using the [Diagnose and solve problems tool](./change/change-analysis-visualizations.md#diagnose-and-solve-problems-tool).
+
+Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time, detecting managed identities, platform OS upgrades, and hostname changes. Change Analysis securely queries IP Configuration rules, TLS settings, and extension versions to provide more detailed change data.
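As a hedged illustration of the underlying data, change history can also be queried directly in Azure Resource Graph. The sketch below assumes the Resource Graph `resourcechanges` table and its documented `properties` fields:

```Kusto
// Sketch: recent resource changes across the subscription, newest first
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
         changeType = tostring(properties.changeType),
         targetResourceId = tostring(properties.targetResourceId)
| where changeTime > ago(1d)
| project changeTime, resourceGroup, changeType, targetResourceId
| order by changeTime desc
```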
+ ## What data does Azure Monitor collect? Azure Monitor can collect data from a [variety of sources](monitor-reference.md). This ranges from your application, any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers:
Azure Monitor can collect data from a [variety of sources](monitor-reference.md)
- **Azure resource monitoring data**: Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services). - **Azure subscription monitoring data**: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself. - **Azure tenant monitoring data**: Data about the operation of tenant-level Azure services, such as Azure Active Directory.
+- **Azure resource change data**: Data about changes within your Azure resource(s) and how to address and triage incidents and issues.
As soon as you create an Azure subscription and start adding resources such as virtual machines and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming.
Autoscale allows you to have the right amount of resources running to handle the
## Integrate and export data You'll often have the requirement to integrate Azure Monitor with other systems and to build custom solutions that use your monitoring data. Other Azure services work with Azure Monitor to provide this integration.
-### Event Hub
+### Event Hubs
[Azure Event Hubs](../event-hubs/index.yml) is a streaming platform and event ingestion service. It can transform and store data using any real-time analytics provider or batching/storage adapters. Use Event Hubs to [stream Azure Monitor data](essentials/stream-monitoring-data-event-hubs.md) to partner SIEM and monitoring tools.
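As a hedged illustration of that streaming path (not taken from the article), a diagnostic setting can route a resource's platform metrics to an event hub. The resource ID, Event Hubs namespace, hub name, and authorization rule below are placeholders.

```bash
# Placeholder IDs - substitute your own resource and Event Hubs details.
az monitor diagnostic-settings create \
  --name "stream-to-event-hub" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>" \
  --event-hub "monitoring-hub" \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```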
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
**New articles** -- [Activity logs insights (Preview)](essentials/activity-logs-insights.md)
+- [Activity logs insights (Preview)](essentials/activity-log.md)
**Updated articles**
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs
``` sudo vi /etc/fstab # Add the following entries
-10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
``` Also for example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
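For illustration only, here is a hedged ad-hoc mount (placeholder export path and mount point, NFSv3 assumed) that applies the 64-KiB read and write sizes mentioned for SAS GRID:

```bash
# rsize/wsize are in bytes (64 KiB = 65536); volume and mount point are placeholders.
sudo mkdir -p /sasdata
sudo mount -t nfs -o rw,hard,vers=3,tcp,noatime,rsize=65536,wsize=65536 \
  10.23.1.4:/sas-data-vol /sasdata
```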
azure-portal Manage Filter Resource Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/manage-filter-resource-views.md
To save and use a summary view:
:::image type="content" source="media/manage-filter-resource-views/type-summary-bar-chart.png" alt-text="Type summary showing a bar chart":::
-1. Select **Manage view** then **Save** to save this view like you did with the list view.
+1. Select **Manage view** then **Save view** to save this view like you did with the list view.
1. In the summary view, under **Type summary**, select a bar in the chart. Selecting the bar provides a list filtered down to one type of resource.
azure-resource-manager Bicep Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md
description: Describes the functions to use in a Bicep file to work with dates.
Previously updated : 05/02/2022 Last updated : 05/03/2022 # Date functions for Bicep
The output is:
The next example uses the epoch time value to set the expiration for a key in a key vault. ## utcNow
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 04/18/2022 Last updated : 05/03/2022 # Move operation support for resources
Jump to a resource provider namespace:
> | firewallpolicies | No | No | No | > | frontdoors | No | No | No | > | ipallocations | Yes | Yes | No |
-> | ipgroups | Yes | Yes | No |
+> | ipgroups | No | No | No |
> | loadbalancers | Yes - Basic SKU<br> Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move internal and external load balancers. | > | localnetworkgateways | Yes | Yes | No | > | natgateways | No | No | No |
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 03/23/2022 Last updated : 05/03/2022 # Use tags to organize your Azure resources and management hierarchy
-You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name _Environment_ and the value _Production_ to all the resources in production.
+You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them by values that make sense for your organization. Each tag consists of a name and a value pair. For example, you can apply the name _Environment_ and the value _Production_ to all the resources in production.
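For instance, a minimal Azure CLI sketch (the resource group name is a placeholder) that applies the _Environment_ = _Production_ tag to a resource group:

```bash
# Resolve the resource group's resource ID, then apply the tag.
# Note: az tag create replaces any tags already set on the resource.
groupId=$(az group show --name my-production-rg --query id --output tsv)
az tag create --resource-id "$groupId" --tags Environment=Production
```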
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json). Resource tags support all cost-accruing services. To ensure that cost-accruing services are provisioned with a tag, use one of the [tag policies](tag-policies.md). > [!WARNING]
-> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, tag taxonomies, deployment histories, exported templates, and monitoring logs.
+> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, commands that return existing tag definitions, deployment histories, exported templates, and monitoring logs.
> [!IMPORTANT] > Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
azure-resource-manager Template Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-date.md
Title: Template functions - date description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with dates. Previously updated : 03/10/2022 Last updated : 05/03/2022 # Date functions for ARM templates
-Resource Manager provides the following functions for working with dates in your Azure Resource Manager template (ARM template):
-
-* [dateTimeAdd](#datetimeadd)
-* [utcNow](#utcnow)
+This article describes the functions for working with dates in your Azure Resource Manager template (ARM template).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [date](../bicep/bicep-functions-date.md) functions.
The next example template shows how to set the start time for an Automation sche
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/functions/date/datetimeadd-automation.json":::
+## dateTimeFromEpoch
+
+`dateTimeFromEpoch(epochTime)`
+
+Converts an epoch time integer value to an ISO 8601 datetime.
+
+In Bicep, use the [dateTimeFromEpoch](../bicep/bicep-functions-date.md#datetimefromepoch) function.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|:--- |:--- |:--- |:--- |
+| epochTime | Yes | int | The epoch time to convert to a datetime string. |
+
+### Return value
+
+An ISO 8601 datetime string.
+
+### Example
+
+The following example shows output values for the epoch time functions.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "convertedEpoch": {
+ "type": "int",
+ "defaultValue": "[dateTimeToEpoch(dateTimeAdd(utcNow(), 'P1Y'))]"
+ }
+ },
+ "variables": {
+ "convertedDatetime": "[dateTimeFromEpoch(parameters('convertedEpoch'))]"
+ },
+ "resources": [],
+ "outputs": {
+ "epochValue": {
+ "type": "int",
+ "value": "[parameters('convertedEpoch')]"
+ },
+ "datetimeValue": {
+ "type": "string",
+ "value": "[variables('convertedDatetime')]"
+ }
+ }
+}
+```
+
+The output is:
+
+| Name | Type | Value |
+| - | - | -- |
+| datetimeValue | String | 2023-05-02T15:16:13Z |
+| epochValue | Int | 1683040573 |
+
+## dateTimeToEpoch
+
+`dateTimeToEpoch(dateTime)`
+
+Converts an ISO 8601 datetime string to an epoch time integer value.
+
+In Bicep, use the [dateTimeToEpoch](../bicep/bicep-functions-date.md#datetimetoepoch) function.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|:--- |:--- |:--- |:--- |
+| dateTime | Yes | string | The datetime string to convert to an epoch time. |
+
+### Return value
+
+An integer that represents the number of seconds from midnight on January 1, 1970.
+
+### Examples
+
+The following example shows output values for the epoch time functions.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "convertedEpoch": {
+ "type": "int",
+ "defaultValue": "[dateTimeToEpoch(dateTimeAdd(utcNow(), 'P1Y'))]"
+ }
+ },
+ "variables": {
+ "convertedDatetime": "[dateTimeFromEpoch(parameters('convertedEpoch'))]"
+ },
+ "resources": [],
+ "outputs": {
+ "epochValue": {
+ "type": "int",
+ "value": "[parameters('convertedEpoch')]"
+ },
+ "datetimeValue": {
+ "type": "string",
+ "value": "[variables('convertedDatetime')]"
+ }
+ }
+}
+```
+
+The output is:
+
+| Name | Type | Value |
+| - | - | -- |
+| datetimeValue | String | 2023-05-02T15:16:13Z |
+| epochValue | Int | 1683040573 |
+
+The next example uses the epoch time value to set the expiration for a key in a key vault.
++ ## utcNow `utcNow(format)`
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions.md
Title: Template functions description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 04/12/2022 Last updated : 05/02/2022 # ARM template functions
For Bicep files, use the [coalesce](../bicep/operators-logical.md) logical opera
Resource Manager provides the following functions for working with dates. * [dateTimeAdd](template-functions-date.md#datetimeadd)
+* [dateTimeFromEpoch](template-functions-date.md#datetimefromepoch)
+* [dateTimeToEpoch](template-functions-date.md#datetimetoepoch)
* [utcNow](template-functions-date.md#utcnow) For Bicep files, use the [date](../bicep/bicep-functions-date.md) functions.
azure-video-analyzer Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/access-policies.md
Last updated 11/04/2021
# Access policies + Access policies define the permissions and duration of access to a given Video Analyzer video resource. These access policies allow for greater control and flexibility by allowing third-party (non-Azure AD client) JWT tokens to provide authorization to client APIs that enable: - access to Video Metadata.
azure-video-analyzer Access Public Endpoints Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/access-public-endpoints-networking.md
Last updated 11/04/2021
# Public endpoints and networking + Azure Video Analyzer exposes a set of public network endpoints that enable different product scenarios, including management, ingestion, and playback. This article describes those endpoints, and provides some details about how they are used. The diagram below depicts those endpoints, in addition to some key endpoints exposed by associated Azure Services. :::image type="content" source="./media/access-public-endpoints-networking/endpoints-and-networking.svg" alt-text="The image represents Azure Video Analyzer public endpoints":::
azure-video-analyzer Ai Composition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/ai-composition-overview.md
Last updated 11/04/2021
# AI composition + This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition. * [Sequential](#sequential-ai-composition)
azure-video-analyzer Analyze Live Video Without Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-without-recording.md
# Analyzing live videos without recording + ## Suggested pre-reading * [Pipeline concept](pipeline.md)
azure-video-analyzer Connect Cameras To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-cameras-to-cloud.md
[!INCLUDE [header](includes/cloud-env.md)] + Azure Video Analyzer service allows users to connect RTSP cameras directly to the cloud in order to capture and record video, using [live pipelines](../pipeline.md). This will either reduce the computational load on an edge device or eliminate the need for an edge device completely. Video Analyzer service currently supports three different methods for connecting cameras to the cloud: connecting via a remote device adapter, connecting from behind a firewall using an IoT PnP command, and connecting over the internet without a firewall. > [!div class="mx-imgBorder"]
azure-video-analyzer Connect Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/connect-devices.md
[!INCLUDE [header](includes/cloud-env.md)] + In order to capture and record video from a device, Azure Video Analyzer service needs to establish an [RTSP](../terminology.md#rtsp) connection to it. If the device is behind a firewall, such connections are blocked, and it may not always be possible to create rules to allow inbound connections from Azure. To support such devices, you can build and install an [Azure IoT Plug and Play](../../../iot-develop/overview-iot-plug-and-play.md) device implementation, which listens to commands sent via IoT Hub from Video Analyzer and then opens a secure websocket tunnel to the service. Once such a tunnel is established, Video Analyzer can then connect to the RTSP server. ## Overview
azure-video-analyzer Export Portion Of Video As Mp4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/export-portion-of-video-as-mp4.md
[!INCLUDE [header](includes/cloud-env.md)] + In this tutorial, you learn how to export a portion of video that has been recorded in an Azure Video Analyzer account. This exported portion of video is saved as an MP4 file, which can be downloaded and consumed outside of the Video Analyzer account. The topic demonstrates how to export a portion of the video using the Azure portal and a C# SDK code sample.
azure-video-analyzer Get Started Livepipelines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/get-started-livepipelines-portal.md
# Quickstart: Get started with Video Analyzer live pipelines in the Azure portal
-![cloud icon](media/env-icon/cloud.png)
-Alternatively, check out [get started with Video Analyzer on the edge using portal](../edge/get-started-detect-motion-emit-events-portal.md).
- This quickstart walks you through the steps to capture and record video from a Real Time Streaming Protocol (RTSP) camera using live pipelines in Azure Video Analyzer service. You will create a Video Analyzer account and its accompanying resources by using the Azure portal. You will deploy an RTSP camera simulator if you don't have access to an actual RTSP camera (one that can be made accessible over the internet). You'll then deploy the relevant Video Analyzer resources to record video to your Video Analyzer account.
azure-video-analyzer Monitor Log Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/monitor-log-cloud.md
Alternatively, check out [monitor and log on the edge](../edge/monitor-log-edge.
+ In this article, you'll learn about events and logs generated by Azure Video Analyzer service. You'll also learn how to consume the logs that the service generates and how to monitor the service events. ## Taxonomy of events
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/troubleshoot.md
Alternatively, check out [troubleshoot on the edge](../edge/troubleshoot.md).
+ This article covers troubleshooting steps for common error scenarios you might see while using the service. ## Enable diagnostics
azure-video-analyzer Use Remote Device Adapter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/use-remote-device-adapter.md
[!INCLUDE [header](includes/cloud-env.md)] + Azure Video Analyzer service allows users to capture and record video from RTSP cameras that are connected to the cloud. This requires that such cameras must be accessible over the internet. In cases where this may not be permissible, the Video Analyzer edge module can instead be deployed to a lightweight edge device with internet connectivity. With the lightweight edge device on the same (private) network as the RTSP cameras, the edge module can now be set up as an *adapter* that enables the Video Analyzer service to connect to the *remote devices* (cameras). The edge module enables the edge device to act as a [transparent gateway](../../../iot-edge/iot-edge-as-gateway.md) for video traffic between the RTSP cameras and the Video Analyzer service. > [!div class="mx-imgBorder"]
azure-video-analyzer Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/continuous-video-recording.md
# Continuous video recording + Continuous video recording (CVR) refers to the process of continuously recording the video from a video source. Azure Video Analyzer supports recording video continuously, on a 24x7 basis, from a CCTV camera via a video processing [pipeline topology](pipeline.md) consisting of an RTSP source node and a video sink node. The diagram below shows a graphical representation of such a pipeline. The JSON representation of the topology can be found in this [document](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-video-sink/topology.json). You can use such a topology to create arbitrarily long recordings (years worth of content). The timestamps for the recordings are stored in UTC. > [!div class="mx-imgBorder"]
azure-video-analyzer Create Video Analyzer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/create-video-analyzer-account.md
# Create a Video Analyzer account + To start using Azure Video Analyzer, you will need to create a Video Analyzer account. The account needs to be associated with a storage account and at least one [user-assigned managed identity][docs-uami] (UAMI). The UAMI will need to have the permissions of the [Storage Blob Data Contributor][docs-storage-access] role and [Reader][docs-role-reader] role to your storage account. You can optionally associate an IoT Hub with your Video Analyzer account; this is needed if you use the Video Analyzer edge module as a [transparent gateway](./cloud/use-remote-device-adapter.md). If you do so, then you will need to add a UAMI which has [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role permissions. You can use the same UAMI for both storage account and IoT Hub, or separate UAMIs. This article describes the steps for creating a new Video Analyzer account. You can use the Azure portal or an [Azure Resource Manager (ARM) template][docs-arm-template]. Choose the tab for the method you would like to use.
azure-video-analyzer Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/customer-managed-keys.md
Last updated 11/04/2021
# Customer managed keys with Azure Video Analyzer + Bring Your Own Key (BYOK) is an Azure-wide initiative to help customers move their workloads to the cloud. Customer managed keys allow customers to adhere to industry compliance regulations and improve tenant isolation of a service. Giving customers control of encryption keys is a way to minimize unnecessary access and control and build confidence in Microsoft services. ## Keys and key management
azure-video-analyzer Analyze Ai Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-ai-composition.md
[!INCLUDE [header](includes/edge-env.md)] + Certain customer scenarios require that video be analyzed with multiple AI models. Such models can [augment each other](../ai-composition-overview.md#sequential-ai-composition), [work independently in parallel](../ai-composition-overview.md#parallel-ai-composition) on the same video stream, or run as a [combination](../ai-composition-overview.md#combined-ai-composition) of augmented and independently parallel models on the same video stream to derive actionable insights. Azure Video Analyzer supports such scenarios via a feature called [AI Composition](../ai-composition-overview.md). This guide shows you how you can apply multiple models in an augmented fashion on the same video stream. It uses a Tiny (Light) YOLO and a regular YOLO model in parallel to detect an object of interest. The Tiny YOLO model is computationally lighter but less accurate than the YOLO model and is called first. If the detected object passes a specific confidence threshold, then the sequentially staged regular YOLO model is not invoked, thus utilizing the underlying resources efficiently.
azure-video-analyzer Analyze Live Video Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-custom-vision.md
[!INCLUDE [header](includes/edge-env.md)] + In this tutorial, you'll learn how to use Azure [Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/) to build a containerized model that can detect a toy truck and use the [AI extensibility capability](../analyze-live-video-without-recording.md#analyzing-video-using-a-custom-vision-model) of Azure Video Analyzer on Azure IoT Edge to deploy the model on the edge for detecting toy trucks from a live video stream. We'll show you how to bring together the power of Custom Vision to build and train a computer vision model by uploading and labeling a few images. You don't need any knowledge of data science, machine learning, or AI. You'll also learn about the capabilities of Video Analyzer and how to easily deploy a custom model as a container on the edge and analyze a simulated live video feed.
azure-video-analyzer Analyze Live Video Use Your Model Grpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-use-your-model-grpc.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart shows you how to use Azure Video Analyzer to analyze a live video feed from a (simulated) IP camera. You'll see how to apply a computer vision model to detect objects. A subset of the frames in the live video feed is sent to an inference service. The results are sent to IoT Edge Hub. This quickstart uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the [Detect motion and emit events quickstart](detect-motion-emit-events-quickstart.md).
azure-video-analyzer Analyze Live Video Use Your Model Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-live-video-use-your-model-http.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart shows you how to use Azure Video Analyzer to analyze a live video feed from a (simulated) IP camera. You'll see how to apply a computer vision model to detect objects. A subset of the frames in the live video feed is sent to an inference service. The results are sent to IoT Edge Hub. The quickstart uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It builds on the [Detect motion and emit events](detect-motion-emit-events-quickstart.md) quickstart.
azure-video-analyzer Camera Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/camera-discovery.md
[!INCLUDE [header](includes/edge-env.md)] + This how-to guide walks you through how to use the Azure Video Analyzer edge module to discover ONVIF-compliant cameras on the same subnet as the IoT Edge device. Open Network Video Interface Forum (ONVIF) is an open standard where discrete IP-based physical devices, such as surveillance cameras, can communicate with additional networked devices and software. For more information about ONVIF, please visit the [ONVIF](https://www.onvif.org/about/mission/) website. ## Prerequisites
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/computer-vision-for-spatial-analysis.md
[!INCLUDE [header](includes/edge-env.md)] + This tutorial shows you how to use Azure Video Analyzer together with [Computer Vision for spatial analysis AI service from Azure Cognitive Services](../../../cognitive-services/computer-vision/intro-to-spatial-analysis-public-preview.md) to analyze a live video feed from a (simulated) IP camera. You'll see how this inference server enables you to analyze the streaming video to understand spatial relationships between people and movement in physical space. A subset of the frames in the video feed is sent to this inference server, and the results are sent to IoT Edge Hub. When certain conditions are met, video clips are recorded and stored as videos in the Video Analyzer account. In this tutorial you will:
azure-video-analyzer Configure Signal Gate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/configure-signal-gate.md
[!INCLUDE [header](includes/edge-env.md)] + Within a pipeline, a [signal gate processor node](../pipeline.md#signal-gate-processor) allows you to forward media from one node to another when the gate is triggered by an event. When it's triggered, the gate opens and lets media flow through for a specified duration. In the absence of events to trigger the gate, the gate closes, and media stops flowing. You can use the signal gate processor for event-based video recording. > [!NOTE]
azure-video-analyzer Deploy Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-device.md
[!INCLUDE [header](includes/edge-env.md)] + This article describes how you can deploy the Azure Video Analyzer edge module on an IoT Edge device which has no other modules previously installed. When you finish the steps in this article you will have a Video Analyzer account created and the Video Analyzer module deployed to your IoT Edge device, along with a module that simulates an RTSP-capable IP camera. The process is intended for use with the quickstarts and tutorials for Video Analyzer. You should review the [production readiness and best practices](production-readiness.md) article if you intend to deploy the Video Analyzer module for use in production. > [!NOTE]
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-linux-on-windows.md
[!INCLUDE [header](includes/edge-env.md)] + In this article, you'll learn how to deploy Azure Video Analyzer on an edge device that has [IoT Edge for Linux on Windows (EFLOW)](../../../iot-edge/iot-edge-for-linux-on-windows.md). Once you have finished following the steps in this document, you will be able to run a [pipeline](../pipeline.md) that detects motion in a video and emits such events to the IoT Hub. You can then switch out the pipeline for advanced scenarios and bring the power of Azure Video Analyzer to your Windows-based IoT Edge device. ## Prerequisites
azure-video-analyzer Deploy On Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-on-stack-edge.md
# Deploy Azure Video Analyzer on Azure Stack Edge + This article provides full instructions for deploying Azure Video Analyzer on your Azure Stack Edge device. After you've set up and activated the device, it's ready for Video Analyzer deployment. In the article, we'll deploy Video Analyzer by using Azure IoT Hub, but the Azure Stack Edge resources expose a Kubernetes API, with which you can deploy additional non-IoT Hub-aware solutions that can interface with Video Analyzer.
azure-video-analyzer Detect Motion Emit Events Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-emit-events-quickstart.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart walks you through the steps to get started with Azure Video Analyzer. It uses an Azure VM as an IoT Edge device and a simulated live video stream. After completing the setup steps, you'll be able to run a simulated live video stream through a video pipeline that detects and reports any motion in that stream. The following diagram shows a graphical representation of that pipeline. > [!div class="mx-imgBorder"]
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-clips-cloud.md
[!INCLUDE [header](includes/edge-env.md)] + This article walks you through the steps to use Azure Video Analyzer edge module for [event-based recording](../event-based-video-recording-concept.md). It uses a Linux VM in Azure as an IoT Edge device and a simulated live video stream. This video stream is analyzed for the presence of moving objects. When motion is detected, events are sent to Azure IoT Hub, and the relevant part of the video stream is recorded as a [video resource](../terminology.md#video) in your Video Analyzer account. ## Prerequisites
azure-video-analyzer Detect Motion Record Video Edge Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-edge-devices.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart shows you how to use Azure Video Analyzer to analyze the live video feed from a (simulated) IP camera. It shows how to detect if any motion is present, and if so, record an MP4 video clip to the local file system on the edge device. The quickstart uses an Azure VM as an IoT Edge device and also uses a simulated live video stream. ## Prerequisites
azure-video-analyzer Develop Deploy Grpc Inference Srv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/develop-deploy-grpc-inference-srv.md
[!INCLUDE [header](includes/edge-env.md)] + This article shows you how you can wrap AI model(s) of your choice within a gRPC inference server, so that it can be integrated with Azure Video Analyzer (AVA) via pipeline extension. ## Suggested pre-reading
azure-video-analyzer Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/direct-methods.md
[!INCLUDE [header](includes/edge-env.md)] + Azure Video Analyzer edge module `avaedge` exposes several direct methods that can be invoked from IoT Hub. Direct methods represent a request-reply interaction with a device similar to an HTTP call in that they succeed or fail immediately (after a user-specified timeout). This approach is useful for scenarios where the course of immediate action is different depending on whether the device was able to respond. For more information, see [Understand and invoke direct methods from IoT Hub](../../../iot-hub/iot-hub-devguide-direct-methods.md). This topic describes these methods, conventions, and the schema of the methods.
azure-video-analyzer Enable Video Preview Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/enable-video-preview-images.md
# Enable preview images when recording video + You can use Azure Video Analyzer to [capture and record video](../video-recording.md) from an RTSP camera. You would be creating a pipeline topology that includes a video sink node, as shown in [Quickstart: Detect motion in a (simulated) live video, record the video to the Video Analyzer account](detect-motion-record-video-clips-cloud.md) or [Tutorial: Continuous video recording and playback](use-continuous-video-recording.md). If you record video using the Video Analyzer edge module, you can enable the video sink node to periodically generate a set of preview images of different sizes. These images can then be retrieved from the [video resource](../terminology.md#video) in your Video Analyzer account. For example, if your camera generates a video that has a resolution of 1920x1080, then the preview images would have the following sizes:
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/get-started-detect-motion-emit-events-portal.md
Alternatively, check out [get started with Video Analyzer live pipelines using p
+ This quickstart walks you through the steps to get started with Azure Video Analyzer. You'll create an Azure Video Analyzer account and its accompanying resources by using the Azure portal. You'll then deploy the Video Analyzer edge module and a Real Time Streaming Protocol (RTSP) camera simulator module to your Azure IoT Edge device. After you complete the setup steps, you'll be able to run the simulated live video stream through a pipeline that detects and reports any motion in that stream. The following diagram graphically represents that pipeline.
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/get-started-detect-motion-emit-events.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart walks you through the steps to get started with Azure Video Analyzer. It uses an Azure VM as an IoT Edge device and a simulated live video stream. After completing the setup steps, you'll be able to run the simulated live video stream through a pipeline that detects and reports any motion in that stream. The following diagram graphically represents that pipeline.
azure-video-analyzer Grpc Extension Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/grpc-extension-protocol.md
[!INCLUDE [header](includes/edge-env.md)] + Azure Video Analyzer allows you to enhance its pipeline processing capabilities through a [pipeline extension node](../pipeline-extension.md). The gRPC extension processor enables extensibility scenarios using a [highly performant, structured, gRPC-based protocol](../pipeline-extension.md#grpc-extension-processor). In this article, you will learn about using the gRPC extension protocol to send messages between the Video Analyzer module and your gRPC server that processes those messages and returns results. gRPC is a modern, open-source, high-performance RPC framework that runs in any environment and supports cross-platform and cross-language communication. The gRPC transport service uses HTTP/2 bidirectional streaming between:
azure-video-analyzer Http Extension Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/http-extension-protocol.md
[!INCLUDE [header](includes/edge-env.md)] + Azure Video Analyzer allows you to enhance its processing capabilities through a [pipeline extension](../pipeline-extension.md) node. HTTP extension processor node enables extensibility scenarios using the HTTP extension protocol, where performance and/or optimal resource utilization is not the primary concern. In this article, you will learn about using this protocol to send messages between the Video Analyzer and an HTTP REST endpoint, which would typically be wrapped around an AI inference server. The HTTP contract is defined between the following two components:
azure-video-analyzer Inference Metadata Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/inference-metadata-schema.md
[!INCLUDE [header](includes/edge-env.md)] + In Azure Video Analyzer, each inference object, regardless of whether it uses the HTTP-based or gRPC-based contract, should follow the object model described below. The JSON schema is documented [here](https://github.com/Azure/video-analyzer/tree/main/contracts/data-schema). ## Object model
azure-video-analyzer Module Twin Configuration Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/module-twin-configuration-schema.md
[!INCLUDE [header](includes/edge-env.md)] + Device twins are JSON documents that store device state information including metadata, configurations, and conditions. Azure IoT Hub maintains a device twin for each device that you connect to IoT Hub. For detailed explanation, see [Understand and use module twins in IoT Hub.](../../../iot-hub/iot-hub-devguide-module-twins.md) This topic describes module twin JSON schema of Azure Video Analyzer edge module.
azure-video-analyzer Monitor Log Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/monitor-log-edge.md
Alternatively, check out [monitor and log in the service](../cloud/monitor-log-c
+ In this article, you'll learn how to receive events for remote monitoring from the Azure Video Analyzer IoT Edge module. You'll also learn how to control the logs that the module generates.
azure-video-analyzer Production Readiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/production-readiness.md
[!INCLUDE [header](includes/edge-env.md)] + This article provides guidance on how to configure and deploy the Azure Video Analyzer edge module and cloud service in production environments. You should also review [Prepare to deploy your IoT Edge solution in production](../../../iot-edge/production-checklist.md) article on preparing your IoT Edge solution. You should consult your organization's IT department on aspects related to security.
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/record-event-based-live-video.md
[!INCLUDE [header](includes/edge-env.md)] + In this tutorial, you'll learn how to use Azure Video Analyzer to selectively record portions of a live video source to Video Analyzer in the cloud. This use case is referred to as [event-based video recording](../event-based-video-recording-concept.md) (EVR) in this tutorial. To record portions of a live video, you'll use an object detection AI model to look for objects in the video and record video clips only when a certain type of object is detected. You'll also learn about how to play back the recorded video clips by using Video Analyzer. This capability is useful for a variety of scenarios where there's a need to keep an archive of video clips of interest. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/record-stream-inference-data-with-video.md
[!INCLUDE [header](includes/edge-env.md)] + In this tutorial, you will learn how to use Azure Video Analyzer to record live video and inference metadata to the cloud and play back that recording with visual inference metadata. In this use case, you will be continuously recording video, while using a custom model to detect objects **(yoloV3)** and a Video Analyzer processor **(object tracker)** to track objects. As video is continuously recorded, so is the inference metadata from the objects being detected and tracked. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
azure-video-analyzer Track Objects Live Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/track-objects-live-video.md
[!INCLUDE [header](includes/edge-env.md)] + This quickstart shows you how to use the Azure Video Analyzer edge module to track objects in a live video feed from a (simulated) IP camera. You will see how to apply a computer vision model to detect objects in a subset of the frames in the live video feed. You can then use an object tracker node to track those objects in the other frames. The object tracker comes in handy when you need to detect objects in every frame, but the edge device does not have the necessary compute power to be able to apply the vision model on every frame. If the live video feed is at, say, 30 frames per second, and you can only run your computer vision model on every 15th frame, the object tracker takes the results from one such frame, and then uses [optical flow](https://en.wikipedia.org/wiki/Optical_flow) techniques to generate results for the 2nd, 3rd,…, 14th frame, until the model is applied again on the next frame.
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/troubleshoot.md
Alternatively, check out [troubleshoot in the service](../cloud/troubleshoot.md)
+ This article covers troubleshooting steps for the Azure Video Analyzer edge module. ## Troubleshoot deployment issues
azure-video-analyzer Use Azure Portal To Invoke Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-azure-portal-to-invoke-direct-methods.md
[!INCLUDE [header](includes/edge-env.md)] + IoT Hub gives you the ability to invoke [direct methods](../../../iot-hub/iot-hub-devguide-direct-methods.md#method-invocation-for-iot-edge-modules) on edge devices from the cloud. The Azure Video Analyzer (Video Analyzer) module exposes several [direct methods](./direct-methods.md) that can be used to define, deploy, and activate different pipelines for analyzing live video. In this article, you will learn how to invoke direct method calls on Video Analyzer module via the Azure portal.
azure-video-analyzer Use Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-continuous-video-recording.md
[!INCLUDE [header](includes/edge-env.md)] + In this tutorial, you'll learn how to use Azure Video Analyzer to perform [continuous video recording](../continuous-video-recording.md) (CVR) to the cloud and play back that recording. This capability is useful for scenarios such as safety and compliance where there is a need to maintain an archive of the footage from a camera for days, weeks, months, or even years. Alternatively, you can specify the retention period for the video being recorded. A retention policy defines how many days of video should be stored (for example, the last 7 days); you can learn more about it in the [Manage retention policy](../manage-retention-policy.md) article. In this tutorial you will:
azure-video-analyzer Use Intel Grpc Video Analytics Serving Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-intel-grpc-video-analytics-serving-tutorial.md
[!INCLUDE [header](includes/edge-env.md)] + This tutorial shows you how to use the Intel OpenVINO™ DL Streamer – Edge AI Extension from Intel to analyze a live video feed from a (simulated) IP camera. You'll see how this inference server gives you access to different models for detecting objects (a person, a vehicle, or a bike), object classification (vehicle attributions), and a model for object tracking (person, vehicle, and bike). The integration with the gRPC module lets you send video frames to the AI inference server. The results are then sent to the IoT Edge Hub. When you run this inference service on the same compute node as Azure Video Analyzer, you can take advantage of sending video data via shared memory. This enables you to run inferencing at the frame rate of the live video feed (that is, 30 frames/sec). This tutorial uses an Azure VM as a simulated IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#, and it builds on the [Detect motion and emit events](detect-motion-emit-events-quickstart.md) quickstart.
azure-video-analyzer Use Intel Openvino Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-intel-openvino-tutorial.md
[!INCLUDE [header](includes/edge-env.md)] + This tutorial shows you how to use the [OpenVINO™ Model Server – AI Extension from Intel](https://aka.ms/ava-intel-ovms) to analyze a live video feed from a (simulated) IP camera. You'll see how this inference server gives you access to models for detecting objects (a person, a vehicle, or a bike), and a model for classifying vehicles. A subset of the frames in the live video feed is sent to this inference server, and the results are sent to IoT Edge Hub. This tutorial uses an Azure VM as an IoT Edge device, and it uses a simulated live video stream. It's based on sample code written in C#.
azure-video-analyzer Use Line Crossing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-line-crossing.md
[!INCLUDE [header](includes/edge-env.md)] + This tutorial shows you how to use Azure Video Analyzer to detect when objects cross a virtual line in a live video feed from a (simulated) IP camera. You will see how to apply a computer vision model to detect objects in a subset of the frames in the live video feed. You can then use an object tracker node to track those objects in the other frames and send the results to a line crossing node. The line crossing node enables you to detect when objects cross the virtual line. The events contain the direction (clockwise, counterclockwise) and a total counter per direction.
azure-video-analyzer Use Visual Studio Code Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/use-visual-studio-code-extension.md
[!INCLUDE [header](includes/edge-env.md)] + This article walks you through the steps to get started with the Visual Studio Code extension for Azure Video Analyzer. You will connect the Visual Studio Code extension to your Video Analyzer Edge module through the IoT Hub and deploy a [sample pipeline topology](https://github.com/Azure/video-analyzer/tree/main/pipelines/live/topologies/cvr-video-sink). You will then run a simulated live video stream through a live pipeline that continuously records video to a video resource. The following diagram represents the pipeline. > [!div class="mx-imgBorder"]
azure-video-analyzer Embed Player In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/embed-player-in-power-bi.md
# Embed player widget in Power BI Azure Video Analyzer enables you to [record](detect-motion-record-video-clips-cloud.md) video and associated inference metadata to your Video Analyzer cloud resource. Video Analyzer has a [Player Widget](player-widget.md) - an easy-to-embed widget allowing client apps to playback video and inference metadata.
azure-video-analyzer Event Based Video Recording Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/event-based-video-recording-concept.md
# Event-based video recording + Event-based video recording (EVR) refers to the process of recording video triggered by an event. The event in question could originate due to processing of the video signal itself (for example, when motion is detected) or could be from an independent source (for example, a door sensor signals that the door has been opened). A few use cases related to EVR are described in this article. The timestamps for the recordings are stored in UTC. Recorded video can be played back using the streaming capabilities of Video Analyzer. See [Recorded and live videos](viewing-videos-how-to.md) for more details.
azure-video-analyzer Manage Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/manage-retention-policy.md
# Manage retention policy with Video Analyzer + You can use Azure Video Analyzer to archive content for long durations of time, spanning from a few seconds to years' worth of video content from a single source. You can explicitly control the retention policy of your content, which ensures that older content gets periodically trimmed. You can apply different policies to different archives - for example, you can retain the most recent 3 days of recordings from a parking lot camera, and the most recent 30 days of recordings from the camera behind the cashier's desk. ## Retention period
azure-video-analyzer Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/managed-identity.md
# Managed identity + A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure Active Directory (Azure AD) and using it to obtain Azure AD tokens. When you create an Azure Video Analyzer account, you must associate an Azure storage account with it. If you use Video Analyzer to record the live video from a camera, that data is stored as blobs in a container in the storage account. You can optionally associate an IoT Hub with your Video Analyzer account; this is needed if you use the Video Analyzer edge module as a [transparent gateway](./cloud/use-remote-device-adapter.md). You must use a managed identity to grant the Video Analyzer account the appropriate access to the storage account and IoT Hub (if needed for your solution) as follows.
azure-video-analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/overview.md
Last updated 11/04/2021
# What is Azure Video Analyzer? (preview)+ Azure Video Analyzer provides you with a platform for building intelligent video applications that span the edge and the cloud. The platform consists of an IoT Edge module and an Azure service. It offers the capability to capture, record, and analyze live videos and publish the results, namely video and insights from video, to edge or cloud.
azure-video-analyzer Pipeline Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/pipeline-extension.md
# Pipeline extension + Azure Video Analyzer allows you to extend the pipeline processing capabilities through a pipeline extension node. Your analytics extension plugin can make use of traditional image-processing techniques or computer vision AI models. Pipeline extensions are enabled by including an extension processor node in the pipeline flow. The extension processor node relays video frames to the configured endpoint and acts as the interface to your extension. The connection can be made to a local or remote endpoint and it can be secured by authentication and TLS encryption, if necessary. Additionally, the pipeline extension processor node allows for optional scaling and encoding of the video frames before they are submitted to your custom extension. Video Analyzer supports the following pipeline extension processors:
azure-video-analyzer Pipeline Topologies List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/pipeline-topologies-list.md
Last updated 01/12/2022
# List of pipeline topologies + The following tables list validated sample Azure Video Analyzer [live pipeline topologies](terminology.md#pipeline-topology). These topologies can be further customized according to solution needs. The tables also provide * A short description,
azure-video-analyzer Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/pipeline.md
# Pipeline + Pipelines let you ingest, process, and publish video within Azure Video Analyzer edge and cloud. Pipeline topologies allow you to define how video should be ingested, processed, and published through a set of configurable nodes. Once defined, topologies can then be instantiated as individual pipelines that target specific cameras or source content, which are processed independently. Pipelines can be defined and instantiated at the edge for on-premises video processing, or in the cloud. The diagrams below provide graphical representations of such pipelines. > [!div class="mx-imgBorder"]
azure-video-analyzer Player Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/player-widget.md
# Use the Azure Video Analyzer player widget + In this tutorial, you learn how to use a player widget within your application. This code is an easy-to-embed widget that allows your users to play video and browse through the portions of a segmented video file. To do this, you'll be generating a static HTML page with the widget embedded, and all the pieces to make it work. In this tutorial, you will:
azure-video-analyzer Policy Definitions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/policy-definitions-security.md
Last updated 12/02/2021
# Azure Policy for Video Analyzer + Azure Video Analyzer provides several built-in [Azure Policy](../../governance/policy/overview.md) definitions to help enforce organizational standards and compliance at-scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost and management. Video Analyzer provides several common use case definitions for Azure Policy that are built-in to help you get started. This article explains how to assign policies for a Video Analyzer account using the Azure portal.
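As a sketch of the equivalent non-portal flow, built-in definitions can also be assigned with the Azure CLI. The definition name and scope below are placeholders, not values taken from this article.

```bash
# Placeholders: substitute a built-in Video Analyzer policy definition and your
# Video Analyzer account's resource ID.
az policy assignment create \
  --name "video-analyzer-policy-assignment" \
  --policy "<built-in-policy-definition-name-or-id>" \
  --scope "<video-analyzer-account-resource-id>"
```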
azure-video-analyzer Quotas Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/quotas-limitations.md
# Video Analyzer quotas and limitations + This article describes Azure Video Analyzer quotas and limitations. ## Quotas and limitations - Edge module
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/release-notes.md
This article provides you with information about:
<hr width=100%>
+## April 29, 2022
+
+We're retiring the Video Analyzer preview service; you're advised to **transition your applications off of Video Analyzer by 01 December 2022.** To minimize disruption to your workloads, transition your application from Video Analyzer per the suggestions described in this [guide](./transition-from-video-analyzer.md) before December 01, 2022. After December 1, 2022, your Video Analyzer account will no longer function.
+
+Starting May 2, 2022, you will not be able to create new Video Analyzer accounts.
+ ## November 2, 2021 This release is an update to the Video Analyzer edge module and the Video Analyzer service. The release tag for the edge module is:
azure-video-analyzer Sample Player Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/sample-player-widget.md
# Azure Video Analyzer player widget sample + This sample application shows the integration of Video Analyzer's player widget with video playback, zone drawing and video clip generation features. * Clone the [AVA C# sample repository](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp)
azure-video-analyzer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/sdk.md
Last updated 11/04/2021
# Azure Video Analyzer SDKs + Azure Video Analyzer includes two groups of SDKs. The management SDKs are used for managing the Azure resource and the client SDKs are used for interacting with edge modules. ## Management SDKs
azure-video-analyzer Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/spatial-analysis-operations.md
# Supported Spatial Analysis operations + Spatial Analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations will generate an output stream of JSON messages sent to Azure Video Analyzer. Video Analyzer implements the following Spatial Analysis operations:
azure-video-analyzer Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/terminology.md
# Azure Video Analyzer terminology + This article provides an overview of terminology related to [Azure Video Analyzer](overview.md). ## Pipeline topology
azure-video-analyzer Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/video-recording.md
# Record video for playback + In the context of a video management system for CCTV cameras, video recording refers to the process of capturing video from the cameras and recording it for subsequent viewing via mobile and browser apps. Video recording can be categorized into continuous video recording and event-based video recording. ## Continuous video recording
azure-video-analyzer Viewing Videos How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/viewing-videos-how-to.md
# Viewing of videos + ## Suggested pre-reading * [Video Analyzer video resource](terminology.md#video)
azure-video-analyzer Visual Studio Code Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/visual-studio-code-extension.md
# Visual Studio Code extension for Azure Video Analyzer + Azure Video Analyzer is a platform to make building video analysis programs easier, and the associated Visual Studio Code extension is a tool to make learning that platform easier. This article is a reference to the various pieces of functionality offered by the extension. It covers the basics of: * Pipeline topologies – creation, editing, deletion, viewing the JSON
azure-video-analyzer Visualize Ai Events Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/visualize-ai-events-power-bi.md
# Tutorial: Real-time visualization of AI inference events with Power BI + Azure Video Analyzer provides the capability to capture, record, and analyze live video along with publishing the results of video analysis in the form of AI inference events to the [IoT Edge Hub](../../iot-edge/iot-edge-runtime.md?view=iotedge-2020-11&preserve-view=true#iot-edge-hub). These AI inference events can then be routed to other destinations, including Visual Studio Code and Azure services such as Time Series Insights and Event Hubs. Dashboards are an insightful way to monitor your business and visualize all your important metrics at a glance. You can visualize AI inference events generated by Video Analyzer using [Microsoft Power BI](https://powerbi.microsoft.com/) via [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) to quickly gain insights and share dashboards with peers in your organization.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader.
-To stay up-to-date with the most recent Azure Video Indexer (former Video Indexer) developments, this article provides you with information about:
+To stay up-to-date with the most recent Azure Video Indexer (formerly Azure Video Analyzer for Media) developments, this article provides you with information about:
* [Important notice](#upcoming-critical-changes) about planned changes * The latest releases
To stay up-to-date with the most recent Azure Video Indexer (former Video Indexe
> [!Important] > This section describes a critical upcoming change for the `Upload-Video` API. - ### Upload-Video API In the past, the `Upload-Video` API was tolerant to calls to upload a video from a URL where an empty multipart form body was provided in the C# code, such as:
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
description: Learn about storage capacity, storage policies, fault tolerance, and storage integration in Azure VMware Solution private clouds. Previously updated : 08/31/2021 Last updated : 05/02/2022 # Azure VMware Solution storage concepts
Local storage in cluster hosts is used in the cluster-wide vSAN datastore. All d
## Storage policies and fault tolerance
-The default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or apply a new policy, the cluster grows with this configuration. To set the storage policy, see [Configure storage policy](configure-storage-policy.md).
+The default storage policy is set to RAID-1 (Mirroring) FTT-1, with Object Space Reservation set to Thin provisioning. Unless you adjust the storage policy or apply a new policy, the cluster grows with this configuration. This is the policy that will be applied to the workload VMs. To set a different storage policy, see [Configure storage policy](configure-storage-policy.md).
In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an operations perspective.
+> [!NOTE]
+> When you log on to the vSphere Client, you may notice a VM Storage Policy called **vSAN Default Storage Policy** with **Object Space Reservation** set to **Thick** provisioning. This isn't the default storage policy applied to the cluster; it exists for historical purposes and will eventually be modified to **Thin** provisioning.
-
-|Provisioning type |Description |
-|||
-|**Thick** | Reserved or pre-allocated storage space. It protects systems by allowing them to function even if the vSAN datastore is full because the space is already reserved. For example, if you create a 10-GB virtual disk with thick provisioning. In that case, the full amount of virtual disk storage capacity is pre-allocated on the physical storage of the virtual disk and consumes all the space allocated to it in the datastore. It won't allow other virtual machines (VMs) to share the space from the datastore. |
-|**Thin** | Consumes the space that it needs initially and grows to the data space demand used in the datastore. Outside the default (thick provision), you can create VMs with FTT-1 thin provisioning. For deduplication setup, use thin provisioning for your VM template. |
+> [!NOTE]
+> All of the software-defined data center (SDDC) management VMs (vCenter, NSX manager, NSX controller, NSX edges, and others) use the **Microsoft vSAN Management Storage Policy**, with **Object Space Reservation** set to **Thick** provisioning.
>[!TIP]
->If you're unsure if the cluster will grow to four or more, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend to deploy the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2.
->
->:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot showing the RAID-5 FTT-1 and RAID-6 FTT-2 options highlighed.":::
+>If you're unsure if the cluster will grow to four or more, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2. In reference to [SLA for Azure VMware Solution](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/), note that more than 6 hosts should be configured in the cluster to use an FTT-2 policy (RAID-1, or RAID-6). Also note that the storage policy is not automatically updated based on cluster size. Similarly, changing the default does not automatically update the running VM policies.
## Data-at-rest encryption
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
You'll run the `Set-LocationStoragePolicy` cmdlet to Modify vSAN based storage p
You'll run the `Set-ClusterDefaultStoragePolicy` cmdlet to specify default storage policy for a cluster,
-> [!NOTE]
-> Changing the storage policy of the default management cluster (Cluster-1) isn't allowed.
- 1. Select **Run command** > **Packages** > **Set-ClusterDefaultStoragePolicy**. 1. Provide the required values or change the default values, and then select **Run**.
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 11/09/2021 Last updated : 11/09/2021
This article will help you understand how private endpoints for Azure Backup wor
## Recommended and supported scenarios
-While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backup only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup. For more information, see the following table:
+While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM, MARS agent backup, and DPM only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup. For more information, see the following table:
| Scenarios | Recommendations | | | |
-| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to add to an allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
+| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent, DPM server. | Use of private endpoints is recommended to allow backup and restore without needing to add to an allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
| **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So, it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACL'ed. | | **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
->[!Note]
->Private endpoints aren't supported with DPM and MABS servers.
+>[!NOTE]
+> - Private endpoints are supported with only DPM server 2022 and later.
+> - Private endpoints are not yet supported with MABS.
+ ## Difference in network connections due to private endpoints
-As mentioned above, private endpoints are especially useful for backup of workloads (SQL, SAP HANA) in Azure VMs and MARS agent backups.
+As mentioned above, private endpoints are especially useful for backup of workloads (SQL, SAP HANA) in Azure VMs and MARS agent backups.
In all the scenarios (with or without private endpoints), both the workload extensions (for backup of SQL and SAP HANA instances running inside Azure VMs) and the MARS agent make connection calls to AAD (to FQDNs mentioned under sections 56 and 59 in [Microsoft 365 Common and Office Online](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online)). In addition to these connections, when the workload extension or MARS agent is installed for a Recovery Services vault _without private endpoints_, connectivity to the following domains is also required:
For the manual management of DNS records after the VM discovery for communicatio
>The private IP addresses for the FQDNs can be found in the private endpoint blade for the private endpoint created for the Recovery Services vault.
-The following diagram shows how the resolution works when using a private DNS zone to resolve these modified service FQDNs.
+The following diagram shows how the resolution works when using a private DNS zone to resolve these modified service FQDNs.
:::image type="content" source="./media/private-endpoints-overview/use-private-dns-zone-to-resolve-modified-service-fqdns-inline.png" alt-text="Diagram showing how the resolution works using a private DNS zone to resolve modified service FQDNs." lightbox="./media/private-endpoints-overview/use-private-dns-zone-to-resolve-modified-service-fqdns-expanded.png":::
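A quick way to confirm that resolution, from a VM in a VNet linked to the private DNS zone (or served by the custom DNS server), is a simple lookup. The FQDN below is a placeholder; copy an actual FQDN from the private endpoint's DNS configuration.

```bash
# Placeholder FQDN - copy an actual FQDN from the private endpoint's DNS configuration blade.
nslookup <storage-account-name>.privatelink.blob.core.windows.net

# An answer containing a private IP (for example, 10.x.x.x) indicates the private
# DNS zone or custom DNS server is resolving the record as intended.
```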
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md
Title: Create and use private endpoints for Azure Backup
-description: Understand the process to creating private endpoints for Azure Backup where using private endpoints helps maintain the security of your resources.
+description: Understand the process of creating private endpoints for Azure Backup, where using private endpoints helps maintain the security of your resources.
Last updated 11/09/2021
This section explains how to create a private endpoint for your vault.
![Create new private endpoint](./media/private-endpoints/new-private-endpoint.png) 1. Once in the **Create Private Endpoint** process, you'll be required to specify details for creating your private endpoint connection.
-
+ 1. **Basics**: Fill in the basic details for your private endpoints. The region should be the same as the vault and the resource being backed up. ![Fill in basic details](./media/private-endpoints/basics-tab.png)
See [Manual approval of private endpoints using the Azure Resource Manager Clien
As described previously, you need the required DNS records in your private DNS zones or servers in order to connect privately. You can either integrate your private endpoint directly with Azure private DNS zones or use your custom DNS servers to achieve this, based on your network preferences. This will need to be done for all three
-Additionally, if your DNS zone or server is present in a subscription that's different than the one containing the private endpoint, also see [Create DNS entries when the DNS server/DNS zone is present in another subscription](#create-dns-entries-when-the-dns-serverdns-zone-is-present-in-another-subscription).
+Additionally, if your DNS zone or server is present in a subscription that's different than the one containing the private endpoint, also see [Create DNS entries when the DNS server/DNS zone is present in another subscription](#create-dns-entries-when-the-dns-serverdns-zone-is-present-in-another-subscription).
### When integrating private endpoints with Azure private DNS zones
When using SQL Availability Groups (AG), you'll need to provision conditional fo
![New conditional forwarder](./media/private-endpoints/new-conditional-forwarder.png)
-### Backup and restore through MARS Agent
+### Backup and restore through MARS Agent and DPM server
+
+>[!NOTE]
+> - Private endpoints are supported with only DPM server 2022 and later.
+> - Private endpoints are not yet supported with MABS.
+ When using the MARS Agent to back up your on-premises resources, make sure your on-premises network (containing your resources to be backed up) is peered with the Azure VNet that contains a private endpoint for the vault, so you can use it. You can then continue to install the MARS agent and configure backup as detailed here. However, you must ensure all communication for backup happens through the peered network only.
Create the following JSON files and use the PowerShell command at the end of the
$vault = Get-AzRecoveryServicesVault ` -ResourceGroupName $vaultResourceGroupName ` -Name $vaultName
-
+ $privateEndpointConnection = New-AzPrivateLinkServiceConnection ` -Name $privateEndpointConnectionName ` -PrivateLinkServiceId $vault.ID `
$privateEndpoint = New-AzPrivateEndpoint `
To configure a proxy server for Azure VM or on-premises machine, follow these steps: 1. Add the following domains that need to be accessed from the proxy server.
-
+ | Service | Domain names | Port | | - | | - | | Azure Backup | *.backup.windowsazure.com | 443 | | Azure Storage | *.blob.core.windows.net <br><br> *.queue.core.windows.net <br><br> *.blob.storage.azure.net | 443 | | Azure active directory <br><br> Updated domain URLs mentioned under sections 56 and 59 in [Microsoft 365 Common and Office Online](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | *.msftidentity.com, *.msidentity.com, account.activedirectory.windowsazure.com, accounts.accesscontrol.windows.net, adminwebservice.microsoftonline.com, api.passwordreset.microsoftonline.com, autologon.microsoftazuread-sso.com, becws.microsoftonline.com, clientconfig.microsoftonline-p.net, companymanager.microsoftonline.com, device.login.microsoftonline.com, graph.microsoft.com, graph.windows.net, login.microsoft.com, login.microsoftonline.com, login.microsoftonline-p.com, login.windows.net, logincert.microsoftonline.com, loginex.microsoftonline.com, login-us.microsoftonline.com, nexus.microsoftonline-p.com, passwordreset.microsoftonline.com, provisioningapi.microsoftonline.com <br><br> 20.190.128.0/18, 40.126.0.0/18, 2603:1006:2000::/48, 2603:1007:200::/48, 2603:1016:1400::/48, 2603:1017::/48, 2603:1026:3000::/48, 2603:1027:1::/48, 2603:1036:3000::/48, 2603:1037:1::/48, 2603:1046:2000::/48, 2603:1047:1::/48, 2603:1056:2000::/48, 2603:1057:2::/48 <br><br> *.hip.live.com, *.microsoftonline.com, *.microsoftonline-p.com, *.msauth.net, *.msauthimages.net, *.msecnd.net, *.msftauth.net, *.msftauthimages.net, *.phonefactor.net, enterpriseregistration.windows.net, management.azure.com, policykeyservice.dc.ad.msft.net | As applicable. |
-1. Allow access to these domains in the proxy server and link private DNS zone ( `*.privatelink.<geo>.backup.windowsazure.com`, `*.privatelink.blob.core.windows.net`, `*.privatelink.queue.core.windows.net`) with the VNET where proxy server is created or uses a custom DNS server with the respective DNS entries. <br><br> The VNET where proxy server is running and the VNET where private endpoint NIC is created should be peered, which would allow the proxy server to redirect the requests to private IP.
+1. Allow access to these domains in the proxy server and link the private DNS zones ( `*.privatelink.<geo>.backup.windowsazure.com`, `*.privatelink.blob.core.windows.net`, `*.privatelink.queue.core.windows.net`) with the VNET where the proxy server is created, or use a custom DNS server with the respective DNS entries. <br><br> The VNET where the proxy server is running and the VNET where the private endpoint NIC is created should be peered, which allows the proxy server to redirect the requests to the private IP.
>[!NOTE] >In the above text, `<geo>` refers to the region code (for example *eus* and *ne* for East US and North Europe respectively). Refer to the following lists for regions codes:
The following diagram shows a setup (while using the Azure Private DNS zones) wi
### Create DNS entries when the DNS server/DNS zone is present in another subscription In this section, we'll discuss the cases where you're using a DNS zone that's present in a subscription, or a Resource Group, that's different from the one containing the private endpoint for the Recovery Services vault, such as a hub and spoke topology. Because the managed identity used for creating private endpoints (and the DNS entries) has permissions only on the Resource Group in which the private endpoints are created, the required DNS entries must be created separately. Use the following PowerShell scripts to create DNS entries.
-
+ >[!Note] >Refer to the entire process described below to achieve the required results. The process needs to be repeated twice - once during the first discovery (to create DNS entries required for communication storage accounts), and then once during the first backup (to create DNS entries required for back-end storage accounts).
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
The following table shows the operation progress that occurs when you disable HT
7. *How do cert renewals work with Bring Your Own Certificate?* To ensure a newer certificate is deployed to PoP infrastructure, upload your new certificate to Azure KeyVault. In your TLS settings on Azure CDN, choose the newest certificate version and select **Save**. Azure CDN will then propagate your updated certificate.
+
+ For **Azure CDN from Verizon** profiles, if you use the same Azure Key Vault certificate on several custom domains (e.g. a wildcard certificate), ensure you update all of your custom domains that use that same certificate to the newer certificate version.
8. *Do I need to re-enable HTTPS after the endpoint restarts?*
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
Azure CDN from Microsoft Service currently provides Raw logs. Raw logs provide i
| ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. | | ClientPort | The IP port of the client that made the request. | | HttpMethod | HTTP method used by the request. |
-| HttpStatusCode | The HTTP status code returned from the proxy. |
+| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
| HttpStatusDetails | Resulting status on the request. Meaning of this string value can be found at a Status reference table. | | HttpVersion | Type of the request or connection. | | POP | Short name of the edge where the request landed. |
For more information, see [Azure Monitor metrics](../azure-monitor/essentials/da
| ResponseSize | The number of bytes sent as responses from CDN edge to clients. |Endpoint </br> Client country. </br> Client region. </br> HTTP status. </br> HTTP status group. | | TotalLatency | The total time from the client request received by CDN **until the last response byte send from CDN to client**. |Endpoint </br> Client country. </br> Client region. </br> HTTP status. </br> HTTP status group. |
+> [!NOTE]
+> If a request to the origin times out, the value for HttpStatusCode is set to **0**.
**Bytes Hit Ratio = (egress from edge - egress from origin)/egress from edge** Scenarios excluded in bytes hit ratio calculation:
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
This guide assumes you have already <a href="https://portal.azure.com/#create/Mi
### Specify the OCR model
-By default, the service will use the latest GA model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
+By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
When using the Read operation, use the following values for the optional `model-version` parameter. |Value| Model used | |:--|:-|
-| 2022-01-30-preview | Latest preview model with additonal Hindi, Arabic and other Devanagari and Arabic script languages and enhancements to the previous preview.
-| 2021-09-30-preview | Previous preview model with addiitonal Russian and Cyrillic languages and enhancements to the GA previous model.
-| 2021-04-12 | most recent GA model |
-| Not provided | most recent GA model |
-| latest | most recent GA model|
+| Not provided | Latest GA model |
+| latest | Latest GA model|
+| [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text along with several enhancements on quality and performance |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic, and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
+| 2021-04-12 | 2021 GA model |
### Input language
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Containers enable you to run the Computer Vision APIs in your own environment. C
The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md). ## What's new
-For existing users of the Read containers, a new `3.2-model-2021-09-30-preview` version of the Read container is available with support for 122 languages and general performance and AI enhancements. Please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
+The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
## Read 3.2 container
-The Read 3.2 OCR container provides:
+The latest GA model of the Read 3.2 OCR container provides:
* New models for enhanced accuracy. * Support for multiple languages within the same document.
-* Support for a total of 73 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+* Support for a total of 164 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
* A single operation for both documents and images. * Support for larger documents and images. * Confidence scores.
grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detect
Container images for Read are available.
-| Container | Container Registry / Repository / Image Name |
-|--||
-| Read 3.2 model-2021-09-30-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-09-30-preview` |
-| Read 3.2 | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2` |
-| Read 2.0-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |
+| Container | Container Registry / Repository / Image Name | Tags |
+|--||--|
+| Read 3.2 GA | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30` | latest, 3.2, 3.2-model-2022-04-30 |
+| Read 2.0-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |2.0.019300020-amd64-preview |
Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image. ### Docker pull for the Read OCR container
-For the latest preview:
+# [Version 3.2 GA](#tab/version-3-2)
```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-09-30-preview
+docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
```
-# [Version 3.2](#tab/version-3-2)
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
-```
-
-# [Version 2.0-preview](#tab/version-2)
+# [Version 2.0 preview](#tab/version-2)
```bash docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-For the latest preview, replace 3.2 path with:
-
-```
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-09-30-preview
-```
- # [Version 3.2](#tab/version-3-2) ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
ApiKey={API_KEY}
This command:
-* Runs the Read OCR container from the container image.
-* Allocates 8 CPU core and 18 gigabytes (GB) of memory.
+* Runs the Read OCR latest GA container from the container image.
+* Allocates 8 CPU core and 16 gigabytes (GB) of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer. You can alternatively run the container using environment variables: ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
--env Eula=accept \ --env Billing={ENDPOINT_URI} \ --env ApiKey={API_KEY} \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
``` # [Version 2.0-preview](#tab/version-2)
To find your connection string:
The container provides REST-based query prediction endpoint APIs.
-For the latest preview:
-
-Use the same Swagger path as 3.2 but a different port if you have already deployed 3.2 at the 5000 port.
- Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/`. ### Asynchronous Read
-For the latest preview, everything is the same as 3.2 except for the additional `"modelVersion": "2021-09-30-preview"`.
- # [Version 3.2](#tab/version-3-2) You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier for the HTTP GET request.
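A hedged curl sketch of that flow against a locally running container (port 5000 assumed, as in the docker run examples above; the image URL and operationId are placeholders):

```bash
# Submit an image for asynchronous analysis. The Operation-Location header in the
# response contains the operationId to poll.
curl -i -X POST "http://localhost:5000/vision/v3.2/read/analyze" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/sample-receipt.png"}'

# Retrieve the result by operationId (placeholder shown).
curl "http://localhost:5000/vision/v3.2/read/operations/{operationId}"
```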
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
The following Docker examples are for the Read OCR container.
### Basic example ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
ApiKey={API_KEY}
### Logging example ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
Logging:Console:LogLevel:Default=Information
### Basic example ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \ Eula=accept \ Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
### Logging example ```bash
-docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \ Eula=accept \ Billing={ENDPOINT_URI} \
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/language-support.md
Previously updated : 02/04/2022 Last updated : 05/02/2022
Some capabilities of Computer Vision support multiple languages; any capabilitie
## Optical Character Recognition (OCR)
-The Computer Vision OCR APIs support many languages. Read can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter. See the [Optical Character Recognition (OCR) overview](overview-ocr.md) for more information.
+The Computer Vision [Read API](./overview-ocr.md#read-api) supports many languages. The `Read` API can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter.
> [!NOTE] > **Language code optional** >
-> Read OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
+> `Read` OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-See [How to specify the model version](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
+See [How to specify the `Read` model](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
-### Handwritten languages
+### Handwritten text
-The following table lists the languages supported by Read for handwritten text.
+The following table lists the OCR supported languages for handwritten text by the most recent `Read` GA model.
|Language| Language code (optional) | Language| Language code (optional) | |:--|:-:|:--|:-:|
-|English|`en`|Japanese (preview) |`ja`|
-|Chinese Simplified (preview) |`zh-Hans`|Korean (preview)|`ko`|
-|French (preview) |`fr`|Portuguese (preview)|`pt`|
-|German (preview) |`de`|Spanish (preview) |`es`|
-|Italian (preview) |`it`|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean|`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
-### Print languages (preview)
+### Print text
-This section lists the supported languages in the latest preview.
+The following table lists the OCR supported languages for print text by the most recent `Read` GA model.
|Language| Code (optional) |Language| Code (optional) | |:--|:-:|:--|:-:|
-|Angika (Devanagiri) | `anp`|Lakota | `lkt`
-|Arabic | `ar`|Latin | `la`
-|Awadhi-Hindi (Devanagiri) | `awa`|Lithuanian | `lt`
-|Azerbaijani (Latin) | `az`|Lower Sorbian | `dsb`
-|Bagheli | `bfy`|Lule Sami | `smj`
-|Belarusian (Cyrillic) | `be`, `be-cyrl`|Mahasu Pahari (Devanagiri) | `bfz`
-|Belarusian (Latin) | `be`, `be-latn`|Maltese | `mt`
-|Bhojpuri-Hindi (Devanagiri) | `bho`|Malto (Devanagiri) | `kmj`
-|Bodo (Devanagiri) | `brx`|Maori | `mi`
-|Bosnian (Latin) | `bs`|Marathi | `mr`
-|Brajbha | `bra`|Mongolian (Cyrillic) | `mn`
-|Bulgarian | `bg`|Montenegrin (Cyrillic) | `cnr-cyrl`
-|Bundeli | `bns`|Montenegrin (Latin) | `cnr-latn`
-|Buryat (Cyrillic) | `bua`|Nepali | `ne`
-|Chamling | `rab`|Niuean | `niu`
-|Chhattisgarhi (Devanagiri)| `hne`|Nogay | `nog`
-|Croatian | `hr`|Northern Sami (Latin) | `sme`
-|Dari | `prs`|Ossetic | `os`
-|Dhimal (Devanagiri) | `dhi`|Pashto | `ps`
-|Dogri (Devanagiri) | `doi`|Persian | `fa`
-|Erzya (Cyrillic) | `myv`|Punjabi (Arabic) | `pa`
-|Faroese | `fo`|Ripuarian | `ksh`
-|Gagauz (Latin) | `gag`|Romanian | `ro`
-|Gondi (Devanagiri) | `gon`|Russian | `ru`
-|Gurung (Devanagiri) | `gvr`|Sadri (Devanagiri) | `sck`
-|Halbi (Devanagiri) | `hlb`|Samoan (Latin) | `sm`
-|Haryanvi | `bgc`|Sanskrit (Devanagari) | `sa`
-|Hawaiian | `haw`|Santali(Devanagiri) | `sat`
-|Hindi | `hi`|Serbian (Latin) | `sr`, `sr-latn`
-|Ho(Devanagiri) | `hoc`|Sherpa (Devanagiri) | `xsr`
-|Icelandic | `is`|Sirmauri (Devanagiri) | `srx`
-|Inari Sami | `smn`|Skolt Sami | `sms`
-|Jaunsari (Devanagiri) | `Jns`|Slovak | `sk`
-|Kangri (Devanagiri) | `xnr`|Somali (Arabic) | `so`
-|Karachay-Balkar | `krc`|Southern Sami | `sma`
-|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Tajik (Cyrillic) | `tg`
-|Kazakh (Cyrillic) | `kk-cyrl`|Thangmi | `thf`
-|Kazakh (Latin) | `kk-latn`|Tongan | `to`
-|Khaling | `klr`|Turkmen (Latin) | `tk`
-|Korku | `kfq`|Tuvan | `tyv`
-|Koryak | `kpy`|Urdu | `ur`
-|Kosraean | `kos`|Uyghur (Arabic) | `ug`
-|Kumyk (Cyrillic) | `kum`|Uzbek (Arabic) | `uz-arab`
-|Kurdish (Arabic) | `ku-arab`|Uzbek (Cyrillic) | `uz-cyrl`
-|Kurukh (Devanagiri) | `kru`|Welsh | `cy`
-|Kyrgyz (Cyrillic) | `ky`
-
-### Print languages (GA)
-
-This section lists the supported languages in the latest GA version.
-
-|Language| Code (optional) |Language| Code (optional) |
-|:--|:-:|:--|:-:|
-|Afrikaans|`af`|Japanese | `ja` |
-|Albanian |`sq`|Javanese | `jv` |
-|Asturian |`ast`|K'iche' | `quc` |
-|Basque |`eu`|Kabuverdianu | `kea` |
-|Bislama |`bi`|Kachin (Latin) | `kac` |
-|Breton |`br`|Kara-Kalpak (Latin) | `kaa` |
-|Catalan |`ca`|Kashubian | `csb` |
-|Cebuano |`ceb`|Khasi | `kha` |
-|Chamorro |`ch`|Korean | `ko` |
-|Chinese Simplified | `zh-Hans`|Kurdish (Latin) | `ku-latn`
-|Chinese Traditional | `zh-Hant`|Luxembourgish | `lb` |
-|Cornish |`kw`|Malay (Latin) | `ms` |
-|Corsican |`co`|Manx | `gv` |
-|Crimean Tatar (Latin)|`crh`|Neapolitan | `nap` |
-|Czech | `cs` |Norwegian | `no` |
-|Danish | `da` |Occitan | `oc` |
-|Dutch | `nl` |Polish | `pl` |
-|English | `en` |Portuguese | `pt` |
-|Estonian |`et`|Romansh | `rm` |
-|Fijian |`fj`|Scots | `sco` |
-|Filipino |`fil`|Scottish Gaelic | `gd` |
-|Finnish | `fi` |Slovenian | `sl` |
-|French | `fr` |Spanish | `es` |
-|Friulian | `fur` |Swahili (Latin) | `sw` |
-|Galician | `gl` |Swedish | `sv` |
-|German | `de` |Tatar (Latin) | `tt` |
-|Gilbertese | `gil` |Tetum | `tet` |
-|Greenlandic | `kl` |Turkish | `tr` |
-|Haitian Creole | `ht` |Upper Sorbian | `hsb` |
-|Hani | `hni` |Uzbek (Latin) | `uz` |
-|Hmong Daw (Latin)| `mww` |Volap├╝k | `vo` |
-|Hungarian | `hu` |Walser | `wae` |
-|Indonesian | `id` |Western Frisian | `fy` |
-|Interlingua | `ia` |Yucatec Maya | `yua` |
-|Inuktitut (Latin) | `iu` |Zhuang | `za` |
-|Irish | `ga` |Zulu | `zu` |
-|Italian | `it` |
+|Afrikaans|`af`|Khasi | `kha` |
+|Albanian |`sq`|K'iche' | `quc` |
+|Angika (Devanagiri) | `anp`| Korean | `ko` |
+|Arabic | `ar` | Korku | `kfq`|
+|Asturian |`ast`| Koryak | `kpy`|
+|Awadhi-Hindi (Devanagiri) | `awa`| Kosraean | `kos`|
+|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
+|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
+|Basque |`eu`| Kurdish (Latin) | `ku-latn`
+|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagiri) | `kru`|
+|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
+|Bhojpuri-Hindi (Devanagiri) | `bho`| Lakota | `lkt` |
+|Bislama |`bi`| Latin | `la` |
+|Bodo (Devanagiri) | `brx`| Lithuanian | `lt` |
+|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
+|Brajbha | `bra`|Lule Sami | `smj`|
+|Breton |`br`|Luxembourgish | `lb` |
+|Bulgarian | `bg`|Mahasu Pahari (Devanagiri) | `bfz`|
+|Bundeli | `bns`|Malay (Latin) | `ms` |
+|Buryat (Cyrillic) | `bua`|Maltese | `mt`
+|Catalan |`ca`|Malto (Devanagiri) | `kmj`
+|Cebuano |`ceb`|Manx | `gv` |
+|Chamling | `rab`|Maori | `mi`|
+|Chamorro |`ch`|Marathi | `mr`|
+|Chhattisgarhi (Devanagiri)| `hne`| Mongolian (Cyrillic) | `mn`|
+|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
+|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
+|Cornish |`kw`|Neapolitan | `nap` |
+|Corsican |`co`|Nepali | `ne`|
+|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
+|Croatian | `hr`|Nogay | `nog`
+|Czech | `cs` |Northern Sami (Latin) | `sme`|
+|Danish | `da` |Norwegian | `no` |
+|Dari | `prs`|Occitan | `oc` |
+|Dhimal (Devanagiri) | `dhi`| Ossetic | `os`|
+|Dogri (Devanagiri) | `doi`|Pashto | `ps`|
+|Dutch | `nl` |Persian | `fa`|
+|English | `en` |Polish | `pl` |
+|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
+|Estonian |`et`|Punjabi (Arabic) | `pa`|
+|Faroese | `fo`|Ripuarian | `ksh`|
+|Fijian |`fj`|Romanian | `ro` |
+|Filipino |`fil`|Romansh | `rm` |
+|Finnish | `fi` | Russian | `ru` |
+|French | `fr` |Sadri (Devanagiri) | `sck` |
+|Friulian | `fur` | Samoan (Latin) | `sm`
+|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
+|Galician | `gl` |Santali (Devanagiri) | `sat` |
+|German | `de` | Scots | `sco` |
+|Gilbertese | `gil` | Scottish Gaelic | `gd` |
+|Gondi (Devanagiri) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
+|Greenlandic | `kl` | Sherpa (Devanagiri) | `xsr` |
+|Gurung (Devanagiri) | `gvr`| Sirmauri (Devanagiri) | `srx`|
+|Haitian Creole | `ht` | Skolt Sami | `sms` |
+|Halbi (Devanagiri) | `hlb`| Slovak | `sk`|
+|Hani | `hni` | Slovenian | `sl` |
+|Haryanvi | `bgc`|Somali (Arabic) | `so`|
+|Hawaiian | `haw`|Southern Sami | `sma`
+|Hindi | `hi`|Spanish | `es` |
+|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
+|Ho (Devanagiri) | `hoc`|Swedish | `sv` |
+|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
+|Icelandic | `is`| Tatar (Latin) | `tt` |
+|Inari Sami | `smn`|Tetum | `tet` |
+|Indonesian | `id` | Thangmi | `thf` |
+|Interlingua | `ia` |Tongan | `to`|
+|Inuktitut (Latin) | `iu` | Turkish | `tr` |
+|Irish | `ga` |Turkmen (Latin) | `tk`|
+|Italian | `it` |Tuvan | `tyv`|
+|Japanese | `ja` |Upper Sorbian | `hsb` |
+|Jaunsari (Devanagiri) | `Jns`|Urdu | `ur`|
+|Javanese | `jv` |Uyghur (Arabic) | `ug`|
+|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
+|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
+|Kangri (Devanagiri) | `xnr`|Uzbek (Latin) | `uz` |
+|Karachay-Balkar | `krc`|Volapük | `vo` |
+|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
+|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
+|Kashubian | `csb` |Western Frisian | `fy` |
+|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
+|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
+|Khaling | `klr`|Zulu | `zu` |
## Image analysis
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
The **Read** call takes images and documents as its input. They have the followi
* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF * For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
-* The file size must be less than 50 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
+* The file size must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
* The minimum height of the text to be extracted is 12 pixels for a 1024X768 image. This corresponds to about 8 font point text at 150 DPI. ## Supported languages
-The Read API latest preview supports 164 languages for print text and 9 languages for handwritten text.
+The Read API latest generally available (GA) model supports 164 languages for print text and 9 languages for handwritten text.
OCR for print text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts.
-OCR for handwritten text includes support for English, and previews of Chinese, French, German, Italian, Japanese, Korean, Portuguese, Spanish languages.
+OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish.
See [How to specify the model version](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
The Read 3.x cloud APIs are the preferred option for most customers because of e
For on-premise deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements. > [!WARNING]
-> The Computer Vision 2.0 RecognizeText operations are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
+> The Computer Vision [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) and [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) operations are no longer maintained, and are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
## Data privacy and security
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
If you're using version 2 of the Computer Vision Read OCR container, Use this ar
The Read v3.2 container uses version 3 of the Computer Vision API and has the following endpoints:
-* `/vision/v3.2-preview.1/read/analyzeResults/{operationId}`
-* `/vision/v3.2-preview.1/read/analyze`
-* `/vision/v3.2-preview.1/read/syncAnalyze`
+* `/vision/v3.2/read/analyzeResults/{operationId}`
+* `/vision/v3.2/read/analyze`
+* `/vision/v3.2/read/syncAnalyze`
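Since synchronous analysis is container-only, here is a minimal sketch of a call against the container's sync endpoint (localhost and port 5000 assumed; the image URL is a placeholder):

```bash
# Synchronous call: the OCR result is returned directly in the response body,
# with no operationId to poll. Assumes the container is listening on port 5000.
curl -X POST "http://localhost:5000/vision/v3.2/read/syncAnalyze" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/sample-page.png"}'
```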
See the [Computer Vision v3 REST API migration guide](./upgrade-api-versions.md) for detailed information on updating your applications to use version 3 of cloud-based Read API. This information applies to the container as well. Sync operations are only supported in containers. ## Memory requirements
-The requirements and recommendations are based on benchmarks with a single request per second, using an 8-MB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocations of resources for each Read OCR container.
+The requirements and recommendations are based on benchmarks with a single request per second, using a 523-KB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocations of resources for each Read OCR container.
|Container |Minimum | Recommended | ||||
-|Read 3.2-preview | 8 cores, 16-GB memory | 8 cores, 24-GB memory |
+|Read 3.2 **2022-04-30** | 4 cores, 8-GB memory | 8 cores, 16-GB memory |
Each core must be at least 2.6 gigahertz (GHz) or faster.
Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set
* Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Computer Vision functionality.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
+* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Previously updated : 02/05/2022 Last updated : 05/02/2022
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## May 2022
+
+### OCR (Read) API model is generally available (GA)
+
+The latest model of Computer Vision's [OCR (Read) API](overview-ocr.md), with [164 supported languages](language-support.md), is now generally available as a cloud service and container.
+
+* OCR support for print text expands to 164 languages, including Russian, Arabic, Hindi, and other languages using Cyrillic, Arabic, and Devanagari scripts.
+* OCR support for handwritten text expands to 9 languages with English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish.
+* Enhanced support for single characters, handwritten dates, amounts, names, and other entities commonly found in receipts and invoices.
+* Improved processing of digital PDF documents.
+* Input file size limit increased 10x to 500 MB.
+* Performance and latency improvements.
+* Available as [cloud service](overview-ocr.md#read-api) and [Docker container](computer-vision-how-to-install-containers.md).
+
+See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
+
+> [!div class="nextstepaction"]
+> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+ ## February 2022 ### OCR (Read) API Public Preview supports 164 languages
cognitive-services Profanity Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/profanity-filtering.md
Normally the Translator service retains profanity that is present in the source
If you want to avoid seeing profanity in the translation, even if profanity is present in the source text, use the profanity filtering option available in the Translate() method. This option allows you to choose whether you want the profanity deleted, marked with appropriate tags, or no action taken.
-The Translate method takes the "options" parameter, which contains the new element "ProfanityAction". The accepted values of ProfanityAction are "NoAction", "Marked" and "Deleted."
-
-## Accepted values of ProfanityAction and examples
-|ProfanityAction value | Action | Example: Source - Japanese | Example: Target - English|
-| :|:|:|:|
-| NoAction | Default. Same as not setting the option. Profanity passes from source to target. | 彼は変態です。 | He's a jerk. |
-| Marked | Profane words are surrounded by XML tags \<profanity> ... \</profanity>. | 彼は変態です。 | He's a \<profanity>jerk\</profanity>. |
-| Deleted | Profane words are removed from the output without replacement. | 彼は。 | He's a\. |
+The Translate() method takes the "options" parameter, which contains the new element "ProfanityAction." The accepted values of ProfanityAction are "NoAction," "Marked," and "Deleted." For the value of "Marked," an additional, optional element "ProfanityMarker" can take the values "Asterisk" (default) and "Tag."
++
+## Accepted values and examples of ProfanityMarker and ProfanityAction
+| ProfanityAction value | ProfanityMarker value | Action | Example: Source - Spanish| Example: Target - English|
+|:--|:--|:--|:--|:--|
+| NoAction| | Default. Same as not setting the option. Profanity passes from source to target. | Que coche de \<insert-profane-word> | What a \<insert-profane-word> car |
+| Marked | Asterisk | Profane words are replaced by asterisks (default). | Que coche de \<insert-profane-word> | What a *** car |
+| Marked | Tag | Profane words are surrounded by XML tags \<profanity\>...\</profanity>. | Que coche de \<insert-profane-word> | What a \<profanity> \<insert-profane-word> \</profanity> car |
+| Deleted | | Profane words are removed from the output without replacement. | Que coche de \<insert-profane-word> | What a car |
+
+In the above examples, **\<insert-profane-word>** is a placeholder for profane words.
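
As a reference sketch, the following shows one way these options might be passed on a Translator v3 `translate` REST request; the placeholder key, region, and source text are assumptions you would replace with your own values, and `profanityMarker` only applies when `profanityAction` is set to `Marked`.

```http
POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=es&to=en&profanityAction=Marked&profanityMarker=Tag
Ocp-Apim-Subscription-Key: {your-translator-resource-key}
Ocp-Apim-Subscription-Region: {your-resource-region}
Content-Type: application/json

[
    { "Text": "Que coche de <insert-profane-word>" }
]
```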
## Next steps > [!div class="nextstepaction"]
cosmos-db Apache Cassandra Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/apache-cassandra-consistency-mapping.md
Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely def
Apache Cassandra is a multi-master system by default, and does not provide an out-of-the-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides a turnkey ability to have either single-region or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of choosing a single write region across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
-With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [automatic failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
+With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
## Mapping consistency levels
cosmos-db Common Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-powershell-samples.md
+
+ Title: Azure PowerShell samples common to all Azure Cosmos DB APIs
+description: Azure PowerShell Samples common to all Azure Cosmos DB APIs
++ Last updated : 05/02/2022+++++
+# Azure PowerShell samples for Azure Cosmos DB API
++
+The following table includes links to sample Azure PowerShell scripts that apply to all Cosmos DB APIs. For API specific samples, see [API specific samples](#api-specific-samples). Common samples are the same across all APIs.
++
+These samples require Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+## Common API Samples
+
+These samples use a SQL (Core) API account. To use these samples for other APIs, copy the related properties and apply to your API specific scripts.
+
+|Task | Description |
+|||
+| [Account keys or connection strings](scripts/powershell/common/keys-connection-strings.md)| Get primary and secondary keys and connection strings, or regenerate an account key.|
+| [Change failover priority or trigger failover](scripts/powershell/common/failover-priority-update.md)| Change the regional failover priority or trigger a manual failover.|
+| [Create an account with IP Firewall](scripts/powershell/common/firewall-create.md)| Create an Azure Cosmos DB account with IP Firewall enabled.|
+| [Update account](scripts/powershell/common/account-update.md) | Update an account's default consistency level.|
+| [Update an account's regions](scripts/powershell/common/update-region.md) | Add regions to an account or change regional failover order.|
+
+## API specific samples
+
+- [Cassandra API samples](cassandr)
+- [Gremlin API samples](graph/powershell-samples.md)
+- [MongoDB API samples](mongodb/powershell-samples.md)
+- [SQL API samples](sql/powershell-samples.md)
+- [Table API samples](table/powershell-samples.md)
+
+## Next steps
+
+- [Azure PowerShell documentation](/powershell)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can [provision and manage your Azure Cosmos account](how-to-manage-database-
| Resource | Limit | | | |
-| Maximum accounts per subscription | 50 by default. <sup>1</sup> |
+| Maximum number of accounts per subscription | 50 by default. <sup>1</sup> |
| Maximum number of regional failovers | 1/hour by default. <sup>1</sup> <sup>2</sup> | <sup>1</sup> You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md).
Depending on which API you use, an Azure Cosmos container can represent either a
| Resource | Limit | | | | | Maximum length of database or container name | 255 |
-| Maximum stored procedures per container | 100 <sup>1</sup> |
-| Maximum UDFs per container | 50 <sup>1</sup> |
+| Maximum number of stored procedures per container | 100 <sup>1</sup> |
+| Maximum number of UDFs per container | 50 <sup>1</sup> |
| Maximum number of paths in indexing policy| 100 <sup>1</sup> | | Maximum number of unique keys per container|10 <sup>1</sup> | | Maximum number of paths per unique key constraint|16 <sup>1</sup> |
cosmos-db Use Regional Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/use-regional-endpoints.md
foreach (string gremlinAccountRegion in gremlinAccountRegions)
## SDK endpoint discovery
-Application can use [Azure Cosmos DB SDK](../sql-api-sdk-dotnet.md) to discover read and write locations for Graph account. These locations can change at any time through manual reconfiguration on the server side or automatic failover.
+Applications can use the [Azure Cosmos DB SDK](../sql-api-sdk-dotnet.md) to discover read and write locations for a Graph account. These locations can change at any time through manual reconfiguration on the server side or service-managed failover.
TinkerPop Gremlin SDK doesn't have an API to discover Cosmos DB Graph database account regions. Applications that need runtime endpoint discovery need to host 2 separate SDKs in the process space.
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
The following table summarizes the high availability capability of various accou
* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable service-managed failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user inputs.
-* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable automatic-failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore automatic-failover for the account.
+* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable service-managed failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore service-managed failover for the account.
> [!IMPORTANT] > Do not invoke manual failover during a Cosmos DB outage on either the source or destination region, as it requires connectivity between regions to maintain data consistency, and it will not succeed.
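
As an illustration of the manual failover step, changing the region that holds failover priority 0 is expressed through the account's `failoverPriorityChange` management operation. The following is a hedged sketch of that REST call; the subscription, resource group, account, region names, API version, and token are placeholders, not values from this article.

```http
POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}/failoverPriorityChange?api-version={api-version}
Authorization: Bearer {access-token}
Content-Type: application/json

{
  "failoverPolicies": [
    { "locationName": "{new-write-region}", "failoverPriority": 0 },
    { "locationName": "{read-region}", "failoverPriority": 1 }
  ]
}
```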
cosmos-db Monitor Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db.md
Previously updated : 12/01/2020 Last updated : 05/03/2022
When you have critical applications and business processes relying on Azure reso
You can monitor your data with client-side and server-side metrics. When using server-side metrics, you can monitor the data stored in Azure Cosmos DB with the following options:
-* **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of 7 days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-from-azure-cosmos-db) section of this article.
+* **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-data) section of this article.
-* **Monitor with metrics in Azure monitor:** You can monitor the metrics of your Azure Cosmos account and create dashboards from the Azure Monitor. Azure Monitor collects the Azure Cosmos DB metrics by default, you donΓÇÖt have configure anything explicitly. These metrics are collected with one-minute granularity, the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics such as container name are case-insensitive. So you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyze-metric-data) section of this article.
+* **Monitor with metrics in Azure Monitor:** You can monitor the metrics of your Azure Cosmos account and create dashboards from Azure Monitor. Azure Monitor collects Azure Cosmos DB metrics by default; you don't need to configure anything explicitly. These metrics are collected with one-minute granularity; the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics available from the previous options are also available here. The dimension values for the metrics, such as container name, are case-insensitive, so use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
-* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos account and create dashboards from the Azure Monitor. Telemetry such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container is changes, the properties of a Cosmos account are changed these events are captures within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyze-log-data) section of this article.
+* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos account and create dashboards from Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container changes or the properties of a Cosmos account are changed, these events are captured in the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
-* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-cosmosdb-programmatically) section of this article.
+* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
The following image shows different options available to monitor Azure Cosmos DB account through Azure portal:
When using Azure Cosmos DB, at the client-side you can collect the details for r
## Monitor overview
-The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful, however only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
+The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful; however, only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection with some configuration.
## What is Azure Monitor?
-Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md) which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
-If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) which describes the following concepts:
+If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
* What is Azure Monitor? * Costs associated with monitoring
The following sections build on this article by describing the specific data gat
## Cosmos DB insights
-Cosmos DB insights is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and leverage the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md) article.
+Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md) article.
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-## <a id="monitoring-from-azure-cosmos-db"></a> Monitoring data
+## Monitoring data
-Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). See [Azure Cosmos DB monitoring data reference](monitor-cosmos-db-reference.md) for a detailed reference of the logs and metrics created by Azure Cosmos DB.
+Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). See [Azure Cosmos DB monitoring data reference](monitor-cosmos-db-reference.md) for a detailed reference of the logs and metrics created by Azure Cosmos DB.
-The **Overview** page in the Azure portal for each Azure Cosmos database includes a brief view of the database usage including its request and hourly billing usage. This is useful information but only a small amount of the monitoring data available. Some of this data is collected automatically and available for analysis as soon as you create the database while you can enable additional data collection with some configuration.
+The **Overview** page in the Azure portal for each Azure Cosmos database includes a brief view of the database usage, including its request and hourly billing usage. This is useful information, but only a small amount of the monitoring data is available here. Some of this data is collected automatically and is available for analysis as soon as you create the database, and you can enable more data collection with some configuration.
:::image type="content" source="media/monitor-cosmos-db/overview-page.png" alt-text="Overview page":::
The **Overview** page in the Azure portal for each Azure Cosmos database include
Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
See [Create diagnostic setting to collect platform logs and metrics in Azure](cosmosdb-monitor-resource-logs.md) for the detailed process for creating a diagnostic setting using the Azure portal and some diagnostic query examples. When you create a diagnostic setting, you specify which categories of logs to collect. The metrics and logs you can collect are discussed in the following sections.
-## <a id="analyze-metric-data"></a> Analyzing metrics
+## Analyzing metrics
-Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. You can also checkout how to monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
+Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. You can also check out how to monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
For a list of the platform metrics collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference metrics](monitor-cosmos-db-reference.md#metrics) article.
For reference, you can see a list of [all resource metrics supported in Azure Mo
### Add filters to metrics
-You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **StatusCode**. To filter the metrics, select **Add filter** and choose the required property such as **OperationType** and select a value such as **Query**. The graph then displays the request units consumed for the query operation for the selected period. The operations executed via Stored procedure are not logged so they are not available under the OperationType metric.
+You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **StatusCode**. To filter the metrics, select **Add filter** and choose the required property such as **OperationType** and select a value such as **Query**. The graph then displays the request units consumed for the query operation for the selected period. The operations executed via Stored procedure aren't logged so they aren't available under the OperationType metric.
:::image type="content" source="./media/monitor-cosmos-db/add-metrics-filter.png" alt-text="Add a filter to select the metric granularity":::
You can group metrics by using the **Apply splitting** option. For example, you
:::image type="content" source="./media/monitor-cosmos-db/apply-metrics-splitting.png" alt-text="Add apply splitting filter":::
-## <a id="analyze-log-data"></a> Analyzing logs
+## Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). For a list of the types of resource logs collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference](monitor-cosmos-db-reference.md#resource-logs).
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform login Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
Azure Cosmos DB stores data in the following tables.
### Sample Kusto queries
+Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](/azure/cosmos-db/audit-control-plane-logs#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](/azure/azure-monitor/essentials/resource-logs#azure-diagnostics-mode) or [resource-specific tables](/azure/azure-monitor/essentials/resource-logs#resource-specific).
+
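If you script this configuration instead of using the portal, the table choice is controlled by the `logAnalyticsDestinationType` property on the diagnostic setting. The following is a minimal sketch of the diagnostic settings REST call; the resource IDs, setting name, log categories, and API version shown are illustrative placeholders. Setting `logAnalyticsDestinationType` to `Dedicated` routes data to the resource-specific tables; omitting the property keeps the legacy **AzureDiagnostics** behavior.

```http
PUT https://management.azure.com/{cosmos-db-account-resource-id}/providers/Microsoft.Insights/diagnosticSettings/{setting-name}?api-version={api-version}
Content-Type: application/json

{
  "properties": {
    "workspaceId": "{log-analytics-workspace-resource-id}",
    "logAnalyticsDestinationType": "Dedicated",
    "logs": [
      { "category": "ControlPlaneRequests", "enabled": true },
      { "category": "DataPlaneRequests", "enabled": true }
    ]
  }
}
```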
+When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource.
+ > [!IMPORTANT]
-> When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+> If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos resources. These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
+Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos resources. The exact text of the queries will depend on the [collection mode](/azure/azure-monitor/essentials/resource-logs#select-the-collection-mode) you selected when you enabled diagnostics logs.
-* To query for all of the diagnostic logs from Azure Cosmos DB for a specified time period:
+#### [AzureDiagnostics table (legacy)](#tab/azure-diagnostics)
- ```Kusto
- AzureDiagnostics
- | where ResourceProvider=="Microsoft.DocumentDb" and Category=="DataPlaneRequests"
+* To query for all control-plane logs from Azure Cosmos DB:
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="ControlPlaneRequests"
```
-* To query for all operations, grouped by resource:
+* To query for all data-plane logs from Azure Cosmos DB:
- ```Kusto
- AzureActivity
- | where ResourceProvider=="Microsoft.DocumentDb" and Category=="DataPlaneRequests"
- | summarize count() by Resource
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ ```
+* To query for a filtered list of data-plane logs, specific to a single resource:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ | where Resource=="<account-name>"
```
-* To query for all user activity, grouped by resource:
+ > [!IMPORTANT]
+ > In the **AzureDiagnostics** table, many fields are case-sensitive and uppercase, including but not limited to *ResourceId*, *ResourceGroup*, *ResourceProvider*, and *Resource*.
+
+* To get a count of data-plane logs, grouped by resource:
- ```Kusto
- AzureActivity
- | where Caller == "test@company.com" and ResourceProvider=="Microsoft.DocumentDb" and Category=="DataPlaneRequests"
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
| summarize count() by Resource ```
+* To generate a chart for data-plane logs, grouped by the type of operation:
+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+ | where Category=="DataPlaneRequests"
+ | summarize count() by OperationName
+ | render columnchart
+ ```
+
+#### [Resource-specific table](#tab/resource-specific-diagnostics)
+
+* To query for all control-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ CDBControlPlaneRequests
+ ```
+
+* To query for all data-plane logs from Azure Cosmos DB:
+
+ ```kusto
+ CDBDataPlaneRequests
+ ```
+
+* To query for a filtered list of data-plane logs, specific to a single resource:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | where AccountName=="<account-name>"
+ ```
+
+* To get a count of data-plane logs, grouped by resource:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | summarize count() by AccountName
+ ```
+
+* To generate a chart for data-plane logs, grouped by the type of operation:
+
+ ```kusto
+ CDBDataPlaneRequests
+ | summarize count() by OperationName
+ | render piechart
+ ```
+++
+These examples are just a small sampling of the rich queries that can be performed in Azure Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
+ ## Alerts Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
For example, the following table lists few alert rules for your resources. You c
|:|:|:| |Rate limiting on request units (metric alert) |Dimension name: StatusCode, Operator: Equals, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. | |Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
-| Rotate keys(activity log alert)| Event level: Informational , Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |
+| Rotate keys(activity log alert)| Event level: Informational, Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |
-## <a id="monitor-cosmosdb-programmatically"></a> Monitor Azure Cosmos DB programmatically
+## Monitor Azure Cosmos DB programmatically
-The account level metrics available in the portal, such as account storage usage and total requests, are not available via the SQL APIs. However, you can retrieve usage data at the collection level by using the SQL APIs. To retrieve collection level data, do the following:
+The account level metrics available in the portal, such as account storage usage and total requests, aren't available via the SQL APIs. However, you can retrieve usage data at the collection level by using the SQL APIs. To retrieve collection level data, do the following:
* To use the REST API, [perform a GET on the collection](/rest/api/cosmos-db/get-a-collection). The quota and usage information for the collection is returned in the x-ms-resource-quota and x-ms-resource-usage headers in the response. A sketch of this request appears after this list.
-* To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains a number of usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, **DocumentUsage**, and more.
+* To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains many usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, **DocumentUsage**, and more.
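
The following is a minimal sketch of the REST request mentioned above; the account, database, and collection identifiers, the authorization token, and the API version are placeholders, and the header names are the standard Cosmos DB REST headers. Inspect the `x-ms-resource-quota` and `x-ms-resource-usage` response headers for the quota and usage figures.

```http
GET https://{account-name}.documents.azure.com/dbs/{database-id}/colls/{collection-id} HTTP/1.1
Authorization: {signed-master-key-or-resource-token}
x-ms-date: {rfc-1123-date}
x-ms-version: {api-version}
```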
-To access additional metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by calling:
+To access more metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by calling:
```http https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01 ```
-To retrieve individual metrics use the following format:
+To retrieve individual metrics, use the following format:
```http https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan={StartTime}/{EndTime}&interval={AggregationInterval}&metricnames={MetricName}&aggregation={AggregationType}&`$filter={Filter}&api-version=2018-01-01
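
For example, a filled-in request for a single metric might look like the following; the time span, interval, metric name, aggregation, and filter are purely illustrative values, not requirements.

```http
https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan=2022-05-01T00:00:00Z/2022-05-02T00:00:00Z&interval=PT1H&metricnames=TotalRequests&aggregation=count&$filter=DatabaseName eq 'mydatabase'&api-version=2018-01-01
```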
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Title: Getting started with Azure Cosmos DB Partial Document Update description: This article provides example for how to use Partial Document Update with .NET, Java, Node SDKs-+ Last updated 12/09/2021-+ # Azure Cosmos DB Partial Document Update: Getting Started [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-This article provides examples for how to use Partial Document Update with .NET, Java, Node SDKs, along with common errors that you may encounter. Code samples for the following scenarios have been provided:
+This article provides examples illustrating how to use Partial Document Update with the .NET, Java, and Node SDKs. This article also details common errors that you may encounter. Code samples for the following scenarios have been provided:
- Executing a single patch operation - Combining multiple patch operations - Conditional patch syntax based on filter predicate - Executing patch operation as part of a Transaction
-## .NET
+## [.NET](#tab/dotnet)
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](sql/sql-api-sdk-dotnet-standard.md) is available from version *3.23.0* onwards. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0) > [!NOTE] > A complete partial document update sample can be found in the [.NET v3 samples repository](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs) on GitHub.
-**Executing a single patch operation**
-
-```csharp
-ItemResponse<SalesOrder> response = await container.PatchItemAsync<SalesOrder>(
- id: order.Id,
- partitionKey: new PartitionKey(order.AccountNumber),
- patchOperations: new[] { PatchOperation.Replace("/TotalDue", 0) });
-
-SalesOrder updated = response.Resource;
-```
-
-**Combining multiple patch operations**
-
-```csharp
-List<PatchOperation> patchOperations = new List<PatchOperation>();
-patchOperations.Add(PatchOperation.Add("/nonExistentParent/Child", "bar"));
-patchOperations.Add(PatchOperation.Remove("/cost"));
-patchOperations.Add(PatchOperation.Increment("/taskNum", 6));
-patchOperations.Add(PatchOperation.Set("/existingPath/newproperty",value));
-
-container.PatchItemAsync<item>(
- id: 5,
- partitionKey: new PartitionKey("task6"),
- patchOperations: patchOperations );
-```
-
-**Conditional patch syntax based on filter predicate**
-
-```csharp
-PatchItemRequestOptions patchItemRequestOptions = new PatchItemRequestOptions
-{
- FilterPredicate = "from c where (c.TotalDue = 0 OR NOT IS_DEFINED(c.TotalDue))"
-};
-response = await container.PatchItemAsync<SalesOrder>(
- id: order.Id,
- partitionKey: new PartitionKey(order.AccountNumber),
- patchOperations: new[] { PatchOperation.Replace("/ShippedDate", DateTime.UtcNow) },
- patchItemRequestOptions);
-
-SalesOrder updated = response.Resource;
-```
-
-**Executing patch operation as a part of a Transaction**
+- Executing a single patch operation
+ ```csharp
+ ItemResponse<Product> response = await container.PatchItemAsync<Product>(
+ id: "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ partitionKey: new PartitionKey("road-bikes"),
+ patchOperations: new[] {
+ PatchOperation.Replace("/price", 355.45)
+ }
+ );
+
+ Product updated = response.Resource;
+ ```
-```csharp
-List<PatchOperation> patchOperationsUpdateTask = new List<PatchOperation>()
- {
- PatchOperation.Add("/children/1/pk", "patched"),
- PatchOperation.Remove("/description"),
- PatchOperation.Add("/taskNum", 8),
- PatchOperation.Replace("/taskNum", 12)
- };
+- Combining multiple patch operations
-TransactionalBatchPatchItemRequestOptions requestOptionsFalse = new TransactionalBatchPatchItemRequestOptions()
- {
- FilterPredicate = "from c where c.taskNum = 3"
- };
+ ```csharp
+ List<PatchOperation> operations = new ()
+ {
+ PatchOperation.Add($"/color", "silver"),
+ PatchOperation.Remove("/used"),
+ PatchOperation.Increment("/price", 50.00)
+ };
+
+ ItemResponse<Product> response = await container.PatchItemAsync<Product>(
+ id: "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ partitionKey: new PartitionKey("road-bikes"),
+ patchOperations: operations
+ );
+ ```
-TransactionalBatchInternal transactionalBatchInternalFalse = (TransactionalBatchInternal)containerInternal.CreateTransactionalBatch(new Cosmos.PartitionKey(testItem.pk));
-transactionalBatchInternalFalse.PatchItem(id: testItem1.id, patchOperationsUpdateTaskNum12, requestOptionsFalse);
-transactionalBatchInternalFalse.PatchItem(id: testItem2.id, patchOperationsUpdateTaskNum12, requestOptionsFalse);
-transactionalBatchInternalFalse.ExecuteAsync());
-```
+- Conditional patch syntax based on filter predicate
-## Java
+ ```csharp
+ PatchItemRequestOptions options = new()
+ {
+ FilterPredicate = "FROM products p WHERE p.used = false"
+ };
+
+ List<PatchOperation> operations = new ()
+ {
+ PatchOperation.Replace($"/price", 100.00),
+ };
+
+ ItemResponse<Product> response = await container.PatchItemAsync<Product>(
+ id: "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ partitionKey: new PartitionKey("road-bikes"),
+ patchOperations: operations,
+ requestOptions: options
+ );
+ ```
+
+- Executing patch operation as a part of a Transaction
+
+ ```csharp
+ TransactionalBatchPatchItemRequestOptions options = new()
+ {
+ FilterPredicate = "FROM products p WHERE p.used = false"
+ };
+
+ List<PatchOperation> operations = new ()
+ {
+ PatchOperation.Add($"/new", true),
+ PatchOperation.Remove($"/used")
+ };
+
+ TransactionalBatch batch = container.CreateTransactionalBatch(
+ partitionKey: new PartitionKey("road-bikes")
+ );
+ batch.PatchItem(
+ id: "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ patchOperations: operations,
+ requestOptions: options
+ );
+ batch.PatchItem(
+ id: "892f609b-8885-44df-a9ed-cce6c0bd2b9e",
+ patchOperations: operations,
+ requestOptions: options
+ );
+
+ TransactionalBatchResponse response = await batch.ExecuteAsync();
+ bool success = response.IsSuccessStatusCode;
+ ```
+
+## [Java](#tab/java)
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4 SDK](sql/sql-api-sdk-java-v4.md) is available from version *4.21.0* onwards. You can either add it to the list of dependencies in your `pom.xml` or download it directly from [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos). ```xml <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-cosmos</artifactId>
- <version>4.21.0</version>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>LATEST</version>
</dependency> ``` > [!NOTE] > The full sample can be found in the [Java SDK v4 samples repository](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/patch/sync) on GitHub
-**Executing a single patch operation**
-
-```java
-CosmosPatchOperations cosmosPatchOperations = CosmosPatchOperations.create();
-cosmosPatchOperations.add("/registered", true);
-
-CosmosPatchItemRequestOptions options = new CosmosPatchItemRequestOptions();
-
-CosmosItemResponse<Family> response = this.container.patchItem(id, new PartitionKey(partitionKey),
- cosmosPatchOperations, options, Family.class);
-```
-
-**Combining multiple patch operations**
-
-```java
-CosmosPatchOperations cosmosPatchOperations = CosmosPatchOperations.create();
-cosmosPatchOperations.add("/registered", true)
- .replace("/parents/0/familyName", "Doe");
-CosmosPatchItemRequestOptions options = new CosmosPatchItemRequestOptions();
-
-CosmosItemResponse<Family> response = this.container.patchItem(id, new PartitionKey(partitionKey),
- cosmosPatchOperations, options, Family.class);
-```
-
-**Conditional patch syntax based on filter predicate**
-
-```java
-CosmosPatchOperations cosmosPatchOperations = CosmosPatchOperations.create();
- .add("/vaccinated", true);
-CosmosPatchItemRequestOptions options = new CosmosPatchItemRequestOptions();
-options.setFilterPredicate("from f where f.registered = true");
-
-CosmosItemResponse<Family> response = this.container.patchItem(id, new PartitionKey(partitionKey),
- cosmosPatchOperations, options, Family.class);
-```
+- Executing a single patch operation
-**Executing patch operation as a part of a Transaction**
+ ```java
+ CosmosItemResponse<Product> response = container.patchItem(
+ "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ new PartitionKey("road-bikes"),
+ CosmosPatchOperations
+ .create()
+ .replace("/price", 355.45),
+ Product.class
+ );
-```java
-CosmosBatch batch = CosmosBatch.createCosmosBatch(new PartitionKey(family.getLastName()));
-batch.createItemOperation(family);
+ Product updated = response.getItem();
+ ```
-CosmosPatchOperations cosmosPatchOperations = CosmosPatchOperations.create().add("/registered", false);
-batch.patchItemOperation(family.getId(), cosmosPatchOperations);
+- Combining multiple patch operations
-CosmosBatchResponse response = container.executeCosmosBatch(batch);
-if (response.isSuccessStatusCode()) {
- // if transactional batch succeeds
-}
-```
+ ```java
+ CosmosPatchOperations operations = CosmosPatchOperations
+ .create()
+ .add("/color", "silver")
+ .remove("/used")
+ .increment("/price", 50);
+
+ CosmosItemResponse<Product> response = container.patchItem(
+ "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ new PartitionKey("road-bikes"),
+ operations,
+ Product.class
+ );
+ ```
-## Node.js
+- Conditional patch syntax based on filter predicate
-Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)
+ ```java
+ CosmosPatchItemRequestOptions options = new CosmosPatchItemRequestOptions();
+ options.setFilterPredicate("FROM products p WHERE p.used = false");
+
+ CosmosPatchOperations operations = CosmosPatchOperations
+ .create()
+ .replace("/price", 100.00);
+
+ CosmosItemResponse<Product> response = container.patchItem(
+ "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ new PartitionKey("road-bikes"),
+ operations,
+ options,
+ Product.class
+ );
+ ```
+
+- Executing patch operation as a part of a Transaction
+
+ ```java
+ CosmosBatchPatchItemRequestOptions options = new CosmosBatchPatchItemRequestOptions();
+ options.setFilterPredicate("FROM products p WHERE p.used = false");
+
+ CosmosPatchOperations operations = CosmosPatchOperations
+ .create()
+ .add("/new", true)
+ .remove("/used");
+
+ CosmosBatch batch = CosmosBatch.createCosmosBatch(
+ new PartitionKey("road-bikes")
+ );
+ batch.patchItemOperation(
+ "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ operations,
+ options
+ );
+ batch.patchItemOperation(
+ "892f609b-8885-44df-a9ed-cce6c0bd2b9e",
+ operations,
+ options
+ );
+
+ CosmosBatchResponse response = container.executeCosmosBatch(batch);
+ boolean success = response.isSuccessStatusCode();
+ ```
+
+## [Node.js](#tab/nodejs)
+
+Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)
> [!NOTE] > A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK resolves the partition key values from the items through the container's partition key definition.
-**Executing a single patch operation**
-
-```javascript
-const patchSource = itemDefList[1];
-
-const replaceOperation: PatchOperation[] =
- [{
- op: "replace",
- path: "/lastName",
- value: "Martin"
- }];
-
-const { resource: patchSource1 } = await container.item(patchSource.lastName).patch(replaceOperation);
-console.log(`Patched ${patchSource1.lastName} to new ${patchSource1.lastName}.`);
-```
+- Executing a single patch operation
-**Combining multiple patch operations**
+ ```javascript
+ const operations =
+ [
+ { op: 'replace', path: '/price', value: 355.45 }
+ ];
+
+ const { resource: updated } = await container
+ .item(
+      'e379aea5-63f5-4623-9a9b-4cd9b33b91d5', // id
+      'road-bikes'                            // partition key value
+ )
+ .patch(operations);
+ ```
-```javascript
-const multipleOperations: PatchOperation[] = [
- {
- op: "add",
- path: "/aka",
- value: "MeFamily"
- },
- {
- op: "add",
- path: "/years",
- value: 12
- },
- {
- op: "replace",
- path: "/lastName",
- value: "Jose"
- },
- {
- op: "remove",
- path: "/parents"
- },
- {
- op: "set",
- path: "/children/firstName",
- value: "Anderson"
- },
- {
- op: "incr",
- path: "/years",
- value: 5
- }
- ];
-
-const { resource: patchSource2 } = await container.item(patchSource.id).patch(multipleOperations);
- ```
-
-**Conditional patch syntax based on filter predicate**
+- Combining multiple patch operations
-```javascript
-const operations : PatchOperation[] = [
- {
- op: "add",
- path: "/newImproved",
- value: "it works"
- }
- ];
+ ```javascript
+ const operations =
+ [
+ { op: 'add', path: '/color', value: 'silver' },
+ { op: 'remove', path: '/used' }
+ ];
+
+ const { resource: updated } = await container
+ .item(
+      'e379aea5-63f5-4623-9a9b-4cd9b33b91d5', // id
+      'road-bikes'                            // partition key value
+ )
+ .patch(operations);
+ ```
-const condition = "from c where NOT IS_DEFINED(c.newImproved)";
+- Conditional patch syntax based on filter predicate
-const { resource: patchSource3 } = await container
- .item(patchSource.id)
- .patch({ condition, operations });
+ ```javascript
+ const filter = 'FROM products p WHERE p.used = false'
+
+ const operations =
+ [
+ { op: 'replace', path: '/price', value: 100.00 }
+ ];
+
+ const { resource: updated } = await container
+ .item(
+      'e379aea5-63f5-4623-9a9b-4cd9b33b91d5', // id
+      'road-bikes'                            // partition key value
+ )
+    .patch({
+      condition: filter, // filter predicate the document must match for the patch to apply
+      operations
+    });
+ ```
-console.log(`Patched ${patchSource3} to new ${patchSource3}.`);
-```
+ ## Support for Server-Side programming Partial Document Update operations can also be [executed on the server-side](stored-procedures-triggers-udfs.md) using Stored procedures, triggers, and user-defined functions. - ```javascript
- this.patchDocument = function (documentLink, patchSpec, options, callback) {
- if (arguments.length < 2) {
- throw new Error(ErrorCodes.BadRequest, sprintf(errorMessages.invalidFunctionCall, 'patchDocument', 2, arguments.length));
+this.patchDocument = function (documentLink, patchSpec, options, callback) {
+ if (arguments.length < 2) {
+ throw new Error(ErrorCodes.BadRequest, sprintf(errorMessages.invalidFunctionCall, 'patchDocument', 2, arguments.length));
+ }
+ if (patchSpec === null || !(typeof patchSpec === "object" || Array.isArray(patchSpec))) {
+ throw new Error(ErrorCodes.BadRequest, errorMessages.patchSpecMustBeObjectOrArray);
+ }
+
+ var documentIdTuple = validateDocumentLink(documentLink, false);
+ var collectionRid = documentIdTuple.collId;
+ var documentResourceIdentifier = documentIdTuple.docId;
+ var isNameRouted = documentIdTuple.isNameRouted;
+
+ patchSpec = JSON.stringify(patchSpec);
+ var optionsCallbackTuple = validateOptionsAndCallback(options, callback);
+
+ options = optionsCallbackTuple.options;
+ callback = optionsCallbackTuple.callback;
+
+ var etag = options.etag || '';
+ var indexAction = options.indexAction || '';
+
+ return collectionObjRaw.patch(
+ collectionRid,
+ documentResourceIdentifier,
+ isNameRouted,
+ patchSpec,
+ etag,
+ indexAction,
+ function (err, response) {
+ if (callback) {
+ if (err) {
+ callback(err);
+ } else {
+ callback(undefined, JSON.parse(response.body), response.options);
+ }
+ } else {
+ if (err) {
+ throw err;
}
- if (patchSpec === null || !(typeof patchSpec === "object" || Array.isArray(patchSpec))) {
- throw new Error(ErrorCodes.BadRequest, errorMessages.patchSpecMustBeObjectOrArray);
- }
-
- var documentIdTuple = validateDocumentLink(documentLink, false);
- var collectionRid = documentIdTuple.collId;
- var documentResourceIdentifier = documentIdTuple.docId;
- var isNameRouted = documentIdTuple.isNameRouted;
-
- patchSpec = JSON.stringify(patchSpec);
- var optionsCallbackTuple = validateOptionsAndCallback(options, callback);
-
- options = optionsCallbackTuple.options;
- callback = optionsCallbackTuple.callback;
-
- var etag = options.etag || '';
- var indexAction = options.indexAction || '';
-
- return collectionObjRaw.patch(
- collectionRid,
- documentResourceIdentifier,
- isNameRouted,
- patchSpec,
- etag,
- indexAction,
- function (err, response) {
- if (callback) {
- if (err) {
- callback(err);
- } else {
- callback(undefined, JSON.parse(response.body), response.options);
- }
- } else {
- if (err) {
- throw err;
- }
- }
- }
- );
- };
+ }
+ }
+ );
+};
``` > [!NOTE] > Definition of validateOptionsAndCallback can be found in the [.js DocDbWrapperScript](https://github.com/Azure/azure-cosmosdb-js-server/blob/1dbe69893d09a5da29328c14ec087ef168038009/utils/DocDbWrapperScript.js#L289) on GitHub. -
-**Sample parameter for patch operation**
-
-```javascript
-function () {
- var doc = {
- "id": "exampleDoc",
- "field1": {
- "field2": 10,
- "field3": 20
- }
- };
- var isAccepted = __.createDocument(__.getSelfLink(), doc, (err, doc) => {
- if (err) throw err;
- var patchSpec = [
- {"op": "add", "path": "/field1/field2", "value": 20},
- {"op": "remove", "path": "/field1/field3"}
- ];
- isAccepted = __.patchDocument(doc._self, patchSpec, (err, doc) => {
- if (err) throw err;
- else {
- getContext().getResponse().setBody(docPatched);
- }
- }
- }
- if(!isAccepted) throw new Error("patch was't accepted")
- }
- }
- if(!isAccepted) throw new Error("create wasn't accepted")
-}
-```
+- Sample parameter for patch operation
+
+ ```javascript
+ function () {
+ var doc = {
+ "id": "exampleDoc",
+ "field1": {
+ "field2": 10,
+ "field3": 20
+ }
+ };
+    // Create the document first, then patch it from within the creation callback.
+    var isAccepted = __.createDocument(__.getSelfLink(), doc, (err, createdDoc) => {
+      if (err) throw err;
+      var patchSpec = [
+        {"op": "add", "path": "/field1/field2", "value": 20},
+        {"op": "remove", "path": "/field1/field3"}
+      ];
+      var isPatchAccepted = __.patchDocument(createdDoc._self, patchSpec, (err, patchedDoc) => {
+        if (err) throw err;
+        // Return the patched document as the stored procedure response.
+        getContext().getResponse().setBody(patchedDoc);
+      });
+      if (!isPatchAccepted) throw new Error("patch wasn't accepted");
+    });
+    if (!isAccepted) throw new Error("create wasn't accepted");
+  }
+ ```
## Troubleshooting
-Here is a list of common errors that you might encounter while using this feature:
+Here's a list of common errors that you might encounter while using this feature:
| **Error Message** | **Description** | | | -- |
-| Invalid patch request: check syntax of patch specification| The Patch operation syntax is invalid. Please review [the specification](partial-document-update.md#rest-api-reference-for-partial-document-update)
-| Invalid patch request: Cannot patch system property `SYSTEM_PROPERTY`. | Patching system-generated properties likeΓÇ»`_id`,ΓÇ»`_ts`,ΓÇ»`_etag`,ΓÇ»`_rid` is not supported. To learn more: [Partial Document Update FAQs](partial-document-update-faq.yml#is-partial-document-update-supported-for-system-generated-properties-)
-| The number of patch operations cannot exceed 10 | There is a limit of 10 patch operations that can be added in a single patch specification. To learn more: [Partial Document Update FAQs](partial-document-update-faq.yml#is-there-a-limit-to-the-number-of-partial-document-update-operations-)
+| Invalid patch request: check syntax of patch specification| The Patch operation syntax is invalid. For more information, see [the partial document update specification](partial-document-update.md#rest-api-reference-for-partial-document-update)
+| Invalid patch request: Can't patch system property `SYSTEM_PROPERTY`. | System-generated properties like `_id`, `_ts`, `_etag`, `_rid` aren't modifiable using a Patch operation. For more information, see: [Partial Document Update FAQs](partial-document-update-faq.yml#is-partial-document-update-supported-for-system-generated-properties-)
+| The number of patch operations can't exceed 10 | There's a limit of 10 patch operations that can be added in a single patch specification. For more information, see: [Partial Document Update FAQs](partial-document-update-faq.yml#is-there-a-limit-to-the-number-of-partial-document-update-operations-)
| For Operation(`PATCH_OPERATION_INDEX`): Index(`ARRAY_INDEX`) to operate on is out of array bounds | The index of array element to be patched is out of bounds
-| For Operation(`PATCH_OPERATION_INDEX`)): Node(`PATH`) to be replaced has been removed earlier in the transaction.| The path you are trying to patch does not exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be removed is absent. Note: it may also have been removed earlier in the transaction.ΓÇ» | The path you are trying to patch does not exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced is absent. | The path you are trying to patch does not exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) is not a number.| Increment operation can only work on integer and float. To learn more: [Supported Operations](partial-document-update.md#supported-operations)
-| For Operation(`PATCH_OPERATION_INDEX`): Add Operation can only create a child object of an existing node(array or object) and cannot create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
-| For Operation(`PATCH_OPERATION_INDEX`): Given Operation can only create a child object of an existing node(array or object) and cannot create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced has been removed earlier in the transaction.| The path you're trying to patch doesn't exist.
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be removed is absent. Note: it may also have been removed earlier in the transaction. | The path you're trying to patch doesn't exist.
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced is absent. | The path you're trying to patch doesn't exist.
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) isn't a number.| Increment operation can only work on integer and float. For more information, see: [Supported Operations](partial-document-update.md#supported-operations)
+| For Operation(`PATCH_OPERATION_INDEX`): Add Operation can only create a child object of an existing node(array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
+| For Operation(`PATCH_OPERATION_INDEX`): Given Operation can only create a child object of an existing node(array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
## Next steps -- Learn more about conceptual overview of [Partial Document Update](partial-document-update.md)
+- Review the conceptual overview of [Partial Document Update](partial-document-update.md)
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
An example target JSON document:
```json {
- "/": 9,
- "~1": 10
+ "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ "name": "R-410 Road Bicycle",
+ "price": 455.95,
+ "used": false,
+ "categoryId": "road-bikes"
} ``` A JSON Patch document: ```json
-[{ "op": "test", "path": "/~01", "value": 10 }]
+[
+ { "op": "add", "path": "/color", "value": "silver" },
+ { "op": "remove", "path": "/used" },
+ { "op": "set", "path": "/price", "value": 355.45 }
+]
``` The resulting JSON document: ```json {
- "/": 9,
- "~1": 10
+ "id": "e379aea5-63f5-4623-9a9b-4cd9b33b91d5",
+ "name": "R-410 Road Bicycle",
+ "price": 355.45,
+ "categoryId": "road-bikes",
+ "color": "silver"
} ```
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Title: Create a Cassandra keyspace and table with autoscale for Azure Cosmos DB
-description: Create a Cassandra keyspace and table with autoscale for Azure Cosmos DB
+ Title: Azure Cosmos DB Cassandra API keyspace and table with autoscale
+description: Use Azure CLI to create an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.
Previously updated : 02/21/2022 Last updated : 05/02/2022+
-# Create an Azure Cosmos Cassandra API account, keyspace and table with autoscale using Azure CLI
+# Use Azure CLI to create a Cassandra API account, keyspace, and table with autoscale
[!INCLUDE [appliesto-cassandra-api](../../../includes/appliesto-cassandra-api.md)]
-The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table with autoscale for the Cassandra API.
+The script in this article creates an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.12.1 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find your Azure CLI version, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. Sign in to Azure by running [az login](/cli/azure/reference-index#az-login).
## Sample script
+This script uses the following commands:
-### Run the script
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableCassandra` parameter creates a Cassandra API-enabled Azure Cosmos DB account.
+- [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) creates an Azure Cosmos DB Cassandra keyspace.
+- [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) with the `--max-throughput` parameter set to minimum `4000` creates an Azure Cosmos DB Cassandra table with autoscale.
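Together, the commands follow roughly the sequence below. This is a minimal sketch with hypothetical resource names and an illustrative schema; the complete sample script is included next.

```azurecli
# Hypothetical names and values; substitute your own.
resourceGroup="msdocs-cosmos-rg"
account="msdocs-cassandra-account"   # must be globally unique
keySpace="keyspace1"
table="table1"

# Resource group and Cassandra API-enabled account
az group create --name $resourceGroup --location westus2
az cosmosdb create --name $account --resource-group $resourceGroup --capabilities EnableCassandra

# Keyspace
az cosmosdb cassandra keyspace create --account-name $account --resource-group $resourceGroup --name $keySpace

# Table with autoscale; 4000 is the minimum value for --max-throughput
az cosmosdb cassandra table create --account-name $account --resource-group $resourceGroup \
    --keyspace-name $keySpace --name $table --max-throughput 4000 \
    --schema '{"columns": [{"name": "columna", "type": "uuid"}], "partitionKeys": [{"name": "columna"}]}'
```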
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/cassandra/autoscale.sh" id="FullScript":::
-## Clean up resources
+## Delete resources
+If you don't need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains, including the Azure Cosmos DB account and keyspace.
```azurecli az group delete --name $resourceGroup ```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb cassandra keyspace create](/cli/azure/cosmosdb/cassandra/keyspace#az-cosmosdb-cassandra-keyspace-create) | Creates an Azure Cosmos Cassandra keyspace. |
-| [az cosmosdb cassandra table create](/cli/azure/cosmosdb/cassandra/table#az-cosmosdb-cassandra-table-create) | Creates an Azure Cosmos Cassandra table. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
Last updated 02/21/2022
The script in this article demonstrates three operations. - Add a region to an existing Azure Cosmos account.-- Change regional failover priority (applies to accounts using automatic failover)
+- Change regional failover priority (applies to accounts using service-managed failover)
- Trigger a manual failover from primary to secondary regions (applies to accounts with manual failover) This script uses a SQL (Core) API account, but these operations are identical across all database APIs in Cosmos DB.
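For illustration, these region operations typically map to the following Azure CLI commands. This is a minimal sketch with hypothetical account and region names; see the sample script in this article for the tested version.

```azurecli
# Hypothetical names; substitute your own values.
resourceGroup="msdocs-cosmos-rg"
account="msdocs-sql-account"

# Add a region by listing all desired locations, existing and new
az cosmosdb update --name $account --resource-group $resourceGroup \
    --locations regionName=westus failoverPriority=0 isZoneRedundant=False \
    --locations regionName=eastus failoverPriority=1 isZoneRedundant=False

# Change regional failover priority (service-managed failover accounts),
# or trigger a manual failover by moving priority 0 to another region
az cosmosdb failover-priority-change --name $account --resource-group $resourceGroup \
    --failover-policies eastus=0 westus=1
```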
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Title: Create a Gremlin database and graph with autoscale for Azure Cosmos DB
-description: Create a Gremlin database and graph with autoscale for Azure Cosmos DB
+ Title: Azure Cosmos DB Gremlin database and graph with autoscale
+description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.
Previously updated : 02/21/2022 Last updated : 05/02/2022+
-# Create an Azure Cosmos Gremlin API account, database and graph with autoscale using Azure CLI
+# Use Azure CLI to create a Gremlin API account, database, and graph with autoscale
-The script in this article demonstrates creating a Gremlin API database and graph with autoscale.
+The script in this article creates an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.30 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find your Azure CLI version, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. Sign in to Azure by running [az login](/cli/azure/reference-index#az-login).
## Sample script
+This script uses the following commands:
-### Run the script
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableGremlin` parameter creates a Gremlin-enabled Azure Cosmos DB account.
+- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB Gremlin database.
+- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) with the `--max-throughput` parameter set to minimum `4000` creates an Azure Cosmos DB Gremlin graph with autoscale.
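For example, the graph creation step might look like the following minimal sketch, with hypothetical account, database, graph, and partition key names; the complete sample script is included next.

```azurecli
# Hypothetical names; substitute your own values.
resourceGroup="msdocs-cosmos-rg"
account="msdocs-gremlin-account"

# Create a Gremlin graph with a partition key and autoscale throughput;
# 4000 is the minimum value for --max-throughput
az cosmosdb gremlin graph create --account-name $account --resource-group $resourceGroup \
    --database-name database1 --name graph1 \
    --partition-key-path "/myPartitionKey" --max-throughput 4000
```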
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/gremlin/autoscale.sh" id="FullScript":::
-## Clean up resources
+## Delete resources
+If you don't need the resources the script created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains, including the Azure Cosmos DB account and database.
```azurecli az group delete --name $resourceGroup ```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos Gremlin graph. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Title: Create a Gremlin serverless account, database and graph for Azure Cosmos DB
-description: Create a Gremlin serverless account, database and graph for Azure Cosmos DB
+ Title: Azure Cosmos DB Gremlin serverless account, database, and graph
+description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin serverless account, database, and graph.
Previously updated : 02/21/2022 Last updated : 05/02/2022+
-# Create an Azure Cosmos Gremlin API serverless account, database and graph using Azure CLI
+# Use Azure CLI to create a Gremlin serverless account, database, and graph
-The script in this article demonstrates creating a Gremlin serverless account, database and graph.
+The script in this article creates an Azure Cosmos DB Gremlin API serverless account, database, and graph.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.30 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find your Azure CLI version, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. Sign in to Azure by running [az login](/cli/azure/reference-index#az-login).
## Sample script
+This script uses the following commands:
-### Run the script
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with the `--capabilities EnableGremlin EnableServerless` parameter creates a Gremlin-enabled, serverless Azure Cosmos DB account.
+- [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) creates an Azure Cosmos DB Gremlin database.
+- [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) creates an Azure Cosmos DB Gremlin graph.
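For example, the serverless account creation might look like the following minimal sketch, with hypothetical names; the complete sample script is included next.

```azurecli
# Hypothetical names; substitute your own values.
resourceGroup="msdocs-cosmos-rg"
account="msdocs-gremlin-serverless"

# The EnableServerless capability makes the account serverless, so the database
# and graph are created afterward without any provisioned throughput settings.
az cosmosdb create --name $account --resource-group $resourceGroup \
    --capabilities EnableGremlin EnableServerless
```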
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/gremlin/serverless.sh" id="FullScript":::
-## Clean up resources
+## Delete resources
+If you don't need the resources the script created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains, including the Azure Cosmos DB account and database.
```azurecli az group delete --name $resourceGroup ```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb gremlin database create](/cli/azure/cosmosdb/gremlin/database#az-cosmosdb-gremlin-database-create) | Creates an Azure Cosmos Gremlin database. |
-| [az cosmosdb gremlin graph create](/cli/azure/cosmosdb/gremlin/graph#az-cosmosdb-gremlin-graph-create) | Creates an Azure Cosmos Gremlin graph. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
-# Update the regions on an Azure Cosmos DB account using PowerShell
+# Update consistency level for an Azure Cosmos DB account with PowerShell
[!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)] [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
Title: PowerShell script to update an Azure Cosmos account's regions
-description: Azure PowerShell script sample - Update an Azure Cosmos account's regions
+ Title: PowerShell script to update regions for an Azure Cosmos DB account
+description: Run this Azure PowerShell script to add regions or change region failover order for an Azure Cosmos DB account.
Previously updated : 05/01/2020 Last updated : 05/02/2022
-# Update an Azure Cosmos account's regions using PowerShell
+# Update regions for an Azure Cosmos DB account by using PowerShell
+ [!INCLUDE[appliesto-all-apis](../../../includes/appliesto-all-apis.md)]
+This PowerShell script updates the Azure regions that an Azure Cosmos DB account uses. You can use this script to add an Azure region or change region failover order.
+ [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+## Prerequisites
+
+- You need an existing Azure Cosmos DB account in an Azure resource group.
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+- The script requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to list your installed versions. If you need to install PowerShell, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+- Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
-> [!NOTE]
-> You cannot modify regions and change other Cosmos account properties in the same operation. These must be done as two separate operations.
-> [!NOTE]
-> This sample demonstrates using a SQL (Core) API account. To use this sample for other APIs, copy the related properties and apply to your API specific script.
+The [Update-AzCosmosDBAccountRegion](/powershell/module/az.cosmosdb/update-azcosmosdbaccountregion) command updates Azure regions for an Azure Cosmos DB account. The command requires a resource group name, an Azure Cosmos DB account name, and a list of Azure regions in desired failover order.
+
+In this script, the [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) command gets the Azure Cosmos DB account you specify. [New-AzCosmosDBLocationObject](/powershell/module/az.cosmosdb/new-azcosmosdblocationobject) creates an object of type `PSLocation`. `Update-AzCosmosDBAccountRegion` uses the `PSLocation` parameter to update the account regions.
+
+- If you add a region, don't change the first failover region in the same operation. Change failover priority order in a separate operation.
+- You can't modify regions in the same operation as changing other Azure Cosmos DB account properties. Do these operations separately.
+
+This sample uses a SQL (Core) API account. To use this sample for other APIs, copy the related properties and apply them to your API-specific script. A minimal sketch of the flow follows, and the complete sample script appears after it.
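The sketch below uses hypothetical resource group, account, and region names.

```powershell
# Hypothetical names; substitute your own values.
$resourceGroupName = "myResourceGroup"
$accountName = "mycosmosaccount"

# Get the existing account, for example to review its current region configuration
$account = Get-AzCosmosDBAccount -ResourceGroupName $resourceGroupName -Name $accountName
$account.Locations

# Build the desired region list in failover priority order
$locations = @(
    New-AzCosmosDBLocationObject -LocationName "West US 2" -FailoverPriority 0
    New-AzCosmosDBLocationObject -LocationName "East US 2" -FailoverPriority 1
)

# Apply the new region configuration to the account
Update-AzCosmosDBAccountRegion -ResourceGroupName $resourceGroupName -Name $accountName -LocationObject $locations
```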
[!code-powershell[main](../../../../../powershell_scripts/cosmosdb/common/ps-account-update-region.ps1 "Update Azure Cosmos account regions")]
-## Clean up deployment
+Although the script returns a result, the update operation might not be finished. Check the status of the operation in the Azure portal by using the Azure Cosmos DB account **Activity log**.
+
+## Delete Azure resource group
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+If you want to delete your Azure Cosmos DB account, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) PowerShell command to remove its resource group. This command removes the Azure resource group and all the resources in it, including Azure Cosmos DB accounts and their containers and databases.
```powershell Remove-AzResourceGroup -ResourceGroupName "myResourceGroup" ```
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [New-AzCosmosDBLocationObject](/powershell/module/az.cosmosdb/new-azcosmosdblocationobject) | Creates an object of type PSLocation to be used as a parameter for Update-AzCosmosDBAccountRegion. |
-| [Update-AzCosmosDBAccountRegion](/powershell/module/az.cosmosdb/update-azcosmosdbaccountregion) | Update Regions of a Cosmos DB Account. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
- ## Next steps
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
+- [Azure PowerShell documentation](/powershell)
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
Title: PowerShell script to list and get operations for Azure Cosmos DB Gremlin API
-description: Azure PowerShell script - Azure Cosmos DB list and get operations for Gremlin API
+ Title: PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs
+description: Run this Azure PowerShell script to list all or get specific Azure Cosmos DB Gremlin API databases and graphs.
Previously updated : 05/01/2020 Last updated : 05/02/2022
-# List and get databases and graphs for Azure Cosmos DB - Gremlin API
+# PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs
+ [!INCLUDE[appliesto-gremlin-api](../../../includes/appliesto-gremlin-api.md)]
+This PowerShell script lists or gets specific Azure Cosmos DB accounts, Gremlin API databases, and Gremlin API graphs.
+ [!INCLUDE [updated-for-az](../../../../../includes/updated-for-az.md)]
-This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed.
-If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+## Prerequisites
-Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
+- This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+- Run [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to Azure.
## Sample script
+In this script:
+
+- [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) lists all or gets a specific Azure Cosmos DB account in an Azure resource group.
+- [Get-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabase) lists all or gets a specific Gremlin API database in an Azure Cosmos DB account.
+- [Get-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraph) lists all or gets a specific Gremlin API graph in a Gremlin API database.
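A minimal sketch of those calls, using hypothetical names, follows; the complete sample script is shown next.

```powershell
# Hypothetical names; substitute your own values.
$resourceGroupName = "myResourceGroup"
$accountName = "mycosmosaccount"
$databaseName = "myGremlinDatabase"

# List all Gremlin databases in the account, then get one by name
Get-AzCosmosDBGremlinDatabase -ResourceGroupName $resourceGroupName -AccountName $accountName
Get-AzCosmosDBGremlinDatabase -ResourceGroupName $resourceGroupName -AccountName $accountName -Name $databaseName

# List all graphs in that database
Get-AzCosmosDBGremlinGraph -ResourceGroupName $resourceGroupName -AccountName $accountName -DatabaseName $databaseName
```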
+ [!code-powershell[main](../../../../../powershell_scripts/cosmosdb/gremlin/ps-gremlin-list-get.ps1 "List or get databases or graphs for Gremlin API")]
-## Clean up deployment
+## Delete Azure resource group
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
+If you want to delete your Azure Cosmos DB account, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) PowerShell command to remove its resource group. This command removes the Azure resource group and all the resources in it, including Azure Cosmos DB accounts and their containers and databases.
```powershell Remove-AzResourceGroup -ResourceGroupName "myResourceGroup" ```
-## Script explanation
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-|**Azure Cosmos DB**| |
-| [Get-AzCosmosDBAccount](/powershell/module/az.cosmosdb/get-azcosmosdbaccount) | Lists Cosmos DB Accounts, or gets a specified Cosmos DB Account. |
-| [Get-AzCosmosDBGremlinDatabase](/powershell/module/az.cosmosdb/get-azcosmosdbgremlindatabase) | Lists Gremlin API Databases in an Account, or gets a specified Gremlin API Database in an Account. |
-| [Get-AzCosmosDBGremlinGraph](/powershell/module/az.cosmosdb/get-azcosmosdbgremlingraph) | Lists Gremlin API Graphs in a Database, or gets a specified Gremlin API Table in a Database. |
-|**Azure Resource Groups**| |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
## Next steps For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-join.md
The results are:
] ```
+> [!IMPORTANT]
+> This example uses multiple JOIN expressions in a single query. There's a maximum number of JOINs that can be used in a single query. For more information, see [SQL query limits](/azure/cosmos-db/concepts-limits#sql-query-limits).
+ The following extension of the preceding example performs a double join. You could view the cross product as the following pseudo-code: ```
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
If you currently use Azure Table Storage, you gain the following benefits by mov
| | | | | Latency | Fast, but no upper bounds on latency. | Single-digit millisecond latency for reads and writes, backed with <10 ms latency for reads and writes at the 99th percentile, at any scale, anywhere in the world. | | Throughput | Variable throughput model. Tables have a scalability limit of 20,000 operations/s. | Highly scalable with [dedicated reserved throughput per table](../request-units.md) that's backed by SLAs. Accounts have no upper limit on throughput and support >10 million operations/s per table. |
-| Global distribution | Single region with one optional readable secondary read region for high availability. | [Turnkey global distribution](../distribute-data-globally.md) from one to any number of regions. Support for [automatic and manual failovers](../high-availability.md) at any time, anywhere in the world. Multiple write regions to let any region accept write operations. |
+| Global distribution | Single region with one optional readable secondary read region for high availability. | [Turnkey global distribution](../distribute-data-globally.md) from one to any number of regions. Support for [service-managed and manual failovers](../high-availability.md) at any time, anywhere in the world. Multiple write regions to let any region accept write operations. |
| Indexing | Only primary index on PartitionKey and RowKey. No secondary indexes. | Automatic and complete indexing on all properties by default, with no index management. | | Query | Query execution uses index for primary key, and scans otherwise. | Queries can take advantage of automatic indexing on properties for fast query times. | | Consistency | Strong within primary region. Eventual within secondary region. | [Five well-defined consistency levels](../consistency-levels.md) to trade off availability, latency, throughput, and consistency based on your application needs. |
cost-management-billing Azure Plan Subscription Transfer Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azure-plan-subscription-transfer-partners.md
Title: Transfer subscriptions under an Azure plan from one partner to another (Preview)
+ Title: Transfer subscriptions under an Azure plan from one partner to another
description: This article helps you understand what you need to know before and after you transfer billing ownership of your Azure subscription. Previously updated : 09/15/2021 Last updated : 05/03/2022
-# Transfer subscriptions under an Azure plan from one partner to another (Preview)
+# Transfer subscriptions under an Azure plan from one partner to another
-This article helps you understand what you need to know before and after you transfer billing ownership of your Azure subscription. To start an Azure subscription transfer that's under an Azure plan from one Microsoft partner to another, you need to contact your partner. The partner will send you instructions about how to begin. After the transfer process is complete, the billing ownership of your subscription is changed.
+This article helps customers of Microsoft partners understand what they need to know before and after transferring billing ownership of an Azure subscription. To start an Azure subscription transfer that's under an Azure plan from one Microsoft partner to another, you need to contact your partner. The partner will send you instructions about how to begin. After the transfer process is complete, the billing ownership of your subscription is changed.
+
+The steps that a partner takes are documented at [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan).
## User access
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 11/18/2021 Last updated : 05/03/2022
Azure has the following policies for cancellations, exchanges, and refunds.
- Only reservation order owners can process a refund. [Learn how to Add or change users who can manage a reservation](manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default). - For CSP program, the 50,000 USD limit is per customer.
+Let's look at an example with the previous points in mind. If you bought a $300,000 reservation, you can exchange it at any time for another reservation that costs the same or more, based on the remaining reservation balance rather than the original purchase price. For this example:
+- There's no penalty or annual limits for exchanges.
+- The refund that results from the exchange doesn't count against the refund limit.
+ ## Need help? Contact us. If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
databox-online Azure Stack Edge Gpu Data Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-data-resiliency.md
+
+ Title: Data resiliency for Azure Stack Edge Pro GPU/Pro R/Mini R
+description: Describes data resiliency for Azure Stack Edge.
++++++ Last updated : 04/18/2022+++
+# Data resiliency for Azure Stack Edge
++
+This article explains the data resiliency behavior of the Azure Stack Edge service, which runs in Azure and manages Azure Stack Edge devices.
+
+## About Azure Stack Edge
+
+Azure Stack Edge service is used to deploy compute workloads on purpose-built hardware devices, right at the edge where the data is created. The purpose-built Azure Stack Edge devices are available in various form factors and can be ordered, configured, and monitored via the Azure portal. Azure Stack Edge solution can also be deployed in Azure public and Azure Government cloud environments.
+
+### Regions for Azure Stack Edge
+
+Region information is used for Azure Stack Edge service in the following ways:
+
+- You specify an Azure region when creating an Azure Stack Edge Hardware Center order for the Azure Stack Edge device. Data residency norms apply to Edge Hardware Center orders. For more information, see [Data residency for Edge Hardware Center](../azure-edge-hardware-center/azure-edge-hardware-center-overview.md#data-residency).
+
+- You specify an Azure region when creating a management resource for the Azure Stack Edge device. This region is used to store the metadata associated with the resource. The metadata can be stored in a location different than the physical device.
+
+- Finally, there's a region associated with the storage accounts where the customer data is stored by the Azure Stack Edge service. You can configure SMB or NFS shares on the service to store customer data and then associate an Azure Storage account with each configured share.
+
+ Depending on the Azure Storage account configured for the share, your data is automatically and transparently replicated. For example, Azure Geo-Redundant Storage account (GRS) is configured by default when an Azure Storage account is created. With GRS, your data is automatically replicated three times within the primary region, and three times in the paired region. For more information, see [Azure Storage redundancy options](../storage/common/storage-redundancy.md).
++
+### Non-paired region vs. regional pairs
+
+Azure Stack Edge service is a non-regional, always-available service and has no dependency on a specific Azure region. Azure Stack Edge service is also resilient to zone-wide outages and region-wide outages.
++
+- **Regional pairs** - In general, the Azure Stack Edge service uses Azure regional pairs when storing and processing customer data in all geographies except Singapore. If there's a regional failure, the instance of the service in the Azure paired region continues to serve customers. This ensures that the service is fully resilient to all zone-wide and region-wide outages.
+
   For all the Azure regional pairs, by default, Microsoft is responsible for the disaster recovery (DR) setup, execution, and testing. In the event of a region outage, when the service instance fails over from the primary region to the secondary region, the Azure Stack Edge service may be inaccessible for a short duration.
+
+- **Non-paired region** - In Singapore, customers can choose to keep Azure Stack Edge customer data only in Singapore, without replication to the paired region, Hong Kong. With this option enabled, the service is resilient to zone-wide outages, but not to region-wide outages. Once the data residency is set to non-paired region, it persists during the lifetime of the resource and can't be changed.
+
   In the Singapore (Southeast Asia) region, if the customer has chosen the single data residency option that won't allow replication to the paired region, the customer will be responsible for the DR setup, execution, and testing.
++
+## Cross-region disaster recovery
+
+Cross-region disaster recovery for all regions in multiple-region geographies is done by using Azure regional pairs. A regional pair consists of two regions, primary and secondary, within the same geography. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair updates at a time. If an outage affects multiple regions, at least one region in each pair is prioritized for recovery. Applications that are deployed across paired regions are guaranteed to have one of the regions recovered with priority. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md#cross-region-replication).
+
+In the event of a region outage, when the service instance fails over from the primary region to the secondary region, the Azure Stack Edge service may be inaccessible for a short duration.
+
+For cross-region DR, Microsoft is responsible. The Recovery Time Objective (RTO) for DR is 8 hours, and Recovery Point Objective (RPO) is 15 minutes. For more information, see [Resiliency and continuity overview](/compliance/assurance/assurance-resiliency-and-continuity). <!--Azure Global - Bcdr Service Details, (Go to Business Impact Analysis)-->
+
+Cross-region disaster recovery for a non-paired region geography pertains only to Singapore. If there's a region-wide service outage in Singapore and you have chosen to keep your data only within Singapore and not replicated to the regional pair Hong Kong, you have two options:
+
+- Wait for the Singapore region to be restored.
+- Create a resource in another region, reset the device, and manage your device via the new resource. For detailed instructions, see [Reset and reactivate your Azure Stack Edge device](azure-stack-edge-reset-reactivate-device.md).
+
+In this case, the customer is responsible for DR and must set up a new device and then deploy all the workloads.
++
+## Non-paired region disaster recovery
+
+The disaster recovery isn't identical for non-paired region and multi-region geographies for this service.
+
+For Azure Stack Edge service, all regions use regional pairs except for Singapore where you can configure the service for non-paired region data residency.
+
+- In Singapore, you can configure the service for non-paired region data residency. The single-region geography disaster recovery support applies only to Singapore when the customer has chosen not to enable the regional pair Hong Kong. The customer is responsible for the Singapore customer-enabled disaster recovery (CEDR).
+- Except for Singapore, all other regions use regional pairs and Microsoft owns the regional pair disaster recovery.
+
+For the single-region disaster recovery for which the customer is responsible:
+
+- Both the control plane (service side) and the data plane (device data) need to be configured by the customer.
+- There's a potential for data loss if the disaster recovery isn't appropriately configured by the customer. Features and functions remain intact as a new resource is created and the device is reactivated against this resource.
++
+Here are the high-level steps to set up disaster recovery using Azure portal for Azure Stack Edge:
+
+- Create a resource in another region. For more information, see how to [Create a management resource for Azure Stack Edge device](azure-stack-edge-gpu-deploy-prep.md#create-a-management-resource-for-each-device).
+- [Reset the device](azure-stack-edge-reset-reactivate-device.md#reset-device). When the device is reset, the local data on the device is lost. It's necessary that you back up the device prior to the reset. Use a third-party backup solution provider to back up the local data on your device. For more information, see how to [Protect data in Edge cloud shares, Edge local shares, VMs and folders for disaster recovery](azure-stack-edge-gpu-prepare-device-failure.md#protect-device-data).
+- [Reactivate device against a new resource](azure-stack-edge-reset-reactivate-device.md#reactivate-device). When you move to the new resource, you'll also need to restore data on the new resource. For more information, see how to [Restore Edge cloud shares](azure-stack-edge-gpu-recover-device-failure.md#restore-edge-cloud-shares), [Restore Edge local shares](azure-stack-edge-gpu-recover-device-failure.md#restore-edge-local-shares) and [Restore VM files and folders](azure-stack-edge-gpu-recover-device-failure.md#restore-vm-files-and-folders).
+
+For detailed instructions, see [Reset and reactivate your Azure Stack Edge device](azure-stack-edge-reset-reactivate-device.md).
+
+## Planning disaster recovery
+
+Microsoft and its customers operate under the [Shared responsibility model](../availability-zones/business-continuity-management-program.md#shared-responsibility-model). This means that for customer-enabled DR (customer-responsible services), the customer must address disaster recovery for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated.
+
+When using Azure Stack Edge service, the customer can create a resource proactively, ahead of time, in another supported region. In the event of a disaster, this resource can then be deployed.
+
+## Testing disaster recovery
+
+Azure Stack Edge doesn't have DR available as a feature. This means that interested customers should perform their own DR failover testing for this service. If a customer is trying to restore a workload or configuration on a new device, they're responsible for the end-to-end configuration.
++
+## Next steps
+
+- Learn more about [Azure data residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
Last updated 04/24/2022
-# Dell Edge 5200
+# Dell Edge 5200 (Rugged)
This article describes the Dell Edge 5200 appliance for OT sensors.
This article describes the Dell Edge 5200 appliance for OT sensors.
|Component | Technical specifications| |:-|:-|
-|Chassis| Desktop / Wall mount server|
+|Chassis| Desktop / Wall mount server, rugged (MIL-STD-810G)|
|Dimensions| 211 mm (W) x 240 mm (D) x 86 mm (H)| |Weight| 4.7 kg| |Processor| Intel® Core™ i7-9700TE|
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
The following image shows a view of the HPE ProLiant Dl360 back panel:
|Chassis |1U rack server| |Dimensions| 42.9 x 43.46 x 70.7 cm / 1.69" x 17.11" x 27.83" in| |Weight| Max 16.27 kg / 35.86 lb |
-|Processor | Intel Xeon Silver 4215 R 3.2 GHz 11M cache 8c/16T 130 W|
+|Processor | 2x Intel Xeon Silver 4215 R 3.2 GHz 11M cache 8c/16T 130 W|
|Chipset | Intel C621| |Memory | 32 GB = Two 16-GB 2666MT/s DDR4 ECC UDIMM| |Storage| Six 1.2-TB SAS 12G Enterprise 10K SFF (2.5 in) in hot-plug hard drive - RAID 5|
The following image shows a view of the HPE ProLiant Dl360 back panel:
|Power |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit |Rack support | HPE 1U Gen10 SFF easy install rail kit |
-### Port expansion
+## HPE DL360 BOM
+
+|PN |Description |Quantity|
+|--|--|--|
+|P19766-B21 | HPE DL360 Gen10 8SFF NC CTO Server |1|
+|P19766-B21 | Europe - Multilingual Localization |1|
+|P24479-L21 | Intel Xeon-S 4215 R FIO Kit for DL360 G10 |1|
+|P24479-B21 | Intel Xeon-S 4215 R Kit for DL360 Gen10 |1|
+|P00922-B21 | HPE 16-GB 2Rx8 PC4-2933Y-R Smart Kit |2|
+|872479-B21 | HPE 1.2-TB SAS 10K SFF SC DS HDD |6|
+|811546-B21 | HPE 1-GbE 4-p BASE-T I350 Adapter |1|
+|P02377-B21 | HPE Smart Hybrid Capacitor w_ 145 mm Cable |1|
+|804331-B21 | HPE Smart Array P408i-a SR Gen10 Controller |1|
+|665240-B21 | HPE 1-GbE 4-p FLR-T I350 Adapter |1|
+|871244-B21 | HPE DL360 Gen10 High Performance Fan Kit |1|
+|865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |2|
+|512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |1|
+|874543-B21 | HPE 1U Gen10 SFF Easy Install Rail Kit |1|
+
+## Port expansion
Optional modules for port expansion include:
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
Last updated 04/24/2022
-# YS-techsystems YS-FIT2
+# YS-techsystems YS-FIT2 (Rugged)
This article describes the **YS-techsystems YS-FIT2** appliance deployment and installation for OT sensors.
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
# Manage Defender for IoT subscriptions
-## About subscriptions
- Your Defender for IoT deployment is managed through your Microsoft Defender for IoT account subscriptions. You can onboard, edit, and offboard your subscriptions to Defender for IoT in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). For each subscription, you will be asked to define a number of *committed devices*. Committed devices are the approximate number of devices that will be monitored in your enterprise.
-### Subscription billing
+> [!NOTE]
+> If you've come to this page because you are a [former CyberX customer](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments) and have questions about your account, reach out to your account manager for guidance.
++
+## Subscription billing
You are billed based on the number of committed devices associated with each subscription.
You may need to update your subscription with more committed devices, or more fe
Changes in device commitment will take effect one hour after confirming the change. Billing for these changes will be reflected at the beginning of the month following confirmation of the change. You will need to upload a new activation file to your on-premises management console. The activation file reflects the new number of committed devices. See [Upload an activation file](how-to-manage-the-on-premises-management-console.md#upload-an-activation-file).+ ## Offboard a subscription You may need to offboard a subscription, for example, if you need to work with a new payment entity. Subscription offboarding takes effect one hour after confirming the offboard. Your upcoming monthly bill will reflect this change.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Audit logs record key information at the time of occurrence. Audit logs are usef
The exported log is added to the **Archived Logs** list. Select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view the OTP. Send the OTP string to the support team in a separate message from the exported logs. The support team will be able to extract exported logs only by using the unique OTP that's used to encrypt the logs.
+## Clearing sensor data to factory default
+
+In cases where the sensor needs to be relocated or erased, the sensor can be reset to factory default data.
+
+> [!NOTE]
+> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.
+
+**To clear system data**:
+1. Sign in to the sensor as the **cyberx** user.
+1. Select **Support** > **Clear system data**, and confirm that you do want to reset the sensor to factory default data.
+
+ :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/warning-screenshot.png" alt-text="Screenshot of warning message.":::
+
+All allowlists, policies, and configuration settings are cleared, and the sensor is restarted.
++ ## Next steps - [View alerts](how-to-view-alerts.md)
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
For more information, see [Purchase sensors or download software for sensors](ho
You can order any of the following preconfigured appliances for monitoring your OT networks:
-|Capacity / Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
+|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
|||||
-|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**:12,000 | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) | |SMB | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (NHP 2LFF) | **Max bandwidth**: 60Mbp/s<br>**Max devices**: 1,000 | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60Mbp/s<br>**Max devices**: 1,000 | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
> [!NOTE]
You can order any of the following preconfigured appliances for monitoring your
You can purchase any of the following appliances for your OT on-premises management consoles:
-|Capacity / Hardware profile |Appliance |Max sensors |Physical specifications |
+|Hardware profile |Appliance |Max sensors |Physical specifications |
||||| |Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
Title: OT monitoring with virtual appliances - Microsoft Defender for IoT description: Learn about system requirements for virtual appliances used for the Microsoft Defender for IoT OT sensors and on-premises management console. Previously updated : 04/04/2022 Last updated : 05/03/2022
The following tables list system requirements for OT network sensors on virtual
For all deployments, bandwidth results for virtual machines may vary, depending on the distribution of protocols and the actual hardware resources that are available, including the CPU model, memory bandwidth, and IOPS.
-# [Corporate](#tab/corporate)
+|Hardware profile |Performance / Monitoring |Virtual machine specifications |
+||||
+|**Corporate** | **Max bandwidth**: 2.5 Gb/sec <br>**Max monitored assets**: 12,000 | **vCPU**: 32 <br>**Memory**: 32 GB <br>**Storage**: 5.6 TB (600 IOPS) |
+|**Enterprise** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 1.8 TB (300 IOPS) |
+|**SMB** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) |
+|**Office** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
+|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
-|Specification |Requirements |
-|||
-|**Maximum bandwidth** | 2.5 Gb/sec |
-|**Maximum monitored assets** | 12,000 |
-|**vCPU** | 32 |
-|**Memory** | 32 GB |
-|**Storage** | 5.6 TB (600 IOPS) |
-
-# [Enterprise](#tab/enterprise)
-
-|Specification |Requirements |
-|||
-|**Maximum bandwidth** | 800 Mb/sec |
-|**Maximum monitored assets** | 10,000 |
-|**vCPU** | 8 |
-|**Memory** | 32 GB |
-|**Storage** | 1.8 TB (300 IOPS) |
-
-# [SMB](#tab/smb)
-
-|Specification |Requirements |
-|||
-|**Maximum bandwidth** | 160 Mb/sec |
-|**Maximum monitored assets** | 1000 |
-|**vCPU** | 4 |
-|**Memory** | 8 GB |
-|**Storage** | 500 GB (150 IOPS) |
-
-# [Office](#tab/office)
-
-|Specification |Requirements |
-|||
-|**Maximum bandwidth** | 100 Mb/sec |
-|**Maximum monitored assets** | 800 |
-|**vCPU** | 4 |
-|**Memory** | 8 GB |
-|**Storage** | 100 GB (150 IOPS) |
-
-# [Rugged](#tab/rugged)
-
-|Specification |Requirements |
-|||
-|**Maximum bandwidth** | 10 Mb/sec |
-|**Maximum monitored assets** | 100 |
-|**vCPU** | 4 |
-|**Memory** | 8 GB |
-|**Storage** | 60 GB (150 IOPS) |
--- ## On-premises management console VM requirements An on-premises management console on a virtual appliance is supported for enterprise deployments with the following requirements:
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
The following FastPath features are in Public preview:
-**VNet Peering** - FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway.
+**VNet Peering** - FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This preview is available for both IPv4 and IPv6 connectivity.
**Private Link Connectivity for 10Gbps ExpressRoute Direct Connectivity** - Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path. This preview is available in the following Azure Regions.
This preview supports connectivity to the following Azure
- Azure Storage - Third Party Private Link Services
-This preview is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits are not eligible for this preview.
+This preview is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits are not eligible for this preview. Additionally, this preview is available for both IPv4 and IPv6 connectivity.
> [!NOTE] > Private Link pricing will not apply to traffic sent over ExpressRoute FastPath during Public preview. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/).
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Cache-Control response headers that indicate that the response won't be cached s
The following request headers won't be forwarded to a backend when using caching. - Content-Length - Transfer-Encoding
+- Accept-Language
## Cache behavior and duration
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
Metrics are a feature for certain Azure resources that allow you to view perform
| BackendHealthPercentage | Backend Health Percentage | Percent | Backend</br>BackendPool | The percentage of successful health probes from Front Door to backends. | | WebApplicationFirewallRequestCount | Web Application Firewall Request Count | Count | PolicyName</br>RuleName</br>Action | The number of client requests processed by the application layer security of Front Door. |
+> [!NOTE]
+> Activity log doesn't include any GET operations or operations that you perform by using either the Azure portal or the original Management API.
+>
+ ## <a name="activity-log"></a>Activity logs Activity logs provide information about the operations done on an Azure Front Door (classic) profile. They also determine the what, who, and when for any write operations (put, post, or delete) done against an Azure Front Door (classic) profile. >[!NOTE]
->Activity logs don't include read (get) operations. They also don't include operations that you perform by using either the Azure portal or the original Management API.
+>If a request to the origin times out, the value for HttpStatusCode is set to **0**.
Access activity logs in your Front Door or all the logs of your Azure resources in Azure Monitor. To view activity logs:
Front Door currently provides diagnostic logs. Diagnostic logs provide individua
| ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. | | ClientPort | The IP port of the client that made the request. | | HttpMethod | HTTP method used by the request. |
-| HttpStatusCode | The HTTP status code returned from the proxy. |
+| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
| HttpStatusDetails | Resulting status on the request. Meaning of this string value can be found at a Status reference table. | | HttpVersion | Type of the request or connection. | | POP | Short name of the edge where the request landed. |
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Azure Front Door currently provides individual API requests with each entry havi
| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 | | SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. | | Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net |
-| HttpStatusCode | The HTTP status code returned from AFD. If a request to the the origin times out, value for HttpStatusCode is set to "0".|
+| HttpStatusCode | The HTTP status code returned from Azure Front Door. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
| Pop | The edge pop, which responded to the user request. | | Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are HIT: The HTTP request was served from AFD edge POP cache. <br> **MISS**: The HTTP request was served from origin. <br/> **PARTIAL_HIT**: Some of the bytes from a request got served from AFD edge POP cache while some of the bytes got served from origin for object chunking scenarios. <br> **CACHE_NOCONFIG**: Forwarding requests without caching settings, including bypass scenario. <br/> **PRIVATE_NOSTORE**: No cache configured in caching settings by customers. <br> **REMOTE_HIT**: The request was served by parent node cache. <br/> **N/A**:** Request that was denied by Signed URL and Rules Set. | | MatchedRulesSetName | The names of the rules that were processed. |
frontdoor How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-monitor-metrics.md
You can configure alerts for each metric such as a threshold for 4XXErrorRate or
| OriginHealth% | The percentage of successful health probes from AFD to origin.| Origin, Origin Group | | WAF request count | Matched WAF request. | Action, rule name, Policy Name |
+> [!NOTE]
+> If a request to the origin times out, the value for the HttpStatusCode dimension will be **0**.
+>
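A hedged sketch of querying the affected metric from the Azure CLI and splitting it on the HttpStatusCode dimension is shown below; the resource ID is a placeholder, and dimension support may vary by Front Door tier.

```azurecli
# Query the RequestCount metric and split the results by the HttpStatusCode dimension.
# <front-door-profile-resource-id> is a placeholder for the full resource ID of your profile.
az monitor metrics list --resource <front-door-profile-resource-id> --metric RequestCount --filter "HttpStatusCode eq '*'" --output table
```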
++ ## Access Metrics in Azure portal 1. From the Azure portal menu, select **All Resources** >> **\<your-AFD-profile>**.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance description: Learn about the management groups, how their permissions work, and how to use them. Previously updated : 08/17/2021 Last updated : 05/02/2022
Agreement (EA) subscriptions that are descendants of that management group and w
under those subscriptions. This security policy cannot be altered by the resource or subscription owner allowing for improved governance.
+> [!NOTE]
+> Management groups aren't currently supported for Microsoft Customer Agreement subscriptions.
+ Another scenario where you would use management groups is to provide user access to multiple subscriptions. By moving multiple subscriptions under that management group, you can create one [Azure role assignment](../../role-based-access-control/overview.md) on the management group, which
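As a hedged illustration of this scenario, the following Azure CLI sketch creates a management group and grants one user access at that scope; the group name and user principal name are placeholders.

```azurecli
# Create a management group, then grant a single Reader assignment that flows down to every subscription under it.
# <management-group-name> and <user-principal-name> are placeholders.
az account management-group create --name <management-group-name>
az role assignment create --assignee <user-principal-name> --role "Reader" --scope "/providers/Microsoft.Management/managementGroups/<management-group-name>"
```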
hdinsight Apache Kafka Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-bicep.md
+
+ Title: 'Quickstart: Apache Kafka using Bicep - HDInsight'
+description: In this quickstart, you learn how to create an Apache Kafka cluster on Azure HDInsight using Bicep. You also learn about Kafka topics, subscribers, and consumers.
+++++ Last updated : 05/02/2022
+#Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
++
+# Quickstart: Create Apache Kafka cluster in Azure HDInsight using Bicep
+
+In this quickstart, you use Bicep to create an [Apache Kafka](./apache-kafka-introduction.md) cluster in Azure HDInsight. Kafka is an open-source, distributed streaming platform. It's often used as a message broker, as it provides functionality similar to a publish-subscribe message queue.
++
+The Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-kafka/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
+* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters clusterName=<cluster-name> clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -clusterName "<cluster-name>" -clusterLoginUserName "<cluster-username>" -sshUserName "<ssh-username>"
+ ```
+
+
+
+ You need to provide values for the parameters:
+
+ * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create. The cluster name needs to start with a letter and can contain only lowercase letters, numbers, and dashes.
+ * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards. Uppercase letters aren't allowed in the cluster username.
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster.
+
+ You'll be prompted to enter the following:
+
+ * **clusterLoginPassword**, which must be at least 10 characters long and contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username.
+ * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Get the Apache Zookeeper and Broker host information
+
+When working with Kafka, you must know the *Apache Zookeeper* and *Broker* hosts. These hosts are used with the Kafka API and many of the utilities that ship with Kafka.
+
+In this section, you get the host information from the Ambari REST API on the cluster.
+
+1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
+
+ ```cmd
+ ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
+ ```
+
+1. From the SSH connection, use the following command to install the `jq` utility. This utility is used to parse JSON documents, and is useful in retrieving the host information:
+
+ ```bash
+ sudo apt -y install jq
+ ```
+
+1. To set an environment variable to the cluster name, use the following command:
+
+ ```bash
+ read -p "Enter the Kafka on HDInsight cluster name: " CLUSTERNAME
+ ```
+
+ When prompted, enter the name of the Kafka cluster.
+
+1. To set an environment variable with Zookeeper host information, use the command below. The command retrieves all Zookeeper hosts, then returns only the first two entries. This is because you want some redundancy in case one host is unreachable.
+
+ ```bash
+ export KAFKAZKHOSTS=`curl -sS -u admin -G https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2`
+ ```
+
+ When prompted, enter the password for the cluster login account (not the SSH account).
+
+1. To verify that the environment variable is set correctly, use the following command:
+
+ ```bash
+ echo '$KAFKAZKHOSTS='$KAFKAZKHOSTS
+ ```
+
+ This command returns information similar to the following text:
+
+ `<zookeepername1>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,<zookeepername2>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
+
+1. To set an environment variable with Kafka broker host information, use the following command:
+
+ ```bash
+ export KAFKABROKERS=`curl -sS -u admin -G https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2`
+ ```
+
+ When prompted, enter the password for the cluster login account (not the SSH account).
+
+1. To verify that the environment variable is set correctly, use the following command:
+
+ ```bash
+ echo '$KAFKABROKERS='$KAFKABROKERS
+ ```
+
+ This command returns information similar to the following text:
+
+ `<brokername1>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,<brokername2>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
+
+## Manage Apache Kafka topics
+
+Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` utility to manage topics.
+
+* **To create a topic**, use the following command in the SSH connection:
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 3 --partitions 8 --topic test --zookeeper $KAFKAZKHOSTS
+ ```
+
+ This command connects to Zookeeper using the host information stored in `$KAFKAZKHOSTS`. It then creates a Kafka topic named **test**.
+
+ * Data stored in this topic is partitioned across eight partitions.
+
+ * Each partition is replicated across three worker nodes in the cluster.
+
+ If you created the cluster in an Azure region that provides three fault domains, use a replication factor of 3. Otherwise, use a replication factor of 4.
+
+ In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.
+
+ For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
+
+ Kafka isn't aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
+
+ To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be run from an SSH connection to the head node of your Kafka cluster.
+
+ For the highest availability of your Kafka data, you should rebalance the partition replicas for your topic when:
+
+ * You create a new topic or partition
+
+ * You scale up a cluster
+
+* **To list topics**, use the following command:
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $KAFKAZKHOSTS
+ ```
+
+ This command lists the topics available on the Kafka cluster.
+
+* **To delete a topic**, use the following command:
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --delete --topic topicname --zookeeper $KAFKAZKHOSTS
+ ```
+
+ This command deletes the topic named `topicname`.
+
+ > [!WARNING]
+ > If you delete the `test` topic created earlier, then you must recreate it. It is used by steps later in this document.
+
+For more information on the commands available with the `kafka-topics.sh` utility, use the following command:
+
+```bash
+/usr/hdp/current/kafka-broker/bin/kafka-topics.sh
+```
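For example, you can inspect the partition and replica placement of the `test` topic you created earlier with the utility's `--describe` option:

```bash
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --topic test --zookeeper $KAFKAZKHOSTS
```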
+
+## Produce and consume records
+
+Kafka stores *records* in topics. Records are produced by *producers*, and consumed by *consumers*. Producers and consumers communicate with the *Kafka broker* service. Each worker node in your HDInsight cluster is a Kafka broker host.
+
+To store records into the test topic you created earlier, and then read them using a consumer, use the following steps:
+
+1. To write records to the topic, use the `kafka-console-producer.sh` utility from the SSH connection:
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $KAFKABROKERS --topic test
+ ```
+
+ After this command, you arrive at an empty line.
+
+1. Type a text message on the empty line and hit enter. Enter a few messages this way, and then use **Ctrl + C** to return to the normal prompt. Each line is sent as a separate record to the Kafka topic.
+
+1. To read records from the topic, use the `kafka-console-consumer.sh` utility from the SSH connection:
+
+ ```bash
+ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic test --from-beginning
+ ```
+
+ This command retrieves the records from the topic and displays them. Using `--from-beginning` tells the consumer to start from the beginning of the stream, so all records are retrieved.
+
+ If you're using an older version of Kafka, replace `--bootstrap-server $KAFKABROKERS` with `--zookeeper $KAFKAZKHOSTS`.
+
+1. Use __Ctrl + C__ to stop the consumer.
+
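As a hedged, non-interactive variant of the steps above, you can also pipe a record into the console producer and read a fixed number of records back with the console consumer; the message text is only an example.

```bash
# Send one record to the test topic without opening an interactive prompt.
echo "test message 1" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $KAFKABROKERS --topic test

# Read a single record from the beginning of the topic, then exit.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic test --from-beginning --max-messages 1
```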
+You can also programmatically create producers and consumers. For an example of using this API, see the [Apache Kafka Producer and Consumer API with HDInsight](apache-kafka-producer-consumer-api.md) document.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Apache Kafka cluster in HDInsight using Bicep. In the next article, you learn how to create an application that uses the Apache Kafka Streams API and run it with Kafka on HDInsight.
+
+> [!div class="nextstepaction"]
+> [Use Apache Kafka streams API in Azure HDInsight](./apache-kafka-streams-api.md)
hdinsight Apache Spark Jupyter Spark Use Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-use-bicep.md
+
+ Title: 'Quickstart: Create Apache Spark cluster using Bicep - Azure HDInsight'
+description: This quickstart shows how to use Bicep to create an Apache Spark cluster in Azure HDInsight, and run a Spark SQL query.
++ Last updated : 05/02/2022+++
+#Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
++
+# Quickstart: Create Apache Spark cluster in Azure HDInsight using Bicep
+
+In this quickstart, you use Bicep to create an [Apache Spark](./apache-spark-overview.md) cluster in Azure HDInsight. You then create a Jupyter Notebook file, and use it to run Spark SQL queries against Apache Hive tables. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for HDInsight enables fast data analytics and cluster computing using in-memory processing. Jupyter Notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
+
+If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/hdinsight-spark-linux/).
++
+Two Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.
+* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters clusterName=<cluster-name> clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -clusterName "<cluster-name>" -clusterLoginUserName "<cluster-username>" -sshUserName "<ssh-username>"
+ ```
+
+
+
+ You need to provide values for the parameters:
+
+ * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
+ * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards. The username has a minimum length of two characters and a maximum length of 20 characters. It must consist of digits, upper or lowercase letters, and/or the following special characters: (!#$%&\'()-^_`{}~).
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username has a minimum length of two characters. It must consist of digits, upper or lowercase letters, and/or the following special characters: (%&\'^_`{}~). It cannot be the same as the cluster username.
+
+ You'll be prompted to enter the following:
+
+ * **clusterLoginPassword**, which must be at least 10 characters long and must contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username.
+ * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+If you run into an issue with creating HDInsight clusters, it could be that you don't have the right permissions to do so. For more information, see [Access control requirements](../hdinsight-hadoop-customize-cluster-linux.md#access-control).
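As a hedged way to check your own role assignments before retrying, you can list them with the Azure CLI; the user principal name is a placeholder.

```azurecli
# List the role assignments for your account; <user-principal-name> is a placeholder, for example someone@contoso.com.
az role assignment list --assignee <user-principal-name> --all --output table
```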
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Create a Jupyter Notebook file
+
+[Jupyter Notebook](https://jupyter.org/) is an interactive notebook environment that supports various programming languages. You can use a Jupyter Notebook file to interact with your data, combine code with markdown text, and perform simple visualizations.
+
+1. Open the [Azure portal](https://portal.azure.com).
+
+2. Select **HDInsight clusters**, and then select the cluster you created.
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/azure-portal-open-hdinsight-cluster.png" alt-text="Open HDInsight cluster in the Azure portal." border="true":::
+
+3. From the portal, in **Cluster dashboards** section, select **Jupyter Notebook**. If prompted, enter the cluster login credentials for the cluster.
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/hdinsight-spark-open-jupyter-interactive-spark-sql-query.png " alt-text="Open Jupyter Notebook to run interactive Spark SQL query." border="true":::
+
+4. Select **New** > **PySpark** to create a notebook.
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/hdinsight-spark-create-jupyter-interactive-spark-sql-query.png " alt-text="Create a Jupyter Notebook file to run interactive Spark SQL query." border="true":::
+
+ A new notebook is created and opened with the name Untitled (Untitled.ipynb).
+
+## Run Apache Spark SQL statements
+
+SQL (Structured Query Language) is the most common and widely used language for querying and transforming data. Spark SQL functions as an extension to Apache Spark for processing structured data, using the familiar SQL syntax.
+
+1. Verify the kernel is ready. The kernel is ready when you see a hollow circle next to the kernel name in the notebook. A solid circle denotes that the kernel is busy.
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/jupyter-spark-kernel-status.png " alt-text="Screenshot showing that the kernel is ready." border="true":::
+
+ When you start the notebook for the first time, the kernel performs some tasks in the background. Wait for the kernel to be ready.
+
+1. Paste the following code in an empty cell, and then press **SHIFT + ENTER** to run the code. The command lists the Hive tables on the cluster:
+
+ ```sql
+ %%sql
+ SHOW TABLES
+ ```
+
+ When you use a Jupyter Notebook file with your HDInsight cluster, you get a preset `spark` session that you can use to run Hive queries using Spark SQL. `%%sql` tells Jupyter Notebook to use the preset `spark` session to run the Hive query. The query retrieves the top 10 rows from a Hive table (**hivesampletable**) that comes with all HDInsight clusters by default. The first time you submit the query, Jupyter will create a Spark application for the notebook. It takes about 30 seconds to complete. Once the Spark application is ready, the query is executed in about a second and produces the results. The output looks like:
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/hdinsight-spark-get-started-hive-query.png " alt-text="Screenshot that shows an Apache Hive query in HDInsight." border="true":::
+
+ Every time you run a query in Jupyter, your web browser window title shows a **(Busy)** status along with the notebook title. You also see a solid circle next to the **PySpark** text in the top-right corner.
+
+1. Run another query to see the data in `hivesampletable`.
+
+ ```sql
+ %%sql
+ SELECT * FROM hivesampletable LIMIT 10
+ ```
+
+ The screen should refresh to show the query output.
+
+ :::image type="content" source="./media/apache-spark-jupyter-spark-sql/hdinsight-spark-get-started-hive-query-output.png " alt-text="Screenshot that shows Hive query output in HDInsight." border="true":::
+
+1. From the **File** menu on the notebook, select **Close and Halt**. Shutting down the notebook releases the cluster resources, including the Spark application.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create an Apache Spark cluster in HDInsight and run a basic Spark SQL query. Advance to the next tutorial to learn how to use an HDInsight cluster to run interactive queries on sample data.
+
+> [!div class="nextstepaction"]
+> [Run interactive queries on Apache Spark](./apache-spark-load-data-run-query.md)
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Events are generated from the following FHIR service types:
- **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
-For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
### What is the payload of an Events message?
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
In this article, you'll learn about the Events message structure, required and n
> > - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully. >
-> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+> For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
## Events message structure
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Events are a notification and subscription feature in the Azure Health Data Serv
> > - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully. >
-> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+> For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
## Scalable
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
IoT Central lets you view the raw data that a device sends to an application. Th
If the telemetry is defined in a component, add a custom message property called `$.sub` with the name of the component as defined in the device model. To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+> [!IMPORTANT]
+> To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. If you use an earlier version, telemetry from your components in IoT Edge modules displays as *_unmodeleddata*.
+ ### Primitive types This section shows examples of primitive telemetry types that a device streams to an IoT Central application.
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
To detect which categories your issue is in, run the most appropriate Azure CLI
You may be prompted to install the `uamqp` library the first time you run a `validate` command.
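As a hedged example of the kind of `validate` command referred to here (it assumes the azure-iot CLI extension is installed, and the IDs are placeholders for your own application and device):

```azurecli
# Validate the telemetry a device sends against its device template in IoT Central.
az iot central diagnostics validate-messages --app-id <app-id> --device-id <device-id>
```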
-The two common types of issue that cause device data to not appear in IoT Central are:
+The three common types of issue that cause device data to not appear in IoT Central are:
- Device template to device data mismatch. - Data is invalid JSON.
+- Old versions of IoT Edge cause telemetry from components to display incorrectly as unmodeled data.
### Device template to device data mismatch
If there are no errors reported, but a value isn't appearing, then it's probably
You can't use the validate commands or the **Raw data** view in the UI to detect if the device is sending malformed JSON.
+### IoT Edge version
+
+To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. If you use an earlier version, telemetry from components in IoT Edge modules displays as *_unmodeleddata*.
+ ## Next steps If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support).
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
To verify that the enrollment group has been created:
:::zone-end - ## Clean up resources
-If you plan to explore the [Azure IoT Hub Device Provisioning Service tutorials](./tutorial-set-up-cloud.md), don't clean up the resources created in this quickstart. Otherwise, use the following steps to delete all resources created by this quickstart.
+If you plan to explore the DPS tutorials, don't clean up the resources created in this quickstart. Otherwise, use the following steps to delete all resources created by this quickstart.
1. Close the sample output window on your computer.
If you plan to explore the [Azure IoT Hub Device Provisioning Service tutorials]
## Next steps
-In this quickstart, you've programmatically created an individual enrollment entry for a TPM device. Optionally, you created a TPM simulated device on your computer and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service. To learn about device provisioning in depth, continue to the tutorial for the Device Provisioning Service setup in the Azure portal.
+In this quickstart, you've programmatically created an individual enrollment entry for a TPM device. Optionally, you created a TPM simulated device on your computer and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service. To learn about provisioning multiple devices, continue to the tutorials for the Device Provisioning Service.
> [!div class="nextstepaction"]
-> [Azure IoT Hub Device Provisioning Service tutorials](./tutorial-set-up-cloud.md)
+> [How to provision devices using symmetric key enrollment groups](how-to-legacy-device-symm-key.md)
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
To enroll a single X.509 device, modify the *individual enrollment* sample code
## Clean up resources
-If you plan to explore the [Azure IoT Hub Device Provisioning Service tutorials](./tutorial-set-up-cloud.md), don't clean up the resources created in this quickstart. Otherwise, use the following steps to delete all resources created by this quickstart.
+If you plan to explore the Azure IoT Hub Device Provisioning Service tutorials, don't clean up the resources created in this quickstart. Otherwise, use the following steps to delete all resources created by this quickstart.
1. Close the sample output window on your computer.
If you plan to explore the [Azure IoT Hub Device Provisioning Service tutorials]
## Next steps
-In this quickstart, you created an enrollment group for an X.509 intermediate or root CA certificate using the Azure IoT Hub Device Provisioning Service. To learn about device provisioning in depth, continue to the tutorial for the Device Provisioning Service setup in the Azure portal.
+In this quickstart, you created an enrollment group for an X.509 intermediate or root CA certificate using the Azure IoT Hub Device Provisioning Service. To learn about device provisioning in depth, continue to the tutorials for the Device Provisioning Service.
> [!div class="nextstepaction"]
-> [Azure IoT Hub Device Provisioning Service tutorials](./tutorial-set-up-cloud.md)
+> [Use custom allocation policies with Device Provisioning Service](tutorial-custom-allocation-policies.md)
:::zone pivot="programming-language-nodejs"
iot-dps Tutorial Net Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-net-provision-device-to-hub.md
ms.devlang: csharp + # Tutorial: Enroll the device to an IoT hub using the Azure IoT Hub Provisioning Service Client (.NET)
In the previous tutorial, you learned how to set up a device to connect to your
## Prerequisites
-Before you proceed, make sure to configure your device and its *Hardware Security Module* as discussed in the tutorial [Set up a device to provision using Azure IoT Hub Device Provisioning Service](./tutorial-set-up-device.md).
- * Visual Studio > [!NOTE]
iot-dps Tutorial Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-provision-device-to-hub.md
+ # Tutorial: Provision the device to an IoT hub using the Azure IoT Hub Device Provisioning Service
iot-dps Tutorial Provision Multiple Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-provision-multiple-hubs.md
This tutorial shows how to provision devices for multiple, load-balanced IoT hub
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-## Prerequisites
-
-This tutorial builds on the previous [Provision device to a hub](tutorial-provision-device-to-hub.md) tutorial.
- ## Use the Azure portal to provision a second device to a second IoT hub
-Follow the steps in the [Provision device to a hub](tutorial-provision-device-to-hub.md) tutorial to provision a second device to another IoT hub.
+Follow the steps in the quickstarts to link a second IoT hub to your DPS instance and provision a device to that hub:
+
+* [Set up the Device Provisioning Service](quick-setup-auto-provision.md)
+* [Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md)
## Add an enrollment list entry to the second device
iot-dps Tutorial Set Up Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-set-up-cloud.md
+ # Tutorial: Configure cloud resources for device provisioning with the IoT Hub Device Provisioning Service
iot-dps Tutorial Set Up Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-set-up-device.md
+ # Tutorial: Set up a device to provision using the Azure IoT Hub Device Provisioning Service
iot-fundamentals Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-glossary.md
Applies to: IoT Hub, Device Provisioning Service
### Industry 4.0
-Industry 4.0 describes the fourth revolution that's occurred in manufacturing. Companies can build connected solutions to manage the manufacturing facility and equipment more efficiently by enabling manufacturing equipment to be cloud connected, allow remote access and management from the cloud, and enable OT personnel to have a single pane view of their entire facility.
+Refers to the fourth revolution that's occurred in manufacturing. Companies can build connected [solutions](#solution) to manage the manufacturing facility and equipment more efficiently by enabling manufacturing equipment to be cloud connected, allowing remote access and management from the cloud, and enabling OT personnel to have a single pane view of their entire facility.
Applies to: IoT Hub, IoT Central
Applies to: Digital Twins
### Operational technology
-In industrial facilities, operational technology is the hardware and software that monitors and controls equipment, processes, and infrastructure.
+The hardware and software in an industrial facility that monitors and controls equipment, processes, and infrastructure.
Casing rules: Always lowercase. Abbreviation: OT
-Applies to: Iot Hub, IoT Central
+Applies to: IoT Hub, IoT Central, IoT Edge
### Operations monitoring
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Use device twins to:
* Store device-specific metadata in the cloud. For example, the deployment location of a vending machine.
-* Report current state information such as available capabilities and conditions from your device app. For example, a device is connected to your IoT hub over cellular or WiFi.
+* Report current state information such as available capabilities and conditions from your device app. For example, whether a device is connected to your IoT hub over cellular or WiFi.
* Synchronize the state of long-running workflows between device app and back-end app. For example, when the solution back end specifies the new firmware version to install, and the device app reports the various stages of the update process.
This information is kept at every level (not just the leaves of the JSON structu
## Optimistic concurrency
-Tags, desired, and reported properties all support optimistic concurrency.
+Tags, desired properties, and reported properties all support optimistic concurrency. If you need to guarantee the order of twin property updates, consider implementing synchronization at the application level by waiting for the reported properties callback before sending the next update.
+
+Tags have an ETag, as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the tag's JSON representation. You can use ETags in conditional update operations from the solution back end to ensure consistency.
Device twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
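A hedged sketch of reading a twin's ETag and then pushing a desired property change from the Azure CLI follows; it assumes the azure-iot extension is installed, the hub and device names are placeholders, and the `--etag` conditional argument should be verified against your CLI version.

```azurecli
# Read the current twin and capture its etag property.
az iot hub device-twin show --hub-name <hub-name> --device-id <device-id> --query etag

# Update a desired property; passing the captured etag (assumed flag) makes the update
# fail if someone else modified the twin in the meantime.
az iot hub device-twin update --hub-name <hub-name> --device-id <device-id> --set properties.desired.interval=30 --etag <etag-value>
```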
machine-learning Dsvm Tutorial Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-bicep.md
+
+ Title: 'Quickstart: Create an Azure Data Science VM - Bicep'
+
+description: In this quickstart, you use Bicep to quickly deploy a Data Science Virtual Machine
+++ Last updated : 05/02/2022+++++
+# Quickstart: Create an Ubuntu Data Science Virtual Machine using Bicep
+
+This quickstart will show you how to create an Ubuntu 18.04 Data Science Virtual Machine using Bicep. Data Science Virtual Machines are cloud-based virtual machines preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU.
++
+## Prerequisites
+
+An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/services/machine-learning/) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/vm-ubuntu-DSVM-GPU-or-CPU/).
++
+The following resources are defined in the Bicep file:
+
+* [Microsoft.Network/networkInterfaces](/azure/templates/microsoft.network/networkinterfaces)
+* [Microsoft.Network/networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups)
+* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
+* [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses)
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
+* [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines): Create a cloud-based virtual machine. In this template, the virtual machine is configured as a Data Science Virtual Machine running Ubuntu 18.04.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user> vmName=<vm-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>" -vmName "<vm-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-user\>** with the username for the administrator account. Replace **\<vm-name\>** with the name of your virtual machine.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
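Once the deployment succeeds, a hedged sketch for finding the VM's public IP address and connecting over SSH, reusing the admin username and VM name you supplied as parameters:

```azurecli
# Look up the public IP address of the deployed Data Science VM.
az vm show --resource-group exampleRG --name <vm-name> --show-details --query publicIps --output tsv

# Connect with the administrator account you specified at deployment time.
ssh <admin-user>@<public-ip-address>
```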
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Data Science Virtual Machine using Bicep.
+
+> [!div class="nextstepaction"]
+> [Sample programs & ML walkthroughs](dsvm-samples-and-walkthroughs.md)
marketplace Azure App Marketing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-marketing.md
Previously updated : 06/01/2021 Last updated : 04/29/2022 # Sell an Azure Application offer
marketplace Azure App Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-technical-configuration.md
Previously updated : 06/01/2021 Last updated : 04/29/2022 # Add technical details for an Azure application offer
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-aks.md
az network nic list --resource-group nodeResourceGroup -o table
## Next steps
-Create an AKS cluster [using the Azure CLI][./learn/quick-kubernetes-deploy-cli], [using Azure PowerShell][./learn/quick-kubernetes-deploy-powershell], or [using the Azure portal][./learn/quick-kubernetes-deploy-portal].
+Create an AKS cluster [using the Azure CLI](./learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](./learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](./learn/quick-kubernetes-deploy-portal).
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
If you already have a network watcher enabled in at least one region, skip to th
1. In the **Home** portal, select **More services**. In the **Filter box**, enter *Network Watcher*. When **Network Watcher** appears in the results, select it. 1. Enable a network watcher in the East US region, because that's the region the VM was deployed to in a previous step. Select **Add**, to expand it, and then select **Region** under **Subscription**, as shown in the following picture:
- :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png" alt-text="Screenshot of how to Enable Network Watcher.":::
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png" alt-text="Screenshot of how to Enable Network Watcher." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png":::
1. Select your region then select **Add**. ### Use IP flow verify
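A hedged CLI equivalent of the IP flow verify check is shown below; it assumes the VM is named *myvm*, that 10.0.0.4 is its private IP address, and that you substitute your own resource group.

```azurecli
# Check whether outbound TCP traffic from the VM to 13.107.21.200 over port 443 is allowed.
az network watcher test-ip-flow --resource-group <resource-group> --vm myvm --direction Outbound --protocol TCP --local 10.0.0.4:60000 --remote 13.107.21.200:443
```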
To determine why the rules in steps 3-5 of **Use IP flow verify** allow or deny
1. In the search box at the top of the portal, enter *myvm*. When the **myvm Regular Network Interface** appears in the search results, select it. 1. Select **Effective security rules** under **Support + troubleshooting**, as shown in the following picture:
- :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" alt-text="Screenshot of Effective security rules.":::
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" alt-text="Screenshot of Effective security rules." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" :::
In step 3 of **Use IP flow verify**, you learned that the reason the communication was allowed is because of the **AllowInternetOutbound** rule. You can see in the previous picture that the **Destination** for the rule is **Internet**. It's not clear how 13.107.21.200, the address you tested in step 3 of **Use IP flow verify**, relates to **Internet** though. 1. Select the **AllowInternetOutBound** rule, and then scroll down to **Destination**, as shown in the following picture:
openshift Howto Create A Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-backup.md
description: Learn how to create a backup of your Azure Red Hat OpenShift cluste
Last updated 06/22/2020--++ keywords: aro, openshift, az aro, red hat, cli #Customer intent: As an operator, I need to create an Azure Red Hat OpenShift cluster application backup
In this article, you'll prepare your environment to create an Azure Red Hat Open
> * Setup the prerequisites and install the necessary tools > * Create an Azure Red Hat OpenShift 4 application backup
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
EOF
## Install Velero on Azure Red Hat OpenShift 4 cluster
-This step will install Velero into its own project and the [custom resource definitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) necessary to do backups and restores with Velero. Make sure you are successfully logged in to an Azure Red Hat OpenShift v4 cluster.
+This step will install Velero into its own project and the [custom resource definitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) necessary to do backups and restores with Velero. Make sure you're successfully logged in to an Azure Red Hat OpenShift v4 cluster.
```bash
A successful backup will output `phase:Completed` and the objects will live in t
## Create a backup with Velero to include snapshots
-To create an application backup with Velero to include the persistent volumes of your application, you'll need to include the namespace that the application is in as well as to include the `snapshot-volumes=true` flag when creating the backup
+To create an application backup with Velero to include the persistent volumes of your application, you'll need to include the namespace that the application is in and include the `snapshot-volumes=true` flag when creating the backup.
```bash velero backup create <name of backup> --include-namespaces=nginx-example --snapshot-volumes=true --include-cluster-resources=true
oc get backups -n velero <name of backup> -o yaml
A successful backup with output `phase:Completed` and the objects will live in the container in the storage account.
-For more information about how to create backups and restores using Velero see [Backup OpenShift resources the native way](https://www.openshift.com/blog/backup-openshift-resources-the-native-way)
+For more information, see [Backup OpenShift resources the native way](https://www.openshift.com/blog/backup-openshift-resources-the-native-way)
## Next steps
openshift Howto Create A Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-restore.md
description: Learn how to create a restore of your Azure Red Hat OpenShift clust
Last updated 06/22/2020--++ keywords: aro, openshift, az aro, red hat, cli #Customer intent: As an operator, I need to create an Azure Red Hat OpenShift cluster application restore
In this article, you'll prepare your environment to create an Azure Red Hat Open
> * Setup the prerequisites and install the necessary tools > * Create an Azure Red Hat OpenShift 4 application restore
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
oc get restore -n velero <name of restore created previously> -o yaml
``` When the phase says `Completed`, your Azure Red Hat 4 application should be restored.
-For more information about how to create backups and restores using Velero see [Backup OpenShift resources the native way](https://www.openshift.com/blog/backup-openshift-resources-the-native-way)
+For more information, see [Backup OpenShift resources the native way](https://www.openshift.com/blog/backup-openshift-resources-the-native-way)
## Next steps
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
description: Learn how to create an Azure Files StorageClass on Azure Red Hat Op
Last updated 10/16/2020--++ keywords: aro, openshift, az aro, red hat, cli, azure file #Customer intent: As an operator, I need to create a StorageClass on Azure Red Hat OpenShift using Azure File dynamic provisioner
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Title: Collect information for a site
-description: Learn about the information you'll need to create a site in an existing private mobile network using the Azure portal.
+description: Learn about the information you'll need to create a site in an existing private mobile network.
# Collect the required information for a site
-Azure Private 5G Core Preview private mobile networks include one or more sites. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. This how-to guide takes you through the process of collecting the information you'll need to create a new site. You'll use this information to complete the steps in [Create a site](create-a-site.md).
+Azure Private 5G Core Preview private mobile networks include one or more sites. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. This how-to guide takes you through the process of collecting the information you'll need to create a new site.
+
+You can use this information to create a site in an existing private mobile network using the [Azure portal](create-a-site.md). You can also use it as part of an ARM template to [deploy a new private mobile network and site](deploy-private-mobile-network-with-site-arm-template.md), or [add a new site to an existing private mobile network](create-site-arm-template.md).
## Prerequisites You must have completed all of the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses), [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools), and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
-## Collect Mobile Network Site resource values
+## Collect mobile network site resource values
-Collect all the values in the following table for the Mobile Network Site resource that will represent your site.
+Collect all the values in the following table for the mobile network site resource that will represent your site.
|Value |Field name in Azure portal | |||
- |The Azure subscription to use to create the Mobile Network Site resource. You must use the same subscription for all resources in your private mobile network deployment. |**Project details: Subscription**|
- |The Azure resource group in which to create the Mobile Network Site resource. We recommend that you use the same resource group that already contains your private mobile network. |**Project details: Resource group**|
+ |The Azure subscription to use to create the mobile network site resource. You must use the same subscription for all resources in your private mobile network deployment. |**Project details: Subscription**|
+ |The Azure resource group in which to create the mobile network site resource. We recommend that you use the same resource group that already contains your private mobile network. |**Project details: Resource group**|
|The name for the site. |**Instance details: Name**|
 - |The region in which you're creating the Mobile Network Site resource. We recommend that you use the East US region. |**Instance details: Region**|
 - |The private mobile network resource representing the network to which you're adding the site. |**Instance details: Mobile network**|
 + |The region in which you're creating the mobile network site resource. We recommend that you use the East US region. |**Instance details: Region**|
 + |The mobile network resource representing the private mobile network to which you're adding the site. |**Instance details: Mobile network**|
## Collect custom location information
-Collect the name of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+Identify the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
+
+- If you're going to create your site using the Azure portal, collect the name of the custom location.
+- If you're going to create your site using an ARM template, collect the full resource ID of the custom location.
## Collect access network values
Collect all the values in the following table to define the packet core instance
|Value |Field name in Azure portal | |||
- | The IP address for the packet core instance N2 signaling interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 address (signaling)**
+ | The IP address for the packet core instance N2 signaling interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 address (signaling)**|
+ | The IP address for the packet core instance N3 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
| The network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 subnet** and **N3 subnet**|
| The access subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N2 gateway** and **N3 gateway**|
Collect all the values in the following table to define the packet core instance
|Value |Field name in Azure portal |
|||
|The name of the data network. |**Data network**|
+ | The IP address for the packet core instance N6 interface. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |N/A. You'll only need this value if you're using an ARM template to create the site.|
|The network address of the data subnet in CIDR notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 subnet**|
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 gateway**|
- | The network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation for this site. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the core network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**|
## Next steps
You can now use the information you've collected to create the site.
-- [Create a site](create-a-site.md)
+- [Create a site - Azure portal](create-a-site.md)
+- [Create a site - ARM template](create-site-arm-template.md)
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Title: Collect information for your private mobile network
-description: This how-to guide shows how to collect the information you need to deploy a private mobile network through Azure Private 5G Core Preview using the Azure portal.
+description: This how-to guide shows how to collect the information you need to deploy a private mobile network through Azure Private 5G Core Preview.
# Collect the required information to deploy a private mobile network
-This how-to guide takes you through the process of collecting the information you'll need to deploy a private mobile network through Azure Private 5G Core Preview using the Azure portal. You'll use this information to complete the steps in [Deploy a private mobile network - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+This how-to guide takes you through the process of collecting the information you'll need to deploy a private mobile network through Azure Private 5G Core Preview.
+
+- You can use this information to deploy a private mobile network through the [Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+- Alternatively, you can use the information to quickly deploy a private mobile network with a single site using an [Azure Resource Manager template (ARM template)](deploy-private-mobile-network-with-site-arm-template.md). In this case, you'll also need to [collect information for the site](collect-required-information-for-a-site.md).
## Prerequisites
You must have completed all of the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
-## Collect Mobile Network resource values
+## Collect mobile network resource values
-Collect all of the following values for the Mobile Network resource that will represent your private mobile network.
+Collect all of the following values for the mobile network resource that will represent your private mobile network.
|Value |Field name in Azure portal |
|||
- |The Azure subscription to use to deploy the Mobile Network resource. You must use the same subscription for all resources in your private mobile network deployment. This is the subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). |**Project details: Subscription**
- |The Azure resource group to use to deploy the Mobile Network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). |**Project details: Resource group**|
+ |The Azure subscription to use to deploy the mobile network resource. You must use the same subscription for all resources in your private mobile network deployment. You identified this subscription in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). |**Project details: Subscription**
+ |The Azure resource group to use to deploy the mobile network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). |**Project details: Resource group**|
|The name for the private mobile network. |**Instance details: Mobile network name**|
|The region in which you're deploying the private mobile network. We recommend you use the East US region. |**Instance details: Region**|
|The mobile country code for the private mobile network. |**Network configuration: Mobile country code (MCC)**|
As part of creating your private mobile network, you can provision one or more S
If you want to provision SIMs as part of deploying your private mobile network, you must choose one of the following provisioning methods:
-- Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a small number of SIMs.
-- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims).
+- Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a few SIMs.
+- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims). You'll need to use this option if you're deploying your private mobile network with an ARM template.
You must then collect each of the values given in the following table for each SIM resource you want to provision.
|Value |Field name in Azure portal | JSON file parameter name |
||||
- |The name for the SIM resource. This must only contain alphanumeric characters, dashes, and underscores. |**SIM name**|`simName`|
- |The Integrated Circuit Card Identification Number (ICCID). This identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. This is a unique numerical value between 19 and 20 digits in length, beginning with 89. |**ICCID**|`integratedCircuitCardIdentifier`|
- |The international mobile subscriber identity (IMSI). This is a unique number (usually 15 digits) identifying a device or user in a mobile network. |**IMSI**|`internationalMobileSubscriberIdentity`|
- |The Authentication Key (Ki). This is a unique 128-bit value assigned to the SIM by an operator, and is used in conjunction with the derived operator code (OPc) to authenticate a user. This must be a 32-character string, containing hexadecimal characters only. |**Ki**|`authenticationKey`|
- |The derived operator code (OPc). This is derived from the SIM's Ki and the network's operator code (OP), and is used by the packet core to authenticate a user using a standards-based algorithm. This must be a 32-character string, containing hexadecimal characters only. |**Opc**|`operatorKeyCode`|
- |The type of device that is using this SIM. This is an optional, free-form string. You can use it as required to easily identify device types that are using the enterprise's mobile networks. |**Device type**|`deviceType`|
+ |The name for the SIM resource. The name must only contain alphanumeric characters, dashes, and underscores. |**SIM name**|`simName`|
+ |The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. It's a unique numerical value between 19 and 20 digits in length, beginning with 89. |**ICCID**|`integratedCircuitCardIdentifier`|
+ |The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. |**IMSI**|`internationalMobileSubscriberIdentity`|
+ |The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. The Ki must be a 32-character string, containing hexadecimal characters only. |**Ki**|`authenticationKey`|
+ |The derived operator code (OPc). The OPc is derived from the SIM's Ki and the network's operator code (OP), and is used by the packet core to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. |**Opc**|`operatorKeyCode`|
+ |The type of device that is using this SIM. This value is an optional, free-form string. You can use it as required to easily identify device types that are using the enterprise's mobile networks. |**Device type**|`deviceType`|
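
To see how these JSON parameter names fit together, here's a minimal, illustrative sketch of an import file containing a single SIM entry. The values shown are placeholders only, and the authoritative format is described in the next section.

```json
[
    {
        "simName": "SIM1",
        "integratedCircuitCardIdentifier": "8912345678901234566",
        "internationalMobileSubscriberIdentity": "001019990010001",
        "authenticationKey": "00112233445566778899AABBCCDDEEFF",
        "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
        "deviceType": "Cellphone"
    }
]
```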
### JSON file format for provisioning SIMs
The following example shows the file format you'll need if you want to provision
## Decide whether you want to use the default service and SIM policy
-You'll be given the option of creating a default service and SIM policy as part of deploying your private mobile network. They allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
+ Azure Private 5G Core offers a default service and SIM policy that allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
+
+- If you're using the ARM template in [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md), the default service and SIM policy are automatically included.
-Decide whether the default service and SIM policy are suitable for the initial use of your private mobile network. You can find information on each of the specific settings for these resources in [Default service and SIM policy](default-service-sim-policy.md) if you need it.
+- If you use the Azure portal to deploy your private mobile network, you'll be given the option of creating the default service and SIM policy. You'll need to decide whether the default service and SIM policy are suitable for the initial use of your private mobile network. You can find information on each of the specific settings for these resources in [Default service and SIM policy](default-service-sim-policy.md) if you need it.
-If they aren't suitable, you can choose to deploy the private mobile network without any services or SIM policies. In this case, any SIMs you provision won't be brought into service when you create your private mobile network. You'll need to create your own services and SIM policies later.
+ If they aren't suitable, you can choose to deploy the private mobile network without any services or SIM policies. In this case, any SIMs you provision won't be brought into service when you create your private mobile network. You'll need to create your own services and SIM policies later.
For detailed information on services and SIM policies, see [Policy control](policy-control.md).
For detailed information on services and SIM policies, see [Policy control](poli
You can now use the information you've collected to deploy your private mobile network.
- [Deploy a private mobile network - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)
+- [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md)
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
Each service has a set of rules to identify the service data flows (SDFs) to whi
In this how-to guide, you'll learn how to collect all the required information to configure a service for Azure Private 5G Core Preview.
-You'll enter each value you collect into its corresponding field (given in the **Azure portal field name** columns in the tables below) as part of the procedure in [Configure a service for Azure Private 5G Core Preview - Azure portal](configure-service-azure-portal.md).
+- You can use this information to configure a service through the Azure portal. In this case, you'll enter each value you collect into its corresponding field (given in the **Azure portal field name** columns in the tables below) as part of the procedure in [Configure a service for Azure Private 5G Core Preview - Azure portal](configure-service-azure-portal.md).
+- Alternatively, you can use the information to create a simple service and SIM policy using the example Azure Resource Manager template (ARM template) given in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md). The example template uses default values for all settings, but you can choose to replace a subset of the default settings with your own values. The **Included in example ARM template** columns in the tables below indicate which settings can be changed.
## Prerequisites
Each service has many top-level settings that determine its name and the QoS cha
Collect each of the values in the table below for your service.
-| Value | Azure portal field name |
-|--|--|
-| The name of the service. This name must only contain alphanumeric characters, dashes, or underscores. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Service name** |
-| A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. This value must be an integer between 0 and 255 and must be unique among all services configured on the packet core instance. A lower value means a higher priority. | **Service precedence** |
-| The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** |
-| The maximum bit rate (MBR) for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** |
-| The default QoS Flow Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** |
-| The default 5G QoS Indicator (5QI) value for this service. The 5QI value identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>We recommend you choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). Non-GBR QoS Flows are in the following ranges: 5-9; 69-70; 79-80.</br></br>You can also choose a non-standardized 5QI value.</br></br>Azure Private 5G Core doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |
-| The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** |
-| The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** |
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
+| The name of the service. This name must only contain alphanumeric characters, dashes, or underscores. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Service name** |Yes|
+| A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. This value must be an integer between 0 and 255 and must be unique among all services configured on the packet core instance. A lower value means a higher priority. | **Service precedence** |Yes|
+| The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** | Yes|
+| The maximum bit rate (MBR) for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes|
+| The default QoS Flow Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** |No. Defaults to 9.|
+| The default 5G QoS Indicator (5QI) value for this service. The 5QI value identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>We recommend you choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). Non-GBR QoS Flows are in the following ranges: 5-9; 69-70; 79-80.</br></br>You can also choose a non-standardized 5QI value.</br></br>Azure Private 5G Core doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
+| The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** |No. Defaults to **May not preempt**.|
+| The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
## Data flow policy rule(s)
For each data flow policy rule, take the following steps:
- Collect the values in [Collect data flow policy rule values](#collect-data-flow-policy-rule-values) to determine whether SDFs matching this data flow policy rule will be allowed or blocked, and how this data flow policy rule should be prioritized against other data flow policy rules.
- Collect the values in [Collect data flow template values](#collect-data-flow-template-values) for one or more data flow templates to use for this data flow policy rule. Data flow templates provide the packet filters the packet core instance will use to match on SDFs.
+> [!NOTE]
+> The ARM template in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md) only configures a single data flow policy rule and data flow template.
+
### Collect data flow policy rule values
Collect the values in the table below for each data flow policy rule you want to use on this service.
-| Value | Azure portal field name |
-|--|--|
-| The name of the data flow policy rule. This name must only contain alphanumeric characters, dashes, or underscores. It must not match any other rule configured on the same service. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Rule name** |
-| A precedence value that the packet core instance must use to decide between data flow policy rules. This value must be an integer between 0 and 255 and must be unique among all data flow policy rules configured on the packet core instance. A lower value means a higher priority. | **Policy rule precedence** |
-| A traffic control setting to determine whether flows that match a data flow template on this data flow policy rule are permitted. Choose one of the following values: </br></br>- **Enabled** - Matching flows are permitted. </br>- **Blocked** - Matching flows are blocked. | **Traffic control** |
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
+| The name of the data flow policy rule. This name must only contain alphanumeric characters, dashes, or underscores. It must not match any other rule configured on the same service. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Rule name** |Yes|
+| A precedence value that the packet core instance must use to decide between data flow policy rules. This value must be an integer between 0 and 255 and must be unique among all data flow policy rules configured on the packet core instance. A lower value means a higher priority. | **Policy rule precedence** |Yes|
+| A traffic control setting to determine whether flows that match a data flow template on this data flow policy rule are permitted. Choose one of the following values: </br></br>- **Enabled** - Matching flows are permitted. </br>- **Blocked** - Matching flows are blocked. | **Traffic control** |Yes|
### Collect data flow template values
Collect the following values for each data flow template you want to use for a particular data flow policy rule.
-| Value | Azure portal field name |
-|--|--|
-| The name of the data flow template. This name must only contain alphanumeric characters, dashes, or underscores. It must not match any other template configured on the same rule. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Template name** |
-| The protocol(s) allowed for this flow. </br></br>If you want to allow the flow to use any protocol within the Internet Protocol suite, you can set this field to **All**.</br></br>If you want to allow a selection of protocols, you can select them from the list displayed in the field. If a protocol isn't in the list, you can specify it by entering its corresponding IANA Assigned Internet Protocol Number, as described in the [IANA website](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). For example, for IGMP, you must use 2. | **Protocols** |
-| The direction of this flow. Choose one of the following values: </br></br>- **Uplink** - traffic flowing away from UEs. </br>- **Downlink** - traffic flowing towards UEs.</br>- **Bidirectional** - traffic flowing in both directions. | **Direction** |
-| The remote IP address(es) to which UEs will connect for this flow. </br></br>If you want to allow connections on any IP address, you must use the value `any`. </br></br>Otherwise, you must provide each remote IP address or IP address range to which the packet core instance will connect for this flow. Provide these IP addresses in CIDR notation, including the netmask (for example, `192.0.2.0/24`). </br></br>Provide a comma-separated list of IP addresses and IP address ranges. For example: </br></br>`192.0.2.54/32, 198.51.100.0/24` | **Remote IPs** |
-| The remote port(s) to which UEs will connect for this flow. You can specify one or more ports or port ranges. Port ranges must be specified as `<FirstPort>-<LastPort>`. </br></br>This setting is optional. If you don't specify a value, the packet core instance will allow connections for all remote ports. </br></br>Provide a comma-separated list of your chosen ports and port ranges. For example: </br></br>`8080, 8082-8085` | **Ports** |
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
+| The name of the data flow template. This name must only contain alphanumeric characters, dashes, or underscores. It must not match any other template configured on the same rule. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Template name** |Yes|
+| The protocol(s) allowed for this flow. </br></br>If you want to allow the flow to use any protocol within the Internet Protocol suite, you can set this field to **All**.</br></br>If you want to allow a selection of protocols, you can select them from the list displayed in the field. If a protocol isn't in the list, you can specify it by entering its corresponding IANA Assigned Internet Protocol Number, as described in the [IANA website](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). For example, for IGMP, you must use 2. | **Protocols** |Yes|
+| The direction of this flow. Choose one of the following values: </br></br>- **Uplink** - traffic flowing away from UEs. </br>- **Downlink** - traffic flowing towards UEs.</br>- **Bidirectional** - traffic flowing in both directions. | **Direction** |Yes|
+| The remote IP address(es) to which UEs will connect for this flow. </br></br>If you want to allow connections on any IP address, you must use the value `any`. </br></br>Otherwise, you must provide each remote IP address or IP address range to which the packet core instance will connect for this flow. Provide these IP addresses in CIDR notation, including the netmask (for example, `192.0.2.0/24`). </br></br>Provide a comma-separated list of IP addresses and IP address ranges. For example: </br></br>`192.0.2.54/32, 198.51.100.0/24` | **Remote IPs** |Yes|
+| The remote port(s) to which UEs will connect for this flow. You can specify one or more ports or port ranges. Port ranges must be specified as `<FirstPort>-<LastPort>`. </br></br>This setting is optional. If you don't specify a value, the packet core instance will allow connections for all remote ports. </br></br>Provide a comma-separated list of your chosen ports and port ranges. For example: </br></br>`8080, 8082-8085` | **Ports** |No. Defaults to no value to allow connections to all remote ports.|
## Next steps
+You can use this information to either create a service using the Azure portal, or use the example ARM template to create a simple service and SIM policy.
+ - [Configure a service for Azure Private 5G Core Preview - Azure portal](configure-service-azure-portal.md)
+- [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md)
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
SIM policies allow you to define different sets of policies and interoperability
In this how-to guide, we'll collect all the required information to configure a SIM policy.
-You'll enter each value you collect into its corresponding field (given in the **Field name in Azure portal** columns in the tables below) as part of the procedure in [Configure a SIM policy for Azure Private 5G Core Preview - Azure portal](configure-sim-policy-azure-portal.md).
+- You can use this information to configure a SIM policy through the Azure portal. You'll enter each value you collect into its corresponding field (given in the **Field name in Azure portal** columns in the tables below) as part of the procedure in [Configure a SIM policy for Azure Private 5G Core Preview - Azure portal](configure-sim-policy-azure-portal.md).
+- Alternatively, you can use the information to create a simple service and SIM policy using the example Azure Resource Manager template (ARM template) given in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md). The example template uses default values for all settings, but you can choose to replace a subset of the default settings with your own values. The **Included in example ARM template** columns in the tables below indicate which settings can be changed.
## Prerequisites
Read [Policy control](policy-control.md) and make sure you're familiar with Azur
## Collect top-level setting values
-SIM policies have top-level settings that are applied to every SIM to which the SIM policy is assigned. These settings include the UE aggregated maximum bit rate (UE-AMBR) and RAT/Frequency Priority ID (RFSP ID).
+SIM policies have top-level settings that are applied to every SIM to which the SIM policy is assigned. These settings include the UE aggregated maximum bit rate (UE-AMBR) and RAT/Frequency Priority ID (RFSP ID).
Collect each of the values in the table below for your SIM policy.
-| Value | Azure portal field name |
-|--|--|
-| The name of the private mobile network for which you're configuring this SIM policy. | N/A |
-| The SIM policy name. The name must be unique across all SIM policies configured for the private mobile network. | **Policy name** |
-| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Uplink** |
-| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Downlink** |
-| The interval between UE registrations for UEs using SIMs to which this SIM policy is assigned, given in seconds. Choose an integer that is 30 or greater. If you omit the interval when first creating the SIM policy, it will default to 3,240 seconds (54 minutes). | **Registration timer** |
-| The subscriber profile ID for RAT/Frequency Priority ID (RFSP ID) for this SIM policy, as defined in TS 36.413. If you want to set an RFSP ID, you must specify an integer between 1 and 256. | **RFSP index** |
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
+| The name of the private mobile network for which you're configuring this SIM policy. | N/A | Yes |
+| The SIM policy name. The name must be unique across all SIM policies configured for the private mobile network. | **Policy name** |Yes|
+| The UE-AMBR for traffic traveling away from UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Uplink** |Yes|
+| The UE-AMBR for traffic traveling towards UEs across all non-GBR QoS Flows. The UE-AMBR must be given in the following form: </br></br>`<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the UE-AMBR parameter. | **Total bandwidth allowed - Downlink** |Yes|
+| The interval between UE registrations for UEs using SIMs to which this SIM policy is assigned, given in seconds. Choose an integer that is 30 or greater. If you omit the interval when first creating the SIM policy, it will default to 3,240 seconds (54 minutes). | **Registration timer** |No. Defaults to 3,240 seconds.|
+| The subscriber profile ID for RAT/Frequency Priority ID (RFSP ID) for this SIM policy, as defined in TS 36.413. If you want to set an RFSP ID, you must specify an integer between 1 and 256. | **RFSP index** |No. Defaults to no value.|
## Collect information for the network scope
Within each SIM policy, you'll have a *network scope*. The network scope represents the data network to which SIMs assigned to the SIM policy will have access. It allows you to define the QoS policy settings used for the default QoS Flow for PDU sessions involving these SIMs. These settings include the session aggregated maximum bit rate (Session-AMBR), 5G QoS Indicator (5QI) value, and Allocation and Retention Policy (ARP) priority level. You can also determine the services that will be offered to SIMs.
Collect each of the values in the table below for the network scope.
-| Value | Azure portal field name |
-|||
-|The Data Network Name (DNN) of the data network. The DNN must match the one you used when creating the private mobile network. | **Data network** |
-|The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** |
-|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Uplink** |
-|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Downlink** |
-|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>Choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). These values are in the following ranges: 5-9; 69-70; 79-80. </br></br>You can also choose a non-standardized 5QI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |
-|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** |
-|The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** |
-|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** |
-|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** |
-|An additional PDU session type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
+|The Data Network Name (DNN) of the data network. The DNN must match the one you used when creating the private mobile network. | **Data network** | Yes |
+|The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** | No. The SIM policy will only use the service you configure using the same template. |
+|The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Uplink** | Yes |
+|The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS Flows of a given PDU session. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. </br></br>See 3GPP TS 23.501 for a full description of the Session-AMBR parameter. | **Session aggregate maximum bit rate - Downlink** | Yes |
+|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows. See 3GPP TS 23.501 for a full description of the 5QI parameter. </br></br>Choose a 5QI value that corresponds to a non-GBR QoS Flow (as described in 3GPP TS 23.501). These values are in the following ranges: 5-9; 69-70; 79-80. </br></br>You can also choose a non-standardized 5QI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI values corresponding to GBR or delay-critical GBR QoS Flows. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
+|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). See 3GPP TS 23.501 for a full description of the ARP parameters. | **Allocation and Retention Priority level** | No. Defaults to 1. |
+|The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption capability** | No. Defaults to **May not preempt**.|
+|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** </br></br>See 3GPP TS 23.501 for a full description of the ARP parameters. | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
+|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
+|An additional PDU session type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
## Next steps
+You can use this information to either create a SIM policy using the Azure portal, or use the example ARM template to create a simple service and SIM policy.
+ - [Configure a SIM policy for Azure Private 5G Core](configure-sim-policy-azure-portal.md)
+- [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md)
+
private-5g-core Configure Service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-azure-portal.md
Title: Configure a service
+ Title: Configure a service - Azure portal
description: With this how-to guide, learn how to configure a service for Azure Private 5G Core Preview through the Azure portal.
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
+
+ Title: Configure a service and SIM policy - ARM template
+
+description: This how-to guide shows how to configure a service and SIM policy using an Azure Resource Manager (ARM) template.
++++ Last updated : 03/21/2022+++
+# Configure a service and SIM policy using an ARM template
+
+*Services* and *SIM policies* are the key components of Azure Private 5G Core Preview's customizable policy control, which allows you to provide flexible traffic handling. You can determine exactly how your packet core instance applies quality of service (QoS) characteristics to service data flows (SDFs) to meet your deployment's needs. For more information, see [Policy control](policy-control.md). In this how-to guide, you'll learn how to use an Azure Resource Manager template (ARM template) to create a simple service and SIM policy.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-sim-policy%2Fazuredeploy.json)
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- Identify the name of the Mobile Network resource corresponding to your private mobile network and the resource group containing it.
+- Identify the name of the data network to which your private mobile network connects.
+- The ARM template is populated with values to configure a default service and SIM policy that allows all traffic in both directions.
+
+ If you want to create a service and SIM policy for another purpose, use the information in [Collect the required information for a service](collect-required-information-for-service.md) and [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md) to design a service and SIM policy to meet your requirements. You'll enter these new values as part of deploying the ARM template.
+
+## Review the template
+
+The template used in this how-to guide is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-create-sim-policy).
++
+Two Azure resources are defined in the template.
+
+- [**Microsoft.MobileNetwork/mobileNetworks/services**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/services): create a service.
+- [**Microsoft.MobileNetwork/mobileNetworks/simPolicies**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/simPolicies): create a SIM policy.
+
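As a rough orientation, these two resources sit in the template's `resources` array along the lines of the simplified sketch below. This is not the full template: the `apiVersion` is a placeholder, the `properties` objects are left empty, and the `<mobile-network-name>/<service-name>` child-resource naming is the standard ARM pattern for nested resources rather than the exact names the template uses.

```json
{
    "resources": [
        {
            "type": "Microsoft.MobileNetwork/mobileNetworks/services",
            "apiVersion": "<api-version>",
            "name": "<mobile-network-name>/<service-name>",
            "properties": {}
        },
        {
            "type": "Microsoft.MobileNetwork/mobileNetworks/simPolicies",
            "apiVersion": "<api-version>",
            "name": "<mobile-network-name>/<sim-policy-name>",
            "properties": {}
        }
    ]
}
```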
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template.
+
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-sim-policy%2Fazuredeploy.json)
+
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+ - **Subscription:** select the Azure subscription you used to create your private mobile network.
+ - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
+ - **Region:** select **East US**.
+ - **Location:** leave this field unchanged.
+ - **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network.
+ - **Existing Slice Name:** enter **slice-1**.
+ - **Existing Data Network Name:** enter the name of the data network to which your private mobile network connects.
+
+1. If you want to use the default service and SIM policy, leave the remaining fields unchanged. Otherwise, fill out the remaining fields to match the service and SIM policy you want to configure, using the information you collected from [Collect the required information for a service](collect-required-information-for-service.md) and [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md).
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to create the service and SIM policy. The Azure portal will display a confirmation screen when the deployment is complete.
+
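If you'd rather script this deployment than fill out the portal form, the same values can typically be supplied through a standard ARM parameters file. The following is a sketch only: the parameter names are assumptions inferred from the portal field names above, so check the template's `parameters` section for the exact names before using it.

```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "existingMobileNetworkName": { "value": "<your-mobile-network-name>" },
        "existingSliceName": { "value": "slice-1" },
        "existingDataNetworkName": { "value": "<your-data-network-name>" }
    }
}
```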
+## Review deployed resources
+
+1. On the confirmation screen, select **Go to resource group**.
+
+ :::image type="content" source="media/template-deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
+
+1. Confirm that your service and SIM policy have been created in the resource group.
+
    :::image type="content" source="media/configure-service-sim-policy-arm-template/service-and-sim-policy-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing a newly created service and SIM policy.":::
+
+## Next steps
+
+You can now assign the SIM policy to your SIMs to bring them into service.
+
+- [Assign a SIM policy to a SIM](provision-sims-azure-portal.md#assign-sim-policies)
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Title: Create a site
+ Title: Create a site - Azure portal
description: This how-to guide shows how to create a site in your private mobile network.
Last updated 01/27/2022
-# Create a site - Azure Private 5G Core Preview
+# Create a site using the Azure portal
-Azure Private 5G Core private mobile networks include one or more *sites*. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. In this how-to guide, you'll learn how to create a site in your private mobile network.
+Azure Private 5G Core Preview private mobile networks include one or more *sites*. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. In this how-to guide, you'll learn how to create a site in your private mobile network using the Azure portal.
## Prerequisites
private-5g-core Create Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-overview-dashboard.md
+
+ Title: Create an overview Log Analytics dashboard
+
+description: Information on how to use an ARM template to create an overview Log Analytics dashboard you can use to monitor a packet core instance.
++++ Last updated : 03/20/2022+++
+# Create an overview Log Analytics dashboard using an ARM template
+
+Log Analytics dashboards can visualize all of your saved log queries, giving you the ability to find, correlate, and share data about your private mobile network. In this how-to guide, you'll learn how to create an example overview dashboard using an Azure Resource Manager (ARM) template. This dashboard includes charts to monitor important Key Performance Indicators (KPIs) for a packet core instance's operation, including throughput and the number of connected devices.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-dashboard%2Fazuredeploy.json)
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. <!-- Need to confirm whether these access requirements are correct. -->
+- Carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md).
+- Collect the following information.
+
+ - The name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
+ - The name of the resource group containing the **Kubernetes - Azure Arc** resource.
+
+## Review the template
+
+<!--
+Need to confirm whether the following link is correct.
+-->
+
+The template used in this how-to guide is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-create-dashboard). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-dashboard/azuredeploy.json).
+
+The template defines one [**Microsoft.Portal/dashboards**](/azure/templates/microsoft.portal/dashboards) resource, which is a dashboard that displays data about your packet core instance's activity.
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template.
+
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-dashboard%2Fazuredeploy.json)
+
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+ - **Subscription:** set this to the Azure subscription you used to create your private mobile network.
+ - **Resource group:** set this to the resource group in which you want to create the dashboard. You can use an existing resource group or create a new one.
+ - **Region:** select **East US**.
+ - **Connected Cluster Name:** enter the name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
+ - **Connected Cluster Resource Group:** enter the name of the resource group containing the **Kubernetes - Azure Arc** resource.
+ - **Dashboard Display Name:** enter the name you want to use for the dashboard.
+ - **Location:** leave this field unchanged.
+
+ :::image type="content" source="media/create-overview-dashboard/dashboard-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the dashboard ARM template.":::
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to create the dashboard. The Azure portal will display a confirmation screen when the dashboard has been created.
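+
+If you'd rather deploy the dashboard template from the command line, you can use the Azure CLI instead of the portal form above. The following is a minimal sketch only: the parameter names are assumed from the portal field labels (for example, `connectedClusterName` for **Connected Cluster Name**), so check the parameters section of the template's azuredeploy.json for the exact names before running it.
+
+```azurecli
+# Sketch only - parameter names are assumed from the portal field labels above.
+az deployment group create \
+  --resource-group <dashboard-resource-group> \
+  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-dashboard/azuredeploy.json" \
+  --parameters connectedClusterName=<arc-cluster-name> \
+               connectedClusterResourceGroup=<arc-cluster-resource-group> \
+               dashboardDisplayName="Packet core overview"
+```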
+
+## Review deployed resources
+
+1. On the confirmation screen, select **Go to resource**.
+
+ :::image type="content" source="media/create-overview-dashboard/deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
+
+1. Select **Go to dashboard**.
+
+ :::image type="content" source="media/create-overview-dashboard/go-to-dashboard-option.png" alt-text="Screenshot of the Azure portal showing the Go to dashboard option.":::
+
+1. The Azure portal displays the new overview dashboard, with several tiles providing information on important KPIs for the packet core instance.
+
+ :::image type="content" source="media/create-overview-dashboard/overview-dashboard.png" alt-text="Screenshot of the Azure portal showing the overview dashboard. It includes tiles for connected devices, gNodeBs, PDU sessions and throughput." lightbox="media/create-overview-dashboard/overview-dashboard.png":::
+
+## Next steps
+
+You can now begin using the overview dashboard to monitor your packet core instance's activity. You can also use the following articles to add more queries to the dashboard.
+
+- [Learn more about constructing queries](monitor-private-5g-core-with-log-analytics.md#construct-queries).
+- [Learn more about how to pin a query to the dashboard](../azure-monitor/visualize/tutorial-logs-dashboards.md#visualize-a-log-query).
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
+
+ Title: Create a site - ARM template
+
+description: This how-to guide shows how to create a site in your private mobile network using an Azure Resource Manager (ARM) template.
++++ Last updated : 03/16/2022+++
+# Create a site using an ARM template
+
+Azure Private 5G Core Preview private mobile networks include one or more *sites*. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. In this how-to guide, you'll learn how to create a site in your private mobile network using an Azure Resource Manager template (ARM template).
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-new-site%2Fazuredeploy.json)
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Identify the names of the interfaces corresponding to ports 5 and 6 on your Azure Stack Edge Pro device.
+- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md).
+
+## Review the template
+
+The template used in this how-to guide is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-create-new-site).
++
+Four Azure resources are defined in the template.
+
+- [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template.
+
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-new-site%2Fazuredeploy.json)
+
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+ | Field | Value |
+ |--|--|
+ | **Subscription** | Select the Azure subscription you used to create your private mobile network. |
+ | **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. |
+ | **Region** | Select **East US**. |
+ | **Location** | Leave this field unchanged. |
+ | **Existing Mobile Network Name** | Enter the name of the mobile network resource representing your private mobile network. |
+ | **Existing Data Network Name** | Enter the name of the data network to which your private mobile network connects. |
+ | **Site Name** | Enter a name for your site. |
+ | **Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ | **Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
+ | **Data Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ | **Data Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ | **Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |
+ | **Access Gateway** | Enter the access subnet default gateway. |
+ | **User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
+ | **User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ | **User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. |
+ | **User Plane Data Interface Gateway** | Enter the data subnet default gateway. |
+ |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
+ |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
+ | **Core Network Technology** | Leave this field unchanged. |
+ | **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
+ | **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to create the site. The Azure portal will display a confirmation screen when the site has been created.
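+
+If you prefer to script the deployment rather than fill in the portal form, the same template can be deployed with the Azure CLI. This is a sketch only: it assumes you've saved the values from the table above into a standard ARM parameters file (here called `site.parameters.json`, a name chosen for illustration), using the parameter names defined in the template's azuredeploy.json.
+
+```azurecli
+# Sketch only - site.parameters.json is a hypothetical parameters file built from the table above.
+az deployment group create \
+  --resource-group <mobile-network-resource-group> \
+  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-new-site/azuredeploy.json" \
+  --parameters @site.parameters.json
+```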
+
+## Review deployed resources
+
+1. On the confirmation screen, select **Go to resource group**.
+
+ :::image type="content" source="media/template-deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
+
+1. Confirm that the resource group contains the following new resources:
+
+ - A **Mobile Network Site** resource representing the site as a whole.
+ - A **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site.
+ - A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site.
+ - An **Attached Data Network** resource representing the site's view of the data network.
+
+ :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png":::
+
+## Next steps
+
+If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows.
+
+- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
+
+ Title: Deploy a private mobile network and site - ARM template
+
+description: Learn how to deploy a private mobile network and site using an Azure Resource Manager template (ARM template).
++++++ Last updated : 03/23/2022++
+# Quickstart: Deploy a private mobile network and site - ARM template
+
+Azure Private 5G Core is an Azure cloud service for deploying and managing 5G core network functions on an Azure Stack Edge device, as part of an on-premises private mobile network for enterprises. This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy the following.
+
+- A private mobile network.
+- A site.
+- The default service and SIM policy (as described in [Default service and SIM policy](default-service-sim-policy.md)).
+- Optionally, one or more SIMs.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-full-5gc-deployment%2Fazuredeploy.json)
+
+## Prerequisites
+
+- [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.
+- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). If you want to provision SIMs, you'll need to prepare a JSON file containing your SIM information, as described in [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
+- Identify the names of the interfaces corresponding to ports 5 and 6 on the Azure Stack Edge Pro device in the site.
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-create-full-5gc-deployment). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-full-5gc-deployment/azuredeploy.json).
+
+The following Azure resources are defined in the template.
+
+- [**Microsoft.MobileNetwork/mobileNetworks/dataNetworks**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/datanetworks): a resource representing a data network.
+- [**Microsoft.MobileNetwork/mobileNetworks/slices**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/slices): a resource representing a network slice.
+- [**Microsoft.MobileNetwork/mobileNetworks/services**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/services): a resource representing a service.
+- [**Microsoft.MobileNetwork/mobileNetworks/simPolicies**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/simPolicies): a resource representing a SIM policy.
+- [**Microsoft.MobileNetwork/mobileNetworks/sites**](/azure/templates/microsoft.mobilenetwork/mobilenetworks/sites): a resource representing your site as a whole.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks): a resource providing configuration for the packet core instance's connection to a data network, including the IP address for the N6 interface and data subnet configuration.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes): a resource providing configuration for the user plane Network Functions of the packet core instance, including IP configuration for the N3 interface.
+- [**Microsoft.MobileNetwork/packetCoreControlPlanes**](/azure/templates/microsoft.mobilenetwork/packetcorecontrolplanes): a resource providing configuration for the control plane Network Functions of the packet core instance, including IP configuration for the N2 interface.
+- [**Microsoft.MobileNetwork/mobileNetworks**](/azure/templates/microsoft.mobilenetwork/mobilenetworks): a resource representing the private mobile network as a whole.
+- [**Microsoft.MobileNetwork/sims**](/azure/templates/microsoft.mobilenetwork/sims): a resource representing a physical SIM or eSIM.
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template.
+
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-full-5gc-deployment%2Fazuredeploy.json)
++
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
+
+
+ |Field |Value |
+ |||
+ |**Subscription** | Select the Azure subscription you want to use to create your private mobile network. |
+ |**Resource group** | Create a new resource group. |
+ |**Region** | Select **East US**. |
+ |**Location** | Leave this field unchanged. |
+ |**Mobile Network Name** | Enter a name for the private mobile network. |
+ |**Mobile Country Code** | Enter the mobile country code for the private mobile network. |
+ |**Mobile Network Code** | Enter the mobile network code for the private mobile network. |
+ |**Site Name** | Enter a name for your site. |
+ |**Service Name** | Leave this field unchanged. |
+ |**SIM Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
+ |**Sim Policy Name** | Leave this field unchanged. |
+ |**Slice Name** | Leave this field unchanged. |
+ |**Control Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ |**Control Plane Access Ip Address** | Enter the IP address for the packet core instance's N2 signaling interface. |
+ |**User Plane Access Interface Name** | Enter the name of the interface that corresponds to port 5 on your Azure Stack Edge Pro device. |
+ |**User Plane Access Interface Ip Address** | Enter the IP address for the packet core instance's N3 interface. |
+ |**Access Subnet** | Enter the network address of the access subnet in Classless Inter-Domain Routing (CIDR) notation. |
+ |**Access Gateway** | Enter the access subnet default gateway. |
+ |**User Plane Data Interface Name** | Enter the name of the interface that corresponds to port 6 on your Azure Stack Edge Pro device. |
+ |**User Plane Data Interface Ip Address** | Enter the IP address for the packet core instance's N6 interface. |
+ |**User Plane Data Interface Subnet** | Enter the network address of the data subnet in CIDR notation. |
+ |**User Plane Data Interface Gateway** | Enter the data subnet default gateway. |
+ |**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
+ |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
+ |**Core Network Technology** | Leave this field unchanged. |
+ |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
+ |**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to deploy the resources. The Azure portal will display a confirmation screen when the deployment is complete.
+
+## Review deployed resources
+
+1. On the confirmation screen, select **Go to resource group**.
+
+ :::image type="content" source="media/template-deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
+
+1. Confirm that the following resources have been created in the resource group.
+
+ - A **Mobile Network** resource representing the private mobile network as a whole.
+ - A **Slice** resource representing a network slice.
+ - A **Data Network** resource representing the data network.
+ - A **Mobile Network Site** resource representing the site as a whole.
+ - A **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site.
+ - A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site.
+ - An **Attached Data Network** resource representing the site's view of the data network.
+ - A **Service** resource representing the default service.
+ - A **SIM Policy** resource representing the default SIM policy.
+ - One or more **SIM** resources representing physical SIMs or eSIMs (if you provisioned any).
+
+ :::image type="content" source="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing the resources for a full Azure Private 5G Core deployment." lightbox="media/create-full-private-5g-core-deployment-arm-template/full-deployment-resource-group.png":::
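+
+If you prefer the command line, an optional way to confirm the same thing is to list the contents of the resource group with the Azure CLI; the output should include the resource types listed above.
+
+```azurecli
+# List every resource the template created in the resource group.
+az resource list --resource-group <your-resource-group> --output table
+```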
+
+## Clean up resources
+
+If you do not want to keep your deployment, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group).
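+
+For example, if you created the resource group solely for this quickstart, a single Azure CLI command removes it and everything in it (substitute your own resource group name):
+
+```azurecli
+# Deletes the resource group and all resources it contains; --yes skips the confirmation prompt.
+az group delete --name <your-resource-group> --yes --no-wait
+```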
+
+## Next steps
+
+If you have kept your deployment, you can either begin designing policy control to determine how your private mobile network will handle traffic, or you can add more sites to your private mobile network.
+
+- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)
+- [Collect the required information for a site](collect-required-information-for-a-site.md)
private-5g-core Enable Log Analytics For Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-log-analytics-for-private-5g-core.md
In this step, you'll run a query in the Log Analytics workspace to confirm that
## Next steps - [Learn more about monitoring Azure Private 5G Core using Log Analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md)
- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
Log Analytics dashboards can visualize all of your saved log queries, giving you
You can find information on how to create a Log Analytics dashboard in [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md).
+You can also follow the steps in [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md) to create an example overview dashboard. This dashboard includes charts to monitor important Key Performance Indicators (KPIs) for your private mobile network's operation, including throughput and the number of connected devices.
+ ## Estimate costs Log Analytics will ingest an average of 1.4 GB of data a day for each log streamed to it by a single packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core. ## Next steps - [Enable Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)
+- [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md)
- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
When you first come to design the policy control configuration for your own priv
1. [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md) 1. [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md)
+You can also use the example Azure Resource Manager template (ARM template) in [Configure a service and SIM policy using an ARM template](configure-service-sim-policy-arm-template.md) to quickly create a SIM policy with a single associated service.
+ ## Next steps - [Learn how to create an example set of policy control configuration](tutorial-create-example-set-of-policy-control-configuration.md)
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
+
+ Title: Provision SIMs - ARM template
+
+description: This how-to guide shows how to provision SIMs using an Azure Resource Manager (ARM) template.
++++ Last updated : 03/21/2022+++
+# Provision SIMs for Azure Private 5G Core Preview - ARM template
+
+*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, you'll learn how to provision new SIMs for an existing private mobile network using an Azure Resource Manager template (ARM template).
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-sims%2Fazuredeploy.json)
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
+- Identify the name of the Mobile Network resource corresponding to your private mobile network and the resource group containing it.
+
+## Collect the required information for your SIMs
+
+To begin, collect the values in the following table for each SIM you want to provision.
+
+| Value |Parameter name |
+|--|--|
+| SIM name. The SIM name must only contain alphanumeric characters, dashes, and underscores. | `simName` |
+| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is a unique numerical value, 19 or 20 digits in length, beginning with 89. | `integratedCircuitCardIdentifier` |
+| The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. | `internationalMobileSubscriberIdentity` |
+| The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | `authenticationKey` |
+| The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | `operatorKeyCode` |
+| The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | `deviceType` |
+
+## Prepare an array for your SIMs
+
+Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create an array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs.
+
+```json
+[
+ {
+ "simName": "SIM1",
+ "integratedCircuitCardIdentifier": "8912345678901234566",
+ "internationalMobileSubscriberIdentity": "001019990010001",
+ "authenticationKey": "00112233445566778899AABBCCDDEEFF",
+ "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
+ "deviceType": "Cellphone"
+ },
+ {
+ "simName": "SIM2",
+ "simProfileName": "profile2",
+ "integratedCircuitCardIdentifier": "8922345678901234567",
+ "internationalMobileSubscriberIdentity": "001019990010002",
+ "authenticationKey": "11112233445566778899AABBCCDDEEFF",
+ "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
+ "deviceType": "Sensor"
+ }
+]
+```
+
+## Review the template
+
+<!--
+Need to confirm whether the following link is correct.
+-->
+
+The template used in this how-to guide is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/mobilenetwork-provision-sims).
++
+The template defines one or more [**Microsoft.MobileNetwork/sims**](/azure/templates/microsoft.mobilenetwork/sims) resources, each of which represents a physical SIM or eSIM.
+
+## Deploy the template
+
+1. Select the following link to sign in to Azure and open a template.
+
+ [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-provision-sims%2Fazuredeploy.json)
+
+1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites). <!-- We should also add a screenshot of a filled out set of parameters. -->
+
+ - **Subscription:** select the Azure subscription you used to create your private mobile network.
+ - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
+ - **Region:** select **East US**.
+ - **Location:** leave this field unchanged.
+ - **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network.
+ - **SIM resources:** paste in the array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
+
+ :::image type="content" source="media/provision-sims-arm-template/sims-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the SIMs ARM template.":::
+
+1. Select **Review + create**.
+1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+
+1. Once your configuration has been validated, you can select **Create** to provision your SIMs. The Azure portal will display a confirmation screen when the SIMs have been provisioned.
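+
+As an alternative to the portal form, you can deploy the same template with the Azure CLI and read the SIM array from a local file. The sketch below assumes the array from [Prepare an array for your SIMs](#prepare-an-array-for-your-sims) is saved as `sims.json` (an illustrative file name) and that the template parameters are named `existingMobileNetworkName` and `simResources` to match the portal fields; confirm the exact parameter names in the template's azuredeploy.json before using it.
+
+```azurecli
+# Sketch only - sims.json holds the SIM array; its contents are passed as the simResources parameter.
+az deployment group create \
+  --resource-group <mobile-network-resource-group> \
+  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-provision-sims/azuredeploy.json" \
+  --parameters existingMobileNetworkName=<mobile-network-name> \
+               simResources="$(cat sims.json)"
+```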
++
+## Review deployed resources
+
+1. Select **Go to resource group**.
+
+ :::image type="content" source="media/template-deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
+
+1. Confirm that your SIMs have been created in the resource group.
+
+ :::image type="content" source="media/provision-sims-arm-template/sims-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing newly provisioned SIMs.":::
+
+## Next steps
+
+You'll need to assign a SIM policy to your SIMs to bring them into service.
+<!-- we may want to update the template to include SIM policies, or update the link below to reference the ARM template procedure rather than the portal -->
+
+- [Configure a SIM policy for Azure Private 5G Core Preview - Azure portal](configure-sim-policy-azure-portal.md)
+- [Assign a SIM policy to a SIM](provision-sims-azure-portal.md#assign-sim-policies)
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
# Provision SIMs for Azure Private 5G Core Preview - Azure portal
-*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network. You can also choose to assign static IP addresses and a SIM policy to the SIMs you provision.
+*SIM* resources represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network. You can also choose to assign static IP addresses and a SIM policy to the SIMs you provision.
## Prerequisites
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
+
+ Title: 'Quickstart: Create a private endpoint using Bicep'
+description: In this quickstart, you'll learn how to create a private endpoint using Bicep.
+++++ Last updated : 05/02/2022+
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint using Bicep.
++
+# Quickstart: Create a private endpoint using Bicep
+
+In this quickstart, you'll use Bicep to create a private endpoint.
++
+You can also create a private endpoint by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), the [Azure CLI](create-private-endpoint-cli.md), or an [Azure Resource Manager Template](create-private-endpoint-template.md).
+
+## Prerequisites
+
+You need an Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a private endpoint for an instance of Azure SQL Database.
+
+The Bicep file that this quickstart uses is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/private-endpoint-sql/).
++
+The Bicep file defines multiple Azure resources:
+
+- [**Microsoft.Sql/servers**](/azure/templates/microsoft.sql/servers): The instance of SQL Database with the sample database.
+- [**Microsoft.Sql/servers/databases**](/azure/templates/microsoft.sql/servers/databases): The sample database.
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): The virtual network where the private endpoint is deployed.
+- [**Microsoft.Network/privateEndpoints**](/azure/templates/microsoft.network/privateendpoints): The private endpoint that you use to access the instance of SQL Database.
+- [**Microsoft.Network/privateDnsZones**](/azure/templates/microsoft.network/privatednszones): The zone that you use to resolve the private endpoint IP address.
+- [**Microsoft.Network/privateDnsZones/virtualNetworkLinks**](/azure/templates/microsoft.network/privatednszones/virtualnetworklinks)
+- [**Microsoft.Network/privateEndpoints/privateDnsZoneGroups**](/azure/templates/microsoft.network/privateendpoints/privateDnsZoneGroups): The zone group that you use to associate the private endpoint with a private DNS zone.
+- [**Microsoft.Network/publicIpAddresses**](/azure/templates/microsoft.network/publicIpAddresses): The public IP address that you use to access the virtual machine.
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces): The network interface for the virtual machine.
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): The virtual machine that you use to test the connection of the private endpoint to the instance of SQL Database.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters sqlAdministratorLogin=<admin-login> vmAdminUsername=<vm-login>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -sqlAdministratorLogin "<admin-login>" -vmAdminUsername "<vm-login>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-login\>** with the username for the SQL logical server. Replace **\<vm-login\>** with the username for the virtual machine. You'll be prompted to enter **sqlAdministratorLoginPassword**. You'll also be prompted to enter **vmAdminPassword**, which must be at least 12 characters long and contain at least one lowercase and uppercase character and one special character.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+> [!NOTE]
+> The Bicep file generates a unique name for the virtual machine myVm<b>{uniqueid}</b> resource, and for the SQL Database sqlserver<b>{uniqueid}</b> resource. Substitute your generated value for **{uniqueid}**.
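+
+If you'd rather not hunt for the generated names in the portal, you can list them with the Azure CLI. This optional check assumes you deployed into the **exampleRG** resource group used earlier.
+
+```azurecli
+# Show the generated myVm{uniqueid} and sqlserver{uniqueid} resource names.
+az vm list --resource-group exampleRG --query "[].name" --output tsv
+az sql server list --resource-group exampleRG --query "[].name" --output tsv
+```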
+
+### Connect to a VM from the internet
+
+Connect to the VM _myVm{uniqueid}_ from the internet by doing the following:
+
+1. In the Azure portal search bar, enter _myVm{uniqueid}_.
+
+1. Select **Connect**. **Connect to virtual machine** opens.
+
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (RDP) file and downloads it to your computer.
+
+1. Open the downloaded RDP file.
+
+ a. If you're prompted, select **Connect**.
+ b. Enter the username and password that you specified when you created the VM.
+
+ > [!NOTE]
+ > You might need to select **More choices** > **Use a different account** to specify the credentials you entered when you created the VM.
+
+1. Select **OK**.
+
+ You might receive a certificate warning during the sign-in process. If you do, select **Yes** or **Continue**.
+
+1. After the VM desktop appears, minimize it to go back to your local desktop.
+
+### Access the SQL Database server privately from the VM
+
+To connect to the SQL Database server from the VM by using the private endpoint, do the following:
+
+1. On the Remote Desktop of _myVM{uniqueid}_, open PowerShell.
+1. Run the following command:
+
+    `nslookup sqlserver{uniqueid}.database.windows.net`
+
+ You'll receive a message that's similar to this one:
+
+ ```
+ Server: UnKnown
+ Address: 168.63.129.16
+ Non-authoritative answer:
+ Name: sqlserver.privatelink.database.windows.net
+ Address: 10.0.0.5
+ Aliases: sqlserver.database.windows.net
+ ```
+
+1. Install SQL Server Management Studio.
+
+1. On the **Connect to server** pane, do the following:
+ - For **Server type**, select **Database Engine**.
+ - For **Server name**, select **sqlserver{uniqueid}.database.windows.net**.
+ - For **Username**, enter the username that was provided earlier.
+ - For **Password**, enter the password that was provided earlier.
+    - For **Remember password**, select **Yes**.
+
+1. Select **Connect**.
+1. On the left pane, select **Databases**. Optionally, you can create or query information from _sample-db_.
+1. Close the Remote Desktop connection to _myVm{uniqueid}_.
+
+## Clean up resources
+
+When you no longer need the resources that you created with the private endpoint, delete the resource group. This removes the private endpoint and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For more information about the services that support private endpoints, see:
+
+> [!div class="nextstepaction"]
+> [What is Azure Private Link?](private-link-overview.md#availability)
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
This article outlines how to register a Power BI tenant in a cross-tenant scenar
|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** | |||||||
-|Scenario 1 |Allowed |Allowed |Azure runtime |Delegated Authentication | [Deployment checklist](#deployment-checklist) |
-|Scenario 2 |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Deployment checklist](#deployment-checklist) |
+|Public access with Azure IR |Allowed |Allowed |Azure runtime |Delegated Authentication | [Deployment checklist](#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Deployment checklist](#deployment-checklist) |
### Known limitations
Before you start, make sure you have the following prerequisites:
## Deployment checklist Use any of the following deployment checklists during the setup or for troubleshooting purposes, based on your scenario:
-# [Scenario 1](#tab/Scenario1)
+# [Public access with Azure IR](#tab/Scenario1)
### Scan cross-tenant Power BI using Azure IR and Delegated Authentication in public network
Use any of the following deployment checklists during the setup or for troublesh
2. **Implicit grant and hybrid flows**, **ID tokens (used for implicit and hybrid flows)** is selected. 3. **Allow public client flows** is enabled.
-# [Scenario 2](#tab/Scenario2)
+# [Public access with Self-hosted IR](#tab/Scenario2)
### Scan cross-tenant Power BI using self-hosted IR and Delegated Authentication in public network 1. Make sure Power BI and Microsoft Purview accounts are in cross-tenant.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant in a **same-tenant scena
|**Scenarios** |**Microsoft Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Runtime option** | **Authentication option** | **Deployment checklist** | |||||||
-|Scenario 1 |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) |
-|Scenario 2 |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Scenario 3 |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Scenario 4 |Denied |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
-|Scenario 5 |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Public access with Azure IR |Allowed |Allowed |Azure Runtime | Microsoft Purview Managed Identity | [Review deployment checklist](#deployment-checklist) |
+|Public access with Self-hosted IR |Allowed |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Private access |Allowed |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Allowed |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
+|Private access |Denied |Denied |Self-hosted runtime |Delegated Authentication | [Review deployment checklist](#deployment-checklist) |
### Known limitations
Before you start, make sure you have the following prerequisites:
## Deployment checklist Use any of the following deployment checklists during the setup or for troubleshooting purposes, based on your scenario:
-# [Scenario 1](#tab/Scenario1)
-
-### Scenario 1 - Scan same-tenant Power BI using Azure IR and Managed Identity in public network
+# [Public access with Azure IR](#tab/Scenario1)
+### Scan same-tenant Power BI using Azure IR and Managed Identity in public network
1. Make sure Power BI and Microsoft Purview accounts are in the same tenant. 2. Make sure Power BI tenant Id is entered correctly during the registration.
Use any of the following deployment checklists during the setup or for troublesh
6. From Azure Active Directory tenant, make sure [Microsoft Purview account MSI is member of the new security group](#authenticate-to-power-bi-tenant-managed-identity-only). 7. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
-# [Scenario 2](#tab/Scenario2)
-### Scenario 2 - Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in public network
+# [Public access with Self-hosted IR](#tab/Scenario2)
+### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in public network
1. Make sure Power BI and Microsoft Purview accounts are in the same tenant. 2. Make sure Power BI tenant Id is entered correctly during the registration.
Use any of the following deployment checklists during the setup or for troublesh
3. Network connectivity from Self-hosted runtime to Microsoft services is enabled. 4. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.
-# [Scenario 3, 4 and 5](#tab/Scenario3)
-### Scenario 3, 4 and 5 - Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in a private network
+# [Private access](#tab/Scenario3)
+### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in a private network
1. Make sure Power BI and Microsoft Purview accounts are in the same tenant. 2. Make sure Power BI tenant Id is entered correctly during the registration.
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
Previously updated : 02/28/2022 Last updated : 05/03/2022 # Index data from Azure SQL
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer. ++ If you're using the [Azure portal](https://portal.azure.com/) to create the data source, make sure that access to all public networks is enabled in the Azure SQL firewall while going through the instructions below. Otherwise, you need to enable access to all public networks during this setup and then disable it again, or instead, you must use REST API from a device with an authorized IP in the firewall rules, to perform these operations. If the Azure SQL firewall has public networks access disabled, there will be errors when connecting from the portal to it.+ <!-- Real-time data synchronization must not be an application requirement. An indexer can reindex your table at most every five minutes. If your data changes frequently, and those changes need to be reflected in the index within seconds or single minutes, we recommend using the [REST API](/rest/api/searchservice/AddUpdate-or-Delete-Documents) or [.NET SDK](search-get-started-dotnet.md) to push updated rows directly. Incremental indexing is possible. If you have a large data set and plan to run the indexer on a schedule, Azure Cognitive Search must be able to efficiently identify new, changed, or deleted rows. Non-incremental indexing is only allowed if you're indexing on demand (not on schedule), or indexing fewer than 100,000 rows. For more information, see [Capturing Changed and Deleted Rows](#CaptureChangedRows) below. -->
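+
+If you need to temporarily allow public network access while you work through the portal-based steps (as described in the prerequisite above) and then lock the server down again afterwards, you can script that with the Azure CLI. This is a sketch only; substitute your own server and resource group names.
+
+```azurecli
+# Temporarily allow public network access while you create the data source in the portal.
+az sql server update --name <your-sql-server> --resource-group <your-resource-group> --enable-public-network true
+
+# ... create the data source, index, and indexer in the portal ...
+
+# Disable public network access again when you're done.
+az sql server update --name <your-sql-server> --resource-group <your-resource-group> --enable-public-network false
+```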
It's not recommended. Only **rowversion** allows for reliable data synchronizati
+ You can ensure that when the indexer runs, there are no outstanding transactions on the table that's being indexed (for example, all table updates happen as a batch on a schedule, and the Azure Cognitive Search indexer schedule is set to avoid overlapping with the table update schedule).
-+ You periodically do a full reindex to pick up any missed rows.
++ You periodically do a full reindex to pick up any missed rows.
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
api-key: [Search service admin key]
} ```
+## Limitations
+
+These are the limitations of this feature:
+++ Custom queries are not supported.+++ In this feature, the column name `_ts` is a reserved word. If there is a column called `_ts` in the Mongo database, the indexer will fail. If this is the case, it is recommended an alternate method to index is used, such as [Push API](search-what-is-data-import.md) or through [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) by selecting an Azure Cognitive Search index sink.++ ## Next steps You can now control how you [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Cosmos DB:
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Previously updated : 10/06/2021 Last updated : 05/03/2022 # Import data wizard in Azure Cognitive Search
The wizard is not without limitations. Constraints are summarized as follows:
+ AI enrichment, as exposed in the portal, is limited to a subset of built-in skills.
-+ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you will need to create the knowledge store through REST or the SDKs.
++ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you will need to create the knowledge store through REST API or the SDKs.+++ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled, you must disable it, run the Import Data wizard and then enable it after wizard setup is completed. If this is not an option, you can create Azure Cognitive Search data source, indexer, skillset and index through REST API or the SDKs. ## Workflow
Internally, the wizard also sets up the following, which is not visible in the i
The best way to understand the benefits and limitations of the wizard is to step through it. The following quickstart will guide you through each step. > [!div class="nextstepaction"]
-> [Quickstart: Create a search index using the Azure portal](search-get-started-portal.md)
+> [Quickstart: Create a search index using the Azure portal](search-get-started-portal.md)
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
+
+ Title: Connect to Azure SQL Managed Instance using managed identity
+
+description: Learn how to set up an Azure Cognitive Search indexer connection to an Azure SQL Managed Instance using a managed identity
+++++++ Last updated : 05/03/2022++
+# Set up an indexer connection to Azure SQL Managed Instance using a managed identity
+
+This article describes how to set up an Azure Cognitive Search indexer connection to [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) using a managed identity instead of providing credentials in the connection string.
+
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in SQL Managed Instance.
+
+Before learning more about this feature, it is recommended that you have an understanding of what an indexer is and how to set up an indexer for your data source. More information can be found at the following links:
+
+* [Indexer overview](search-indexer-overview.md)
+* [SQL Managed Instance indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+
+## Prerequisites
+
+* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
+
+* Azure AD admin role on SQL Managed Instance:
+
+ To assign read permissions on SQL Managed Instance, you must be an Azure Global Admin with a SQL Managed Instance. See [Configure and manage Azure AD authentication with SQL Managed Instance](/azure/azure-sql/managed-instance/authentication-aad-configure) and follow the steps to provision an Azure AD admin (SQL Managed Instance).
+
+* [Configure public endpoint and NSG in SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md) to allow connections from Azure Cognitive Search.
+
+## 1 - Assign permissions to read the database
+
+Follow these steps to assign the search service's system-assigned managed identity permission to read the SQL Managed Instance database.
+
+1. Connect to your SQL Managed Instance through SQL Server Management Studio (SSMS) by using one of the following methods:
+
+ - [Configure a point-to-site connection from on-premises](/azure/azure-sql/managed-instance/point-to-site-p2s-configure)
+ - [Configure an Azure VM](/azure/azure-sql/managed-instance/connect-vm-instance-configure)
+
+1. Authenticate with your Azure AD account.
+
+ :::image type="content" source="./media/search-index-azure-sql-managed-instance-with-managed-identity/sql-login.png" alt-text="Showing screenshot of the Connect to Server dialog.":::
+
+3. From the left pane, locate the SQL database you'll be using as a data source for indexing and right-click it. Select **New Query**.
+
+ :::image type="content" source="./media/search-index-azure-sql-managed-instance-with-managed-identity/new-sql-query.png" alt-text="Showing screenshot of new SQL query.":::
++
+4. In the T-SQL window, copy the following commands, keeping the brackets around your search service name, and then select **Execute**.
+
+
+ ```tsql
+ CREATE USER [insert your search service name here or user-assigned managed identity name] FROM EXTERNAL PROVIDER;
+ EXEC sp_addrolemember 'db_datareader', [insert your search service name here or user-assigned managed identity name];
+ ```
+
+ :::image type="content" source="./media/search-index-azure-sql-managed-instance-with-managed-identity/execute-sql-query.png" alt-text="Showing screenshot of how to execute SQL query.":::
+
+If you later change the search service system identity after assigning permissions, you must remove the role membership and remove the user in the SQL database, then repeat the permission assignment. Removing the role membership and user can be accomplished by running the following commands:
+
+```tsql
+sp_droprolemember 'db_datareader', [insert your search service name or user-assigned managed identity name];
+
+DROP USER IF EXISTS [insert your search service name or user-assigned managed identity name];
+```
+
+## 2 - Add a role assignment
+
+In this step you will give your Azure Cognitive Search service permission to read data from your SQL Managed Instance.
+
+1. In the Azure portal navigate to your SQL Managed Instance page.
+1. Select **Access control (IAM)**.
+1. Select **Add** then **Add role assignment**.
+
+ :::image type="content" source="./media/search-index-azure-sql-managed-instance-with-managed-identity/access-control-add-role-assignment.png" alt-text="Showing screenshot of the Access Control page." lightbox="media/search-index-azure-sql-managed-instance-with-managed-identity/access-control-add-role-assignment.png":::
++
+4. Select **Reader** role.
+1. Leave **Assign access to** as **Azure AD user, group or service principal**.
+1. If you're using a system-assigned managed identity, search for your search service, then select it. If you're using a user-assigned managed identity, search for the name of the user-assigned managed identity, then select it. Select **Save**.
+
+ Example for SQL Managed Instance using a system-assigned managed identity:
+
+ :::image type="content" source="./media/search-index-azure-sql-managed-instance-with-managed-identity/add-role-assignment.png" alt-text="Showing screenshot of the member role assignment.":::
+
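+If you prefer scripting, the same role assignment can also be made with the Azure CLI. The following is a minimal sketch rather than part of the portal steps above; the principal ID and resource names are placeholders that you replace with your own values. You can find the search service's principal ID on its **Identity** page in the portal.
+
+```azurecli
+# Placeholder values; replace with your own.
+az role assignment create \
+  --assignee "<search-service-principal-id>" \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/managedInstances/<sql-mi-name>"
+```
+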
+## 3 - Create the data source
+
+Create the data source and provide a system-assigned managed identity.
+
+### System-assigned managed identity
+
+The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection) support system-assigned managed identity.
+
+When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide an Initial Catalog or Database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of SQL Managed Instance, the resource group of SQL Managed instance, and the name of the SQL database.
+
+Here's an example of how to create a data source to index data from SQL Managed Instance using the [Create Data Source](/rest/api/searchservice/create-data-source) REST API and a managed identity connection string. The managed identity connection string format is the same for the REST API, .NET SDK, and the Azure portal.
+
+```http
+POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "sql-mi-datasource",
+ "type" : "azuresql",
+ "credentials" : {
+ "connectionString" : "Database=[SQL database name];ResourceId=/subscriptions/[subscription ID]/resourcegroups/[resource group name]/providers/Microsoft.Sql/managedInstances/[SQL Managed Instance name];Connection Timeout=100;"
+ },
+ "container" : {
+ "name" : "my-table"
+ }
+}
+```
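+
+If you're using a user-assigned managed identity (preview), the data source also needs an `identity` property that references that identity, and you must use a preview API version. The following is a hedged sketch of that variation; treat the `@odata.type` value, the API version, and the placeholder names as assumptions to verify against the current preview documentation.
+
+```http
+POST https://[service name].search.windows.net/datasources?api-version=2021-04-30-Preview
+Content-Type: application/json
+api-key: [admin key]
+
+{
+    "name" : "sql-mi-datasource",
+    "type" : "azuresql",
+    "credentials" : {
+        "connectionString" : "Database=[SQL database name];ResourceId=/subscriptions/[subscription ID]/resourcegroups/[resource group name]/providers/Microsoft.Sql/managedInstances/[SQL Managed Instance name];Connection Timeout=100;"
+    },
+    "identity" : {
+        "@odata.type" : "#Microsoft.Azure.Search.DataUserAssignedIdentity",
+        "userAssignedIdentity" : "/subscriptions/[subscription ID]/resourcegroups/[resource group name]/providers/Microsoft.ManagedIdentity/userAssignedIdentities/[managed identity name]"
+    },
+    "container" : {
+        "name" : "my-table"
+    }
+}
+```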
+
+## 4 - Create the index
+
+The index specifies the fields in a document, attributes, and other constructs that shape the search experience.
+
+Here's a [Create Index](/rest/api/searchservice/create-index) REST API call with a searchable `booktitle` field:
+
+```http
+POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "my-target-index",
+ "fields": [
+ { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "booktitle", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
+ ]
+}
+```
+
+## 5 - Create the indexer
+
+An indexer connects a data source with a target search index, and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer.
+
+Here's a [Create Indexer](/rest/api/searchservice/create-indexer) REST API call with an Azure SQL indexer definition. The indexer will run when you submit the request.
+
+```http
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "sql-mi-indexer",
+ "dataSourceName" : "sql-mi-datasource",
+ "targetIndexName" : "my-target-index"
+}
+```
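+
+The example above runs the indexer once, when the request is submitted. To automate the data refresh mentioned earlier, you can add an optional `schedule` to the indexer definition. The two-hour interval below is only an example value for this sketch:
+
+```http
+PUT https://[service name].search.windows.net/indexers/sql-mi-indexer?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+    "name" : "sql-mi-indexer",
+    "dataSourceName" : "sql-mi-datasource",
+    "targetIndexName" : "my-target-index",
+    "schedule" : { "interval" : "PT2H" }
+}
+```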
+
+## Troubleshooting
+
+If the indexer fails to connect to the data source with an error saying that the client isn't allowed to access the server, see [common indexer errors](./search-indexer-troubleshooting.md).
+
+You can also rule out any firewall issues by trying the connection with and without restrictions in place.
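+
+To check whether the indexer ran and to review any connection errors from recent runs, you can call the [Get Indexer Status](/rest/api/searchservice/get-indexer-status) REST API. For example:
+
+```http
+GET https://[service name].search.windows.net/indexers/sql-mi-indexer/status?api-version=2020-06-30
+api-key: [admin key]
+```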
+
+## See also
+
+* [SQL Managed Instance and Azure SQL Database indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Previously updated : 02/17/2022 Last updated : 05/03/2022 # Make outbound connections through a private endpoint
To create a shared private link, use the Azure portal or the [Create Or Update S
+ If you're connecting to a preview data source, such as Azure Database for MySQL or Azure Functions, use a preview version of the Management REST API to create the shared private link. Preview versions that support a shared private link include `2020-08-01-preview` or `2021-04-01-preview`.
+
++ If you're using the [Azure portal](https://portal.azure.com/), make sure that access to all public networks is enabled in the data source's firewall while you follow the instructions below. Otherwise, you must enable public network access for this setup and then disable it again, or you must perform these operations through the REST API from a device whose IP address is authorized in the firewall rules. If public network access is disabled on the supported data source, connections from the portal to it will fail.
+
+> [!NOTE]
+> When using Private Link for data sources, the [Import data](search-import-data-portal.md) wizard isn't supported.
+ <a name="group-ids"></a> ## Supported resources and group IDs
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-language-support.md
Previously updated : 04/21/2022 Last updated : 05/03/2022 # Create an index for multiple languages in Azure Cognitive Search
A multilingual search application supports searching over and retrieving results
+ On the query request, set the `searchFields` parameter to scope full text search to specific fields, and then use `select` to return just those fields that have compatible content.
-The success of this technique hinges on the integrity of field content. By itself, Azure Cognitive Search does not translate strings or perform language detection as part of query execution. It's up to you to make sure that fields contain the strings you expect.
+The success of this technique hinges on the integrity of field content. By itself, Azure Cognitive Search doesn't translate strings or perform language detection as part of query execution. It's up to you to make sure that fields contain the strings you expect.
+
+## Need text translation?
+
+This article assumes you have translated strings in place. If that's not the case, you can attach Cognitive Services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during data ingestion. Text translation takes a dependency on the indexer feature and Cognitive Services, but all setup is done within Azure Cognitive Search.
+
+To add text translation, follow these steps:
+
+1. Verify your content is in a [supported data source](search-indexer-overview.md#supported-data-sources).
+
+1. [Create a data source](search-howto-create-indexers.md#prepare-external-data) that points to your content.
+
+1. [Create a skillset](cognitive-search-defining-skillset.md) that includes the [Text Translation skill](cognitive-search-skill-text-translation.md).
+
+   The Text Translation skill takes a single string as input. If you have multiple fields, you can create a skillset that calls Text Translation multiple times, once for each field. Alternatively, you can use the [Text Merger skill](cognitive-search-skill-textmerger.md) to consolidate the content of multiple fields into one long string. A minimal skillset sketch is shown after these steps.
+
+1. Create an index that includes fields for translated strings. Most of this article covers index design and field definitions for indexing and querying multi-language content.
+
+1. [Attach a multi-region Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to your skillset.
+
+1. [Create and run the indexer](search-howto-create-indexers.md), and then apply the guidance in this article to query just the fields of interest.
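+
+For illustration, here's a minimal skillset sketch that calls the Text Translation skill on a single `description` field and writes the translated text to a `description_fr` node in the enriched document (which you'd then map to an index field with an output field mapping on the indexer). The skillset name, source field, and target language are assumptions for this example; adjust them to your own schema.
+
+```http
+POST https://[service name].search.windows.net/skillsets?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+    "name": "translation-skillset",
+    "skills": [
+        {
+            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
+            "context": "/document",
+            "defaultToLanguageCode": "fr",
+            "inputs": [
+                { "name": "text", "source": "/document/description" }
+            ],
+            "outputs": [
+                { "name": "translatedText", "targetName": "description_fr" }
+            ]
+        }
+    ],
+    "cognitiveServices": {
+        "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
+        "description": "attached multi-region Cognitive Services resource",
+        "key": "<cognitive-services-key>"
+    }
+}
+```
+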
> [!TIP]
-> If text translation is a requirement, you can [create a skillset](cognitive-search-defining-skillset.md) that adds [text translation](cognitive-search-skill-text-translation.md) to the indexing pipeline. This approach requires [using an indexer](search-howto-create-indexers.md) and [attaching a Cognitive Services resource](cognitive-search-attach-cognitive-services.md).
->
-> Text translation is built into the [Import data wizard](cognitive-search-quickstart-blob.md). If you have a [supported data source](search-indexer-overview.md#supported-data-sources) with text you'd like to translate, you can step through the wizard to try out the language detection and translation functionality.
+> Text translation is built into the [Import data wizard](cognitive-search-quickstart-blob.md). If you have a [supported data source](search-indexer-overview.md#supported-data-sources) with text you'd like to translate, you can step through the wizard to try out the language detection and translation functionality before writing any code.
## Define fields for content in different languages
The "analyzer" property on a field definition is used to set the [language analy
## Build and load an index
-An intermediate (and perhaps obvious) step is that you have to [build and populate the index](search-get-started-dotnet.md) before formulating a query. We mention this step here for completeness. One way to determine index availability is by checking the indexes list in the [portal](https://portal.azure.com).
+An intermediate step is [building and populating the index](search-get-started-dotnet.md) before formulating a query. We mention this step here for completeness. One way to determine index availability is by checking the indexes list in the [portal](https://portal.azure.com).
## Constrain the query and trim results
Parameters on the query are used to limit search to specific fields and then tri
Given a goal of constraining search to fields containing French strings, you would use **searchFields** to target the query at fields containing strings in that language.
-Specifying the analyzer on a query request is not necessary. A language analyzer on the field definition will always be used during query processing. For queries that specify multiple fields invoking different language analyzers, the terms or phrases will be processed independently by the assigned analyzers for each field.
+Specifying the analyzer on a query request isn't necessary. A language analyzer on the field definition will always be used during query processing. For queries that specify multiple fields invoking different language analyzers, the terms or phrases will be processed independently by the assigned analyzers for each field.
By default, a search returns all fields that are marked as retrievable. As such, you might want to exclude fields that don't conform to the language-specific search experience you want to provide. Specifically, if you limited search to a field with French strings, you probably want to exclude fields with English strings from your results. Using the **$select** query parameter gives you control over which fields are returned to the calling application.
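
For example, a query scoped to French content might look like the following REST call. The index name and the `description_fr` field are placeholders for this sketch; substitute the fields you defined for your language-specific content.

```http
POST https://[service name].search.windows.net/indexes/hotels-index-fr/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: [query key]

{
    "search": "hôtel au bord de la mer",
    "searchFields": "description_fr",
    "select": "id, description_fr"
}
```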
private static void RunQueries(SearchClient srchclient)
## Boost language-specific fields
-Sometimes the language of the agent issuing a query is not known, in which case the query can be issued against all fields simultaneously. IA preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the description in English will be scored higher relative to matches in other languages:
+Sometimes the language of the agent issuing a query isn't known, in which case the query can be issued against all fields simultaneously. A preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the description in English will be scored higher relative to matches in other languages:
```JSON "scoringProfiles": [
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are generally available for Customer Lockbox:
- Azure Monitor - Azure Storage - Azure SQL Database
+- Azure SQL Managed Instance
- Azure subscription transfers - Azure Synapse Analytics - Virtual machines in Azure (covering remote desktop access, access to memory dumps, and managed disks)
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Rapid7](../../sentinel/sentinel-solutions-catalog.md#rapid7) | Public Preview | Not Available | | - [RSA SecurID](../../sentinel/sentinel-solutions-catalog.md#rsa) | Public Preview | Not Available | | - [Salesforce Service Cloud](../../sentinel/data-connectors-reference.md#salesforce-service-cloud-preview) | Public Preview | Not Available |
-| - [SAP (Continuous Threat Monitoring for SAP)](../../sentinel/sap-deploy-solution.md) | Public Preview | Not Available |
+| - [SAP (Continuous Threat Monitoring for SAP)](../../sentinel/sap/deployment-overview.md) | Public Preview | Not Available |
| - [Semperis](../../sentinel/sentinel-solutions-catalog.md#semperis) | Public Preview | Not Available | | - [Senserva Pro](../../sentinel/sentinel-solutions-catalog.md#senserva-pro) | Public Preview | Not Available | | - [Slack Audit](../../sentinel/sentinel-solutions-catalog.md#slack) | Public Preview | Not Available |
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-features-resources.md
Microsoft has invested in Azure native security capabilities that organizations can leverage to defeat ransomware attack techniques found in both high-volume, everyday attacks, and sophisticated targeted attacks. Key capabilities include:-- **Native Threat Detection**: Microsoft Defender for Cloud provides high-0quality threat detection and response capabilities, also called Extended Detection and Response (XDR). This helps you:
+- **Native Threat Detection**: Microsoft Defender for Cloud provides high-quality threat detection and response capabilities, also called Extended Detection and Response (XDR). This helps you:
- Avoid wasting time and talent of scarce security resources to build custom alerts using raw activity logs. - Ensure effective security monitoring, which often enables security teams to rapidly approve use of Azure services. - **Passwordless and Multi-factor authentication**: Azure Active Directory MFA, Azure AD Authenticator App, and Windows Hello provide these capabilities. This helps protect accounts against commonly seen password attacks (which account for 99.9% of the volume of identity attacks we see in Azure AD). While no security is perfect, eliminating password-only attack vectors dramatically lowers the ransomware attack risk to Azure resources.
Other articles in this series:
- [Ransomware protection in Azure](ransomware-protection.md) - [Prepare for a ransomware attack](ransomware-prepare.md)-- [Detect and respond to ransomware attack](ransomware-detect-respond.md)
+- [Detect and respond to ransomware attack](ransomware-detect-respond.md)
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | Only available after installing the [Continuous Threat Monitoring for SAP solution](sentinel-solutions-catalog.md#sap)|
-| **Log Analytics table(s)** | See [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md) |
-| **Vendor documentation/<br>installation instructions** | [Deploy SAP continuous threat monitoring](sap-deploy-solution.md) |
+| **Log Analytics table(s)** | See [Microsoft Sentinel SAP solution data reference](sap/sap-solution-log-reference.md) |
+| **Vendor documentation/<br>installation instructions** | [Deploy SAP continuous threat monitoring](sap/deployment-overview.md) |
| **Supported by** | Microsoft |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | |
-| **Data ingestion method** | [**Syslog**](connect-syslog.md), with, [ASIM parsers](normalization-about-parsers.md) based on Kusto functons |
+| **Data ingestion method** | [**Syslog**](connect-syslog.md), with [ASIM parsers](normalization-about-parsers.md) based on Kusto functions |
| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
To use this playbook, create a watchlist that maps between the sensor names and
Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming.
-This playbook opens a ticket in SerivceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
+This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
## Next steps
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
- Title: Deploy SAP continuous threat monitoring | Microsoft Docs
-description: Learn how to deploy the Microsoft Sentinel solution for SAP environments.
---- Previously updated : 11/09/2021--
-# Deploy SAP continuous threat monitoring (preview)
--
-This article takes you step by step through the process of deploying Microsoft Sentinel continuous threat monitoring for SAP.
-
-> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
-## Overview
-
-[Microsoft Sentinel solutions](sentinel-solutions.md) include bundled security content, such as threat detections, workbooks, and watchlists. With these solutions, you can onboard Microsoft Sentinel security content for a specific data connector by using a single process.
-
-By using the Microsoft Sentinel SAP data connector, you can monitor SAP systems for sophisticated threats within the business and application layers.
-
-The SAP data connector streams 14 application logs from the entire SAP system landscape. The data connector collects logs from Advanced Business Application Programming (ABAP) via NetWeaver RFC calls and from file storage data via OSSAP Control interface. The SAP data connector adds to the ability of Microsoft Sentinel to monitor the SAP underlying infrastructure.
-
-To ingest SAP logs into Microsoft Sentinel, you must have the Microsoft Sentinel SAP data connector installed in your SAP environment. For the deployment, we recommend that you use a Docker container on an Azure virtual machine, as described in this tutorial.
-
-After the SAP data connector is deployed, deploy the SAP solution security content to gain insight into your organization's SAP environment and improve any related security operation capabilities.
-
-In this article, you'll learn how to:
-
-> [!div class="checklist"]
-> * Prepare your SAP system for the SAP data connector deployment.
-> * Use a Docker container and an Azure virtual machine (VM) to deploy the SAP data connector.
-> * Deploy the SAP solution security content in Microsoft Sentinel.
-
-> [!NOTE]
-> Extra steps are required to deploy your SAP data connector over a Secure Network Communications (SNC) connection. For more information, see [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md).
->
-## Prerequisites
-
-To deploy the Microsoft Sentinel SAP data connector and security content as described in this tutorial, you must meet the following prerequisites:
-
-| Area | Description |
-| | |
-|**Azure prerequisites** | **Access to Microsoft Sentinel**. Make a note of your Microsoft Sentinel workspace ID and key to use in this tutorial when you [deploy your SAP data connector](#deploy-your-sap-data-connector). <br><br>To view these details from Microsoft Sentinel, go to **Settings** > **Workspace settings** > **Agents management**. <br><br>**Ability to create Azure resources**. For more information, see the [Azure Resource Manager documentation](../azure-resource-manager/management/manage-resources-portal.md). <br><br>**Access to your Azure key vault**. This tutorial describes the recommended steps for using your Azure key vault to store your credentials. For more information, see the [Azure Key Vault documentation](../key-vault/index.yml). |
-|**System prerequisites** | **Software**. The SAP data connector deployment script automatically installs software prerequisites. For more information, see [Automatically installed software](#automatically-installed-software). <br><br> **System connectivity**. Ensure that the VM serving as your SAP data connector host has access to: <br>- Microsoft Sentinel <br>- Your Azure key vault <br>- The SAP environment host, via the following TCP ports: *32xx*, *5xx13*, and *33xx*, *48xx* (in case SNC is used) where *xx* is the SAP instance number. <br><br>Make sure that you also have an SAP user account in order to access the SAP software download page.<br><br>**System architecture**. The SAP solution is deployed on a VM as a Docker container, and each SAP client requires its own container instance. For sizing recommendations, see [Recommended virtual machine sizing](sap-solution-detailed-requirements.md#recommended-virtual-machine-sizing). <br>Your VM and the Microsoft Sentinel workspace can be in different Azure subscriptions, and even different Azure AD tenants.|
-|**SAP prerequisites** | **Supported SAP versions**. We recommend using [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on older SAP version [SAP_BASIS 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows).<br><br> **SAP system details**. Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as`001`<br><br>**SAP NetWeaver instance access**. Access to your SAP instances must use one of the following options: <br>- [SAP ABAP user/password](#configure-your-sap-system). <br>- A user with an X509 certificate, using SAP CRYPTOLIB PSE. This option might require expert manual steps.<br><br>**Support from your SAP team**. You'll need the support of your SAP team to help ensure that your SAP system is [configured correctly](#configure-your-sap-system) for the solution deployment. |
--
-### Automatically installed software
-
-The [SAP data connector deployment script](#deploy-your-sap-data-connector) installs the following software on your VM by using *sudo* (root) privileges:
--- [Unzip](https://www.microsoft.com/en-us/p/unzip/9mt44rnlpxxt?activetab=pivot:overviewtab)-- [NetCat](https://sectools.org/tool/netcat/)-- [Python 3.6 or later](https://www.python.org/downloads/)-- [Python 3-pip](https://pypi.org/project/pip/)-- [Docker](https://www.docker.com/)-
-## Configure your SAP system
-
-This procedure describes how to ensure that your SAP system has the correct prerequisites installed and is configured for the Microsoft Sentinel SAP data connector deployment.
-
-> [!IMPORTANT]
-> Perform this procedure together with your SAP team to ensure correct configurations.
->
-
-**To configure your SAP system for the SAP data connector**:
-
-1. Ensure that the following SAP notes are deployed in your system, depending on your version:
-
- | SAP&nbsp;BASIS&nbsp;versions | Required note |
- | | |
- | - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | 2641084: Standardized read access for the Security Audit log data |
- | - 700 to 702<br>- 710 to 711, 730, 731, 740, and 750 | 2173545: CD: CHANGEDOCUMENT_READ_ALL |
- | - 700 to 702<br>- 710 to 711, 730, 731, and 740<br>- 750 to 752 | 2502336: CD (Change Document): RSSCD100 - read only from archive, not from database |
--
- Later versions don't require the extra notes. For more information, see the [SAP support Launchpad site](https://support.sap.com/en/https://docsupdatetracker.net/index.html). Log in with an SAP user account.
-
-1. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
-
- - **SAP version 750 or later**: Install the SAP change request *NPLK900202*
- - **SAP version 740**: Install the SAP change request *NPLK900201*
-
- When you're performing this step, be sure to use binary mode to transfer the files to the SAP system, and use the **STMS_IMPORT** SAP transaction code.
-
- > [!NOTE]
- > In the SAP **Import Options** area, the **Ignore Invalid Component Version** option might be displayed. If it is displayed, select this option before you continue.
- >
-
-1. Create a new SAP role named **/MSFTSEN/SENTINEL_CONNECTOR** by importing the SAP change request *NPLK900163*. Use the **STMS_IMPORT** SAP transaction code.
-
- Verify that the role is created with the required permissions, such as:
-
- :::image type="content" source="media/sap/required-sap-role-authorizations.png" alt-text="Required SAP role permissions for the Microsoft Sentinel SAP data connector.":::
-
- For more information, see [authorizations for the ABAP user](sap-solution-detailed-requirements.md#required-abap-authorizations).
-
-1. Create a non-dialog RFC/NetWeaver user for the SAP data connector, and attach the newly created */MSFTSEN/SENTINEL_CONNECTOR* role.
-
- - After you attach the role, verify that the role permissions are distributed to the user.
- - This process requires that you use a username and password for the ABAP user. After the new user is created and has the required permissions, be sure to change the ABAP user password.
-
-1. Download and place the SAP NetWeaver RFC SDK 7.50 for Linux on x86_64 64 BIT version on your VM, because it's required during the installation process.
-
- For example, find the SDK on the [SAP software download site](https://launchpad.support.sap.com/#/softwarecenter/template/products/_APP=00200682500000001943&_EVENT=DISPHIER&HEADER=Y&FUNCTIONBAR=N&EVENT=TREE&NE=NAVIGATE&ENR=01200314690100002214&V=MAINT) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**. Be sure to download the **LINUX ON X86_64 65BIT** option. Copy the file, such as by using SCP, to your VM.
-
- You'll need an SAP user account to access the SAP software download page.
-
-1. (Optional) The SAP *Auditlog* file is used system-wide and supports multiple SAP clients. However, each instance of the Microsoft Sentinel SAP solution supports a single SAP client only.
-
- Therefore, if you have a multi-client SAP system, to avoid data duplication, we recommend that you enable the *Auditlog* file only for the client where you deploy the SAP solution.
--
-## Deploy a Linux VM for your SAP data connector
-
-This procedure describes how to use the Azure CLI to deploy an Ubuntu server 18.04 LTS VM and assign it with a [system-managed identity](../active-directory/managed-identities-azure-resources/index.yml).
-
-> [!TIP]
-> You can also deploy the data connector on RHEL version 7.7 or later or on SUSE version 15 or later. Note that any OS and patch levels must be completely up to date.
->
-
-**To deploy and prepare your Ubuntu VM, do the following**:
-
-1. Make sure that you have enough disk space for the Docker container runtime environment so that you'll have enough space for your operation agent logs.
-
- For example, in Ubuntu, you can mount a disk to the `/var/lib/docker` directory before installing the container, as by default you may have little space allocated to the `/var` directory.
-
- For more information, see [Recommended virtual machine sizing](sap-solution-detailed-requirements.md#recommended-virtual-machine-sizing).
-
-1. Use the following command as an example for deploying your VM, inserting the values for your resource group and VM name where indicated.
-
- ```azurecli
- az vm create --resource-group [resource group name] --name [VM Name] --image UbuntuLTS --admin-username azureuser --data-disk-sizes-gb 10 ΓÇô --size Standard_DS2 --generate-ssh-keys --assign-identity
- ```
-
-1. On your new VM, install:
-
- - [Venv](https://docs.python.org/3.8/library/venv.html), with Python version 3.8 or later.
- - The [Azure CLI](/cli/azure/), version 2.8.0 or later.
-
-> [!IMPORTANT]
-> Be sure to apply any security best practices for your organization, just as you would for any other VM.
->
-
-For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../virtual-machines/linux/quick-create-cli.md).
-
-## Create a key vault for your SAP credentials
-
-In this tutorial, you use a newly created or dedicated [Azure key vault](../key-vault/index.yml) to store credentials for your SAP data connector.
-
-To create or dedicate an Azure key vault, do the following:
-
-1. Create a new Azure key vault, or choose an existing one to dedicate to your SAP data connector deployment.
-
- For example, to create a new key vault, run the following commands. Be sure to use the name of your key vault resource group and enter your key vault name.
-
- ```azurecli
- kvgp=<KVResourceGroup>
-
- kvname=<keyvaultname>
-
- #Create a key vault
- az keyvault create \
- --name $kvname \
- --resource-group $kvgp
- ```
-
-1. Assign an access policy, including GET, LIST, and SET permissions to the VM's managed identity, by using one of the following methods:
-
- - **The Azure portal**:
-
- 1. In your Azure key vault, select **Access Policies** > **Add Access Policy - Secret Permissions: Get, List, and Set** > **Select Principal**.
- 1. Enter your [VM name](#deploy-a-linux-vm-for-your-sap-data-connector), and then select **Add** > **Save**.
-
- For more information, see the [Key Vault documentation](../key-vault/general/assign-access-policy-portal.md).
-
- - **The Azure CLI**:
-
- 1. Run the following command to get the [VM's principal ID](#deploy-a-linux-vm-for-your-sap-data-connector). Be sure to enter the name of your Azure resource group.
-
- ```azurecli
- VMPrincipalID=$(az vm show -g [resource group] -n [Virtual Machine] --query identity.principalId -o tsv)
- ```
-
- Your principal ID is displayed for you to use in the next step.
-
- 1. Run the following command to assign the VM access permissions to the key vault. Be sure to enter the name of your resource group and the principal ID value that was returned from the previous step.
-
- ```azurecli
- az keyvault set-policy -n [key vault] -g [resource group] --object-id $VMPrincipalID --secret-permissions get list set
- ```
-
-## Deploy your SAP data connector
-
-The deployment script of the Microsoft Sentinel SAP data connector installs the [required software](#automatically-installed-software) and then installs the connector on your [newly created VM](#deploy-a-linux-vm-for-your-sap-data-connector). It also stores credentials in your [dedicated key vault](#create-a-key-vault-for-your-sap-credentials).
-
-The deployment script of the SAP data connector is stored in [Microsoft Sentinel GitHub repository > DataConnectors > SAP](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh).
-
-To run the SAP data connector deployment script, you'll need the following items:
--- Your Microsoft Sentinel workspace details, as listed in the [Prerequisites](#prerequisites) section.-- The SAP system details, as listed in the [Prerequisites](#prerequisites) section.-- Access to a VM user with sudo privileges.-- The SAP user that you created in [Configure your SAP system](#configure-your-sap-system), with the **/MSFTSEN/SENTINEL_CONNECTOR** role applied.-- The help of your SAP team.-
-To run the SAP solution deployment script, do the following:
-
-1. Run the following command to deploy the SAP solution on your VM:
-
- ```azurecli
- wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh
- ```
-
-1. Follow the on-screen instructions to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
-
- ```azurecli
- The process has been successfully completed, thank you!
- ```
-
- Microsoft Sentinel starts to retrieve SAP logs for the configured time span, until 24 hours before the initialization time.
-
-1. We recommend that you review the system logs to make sure that the data connector is transmitting data. Run:
-
- ```bash
- docker logs -f sapcon-[SID]
- ```
-
-## Deploy SAP security content
-
-Deploy the [SAP security content](sap-solution-security-content.md) from the Microsoft Sentinel **Solutions** and **Watchlists** areas.
-
-The **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution enables the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
-
-Add SAP-related watchlists to your Microsoft Sentinel workspace manually.
-
-To deploy SAP solution security content, do the following:
-
-1. In Microsoft Sentinel, on the left pane, select **Solutions (Preview)**.
-
- The **Solutions** page displays a filtered, searchable list of solutions.
-
-1. To open the SAP solution page, select **Microsoft Sentinel - Continuous Threat Monitoring for SAP (preview)**.
-
- :::image type="content" source="media/sap/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Continuous Threat Monitoring for SAP (preview)' solution pane.":::
-
-1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace where you want to deploy the solution.
-
-1. Select **Next** to cycle through the **Data Connectors** **Analytics** and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
-
- The default name for the workbook is **SAP - System Applications and Products - Preview**. Change it in the workbooks tab as needed.
-
- For more information, see [Microsoft Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md).
-
-1. On the **Review + create tab** pane, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
-
- > [!TIP]
- > You can also select **Download a template** for a link to deploy the solution as code.
-
-1. After the deployment is completed, a confirmation message appears at the upper right.
-
- To display the newly deployed content, go to:
-
- - **Threat Management** > **Workbooks** > **My workbooks**, to find the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks).
- - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
-
-1. Add SAP-related watchlists to use in your search, detection rules, threat hunting, and response playbooks. These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. Do the following:
-
- a. Download SAP watchlists from the Microsoft Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists.
- b. In the Microsoft Sentinel **Watchlists** area, add the watchlists to your Microsoft Sentinel workspace. Use the downloaded CSV files as the sources, and then customize them as needed for your environment.
-
- [![SAP-related watchlists added to Microsoft Sentinel.](media/sap/sap-watchlists.png)](media/sap/sap-watchlists.png#lightbox)
-
- For more information, see [Use Microsoft Sentinel watchlists](watchlists.md) and [Available SAP watchlists](sap-solution-security-content.md#available-watchlists).
-
-1. In Microsoft Sentinel, go to the **Microsoft Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection:
-
- [![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](media/sap/sap-data-connector.png)](media/sap/sap-data-connector.png#lightbox)
-
- SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
-
- [![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](media/sap/sap-logs-in-sentinel.png)](media/sap/sap-logs-in-sentinel.png#lightbox)
-
- For more information, see [Microsoft Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
--
-## Update your SAP data connector
-
-If you have a Docker container already running with an earlier version of the SAP data connector, run the SAP data connector update script to get the latest features available.
-
-Make sure that you have the most recent versions of the relevant deployment scripts from the Microsoft Sentinel github repository.
-
-Run:
-
-```azurecli
-wget -O sapcon-instance-update.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-instance-update.sh && bash ./sapcon-instance-update.sh
-```
-
-The SAP data connector Docker container on your machine is updated.
-
-Be sure to check for any other available updates, such as:
--- Relevant SAP change requests, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR).-- Microsoft Sentinel SAP security content, in the **Microsoft Sentinel Continuous Threat Monitoring for SAP** solution.-- Relevant watchlists, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists).-
-## Collect SAP HANA audit logs
-
-If you have SAP HANA database audit logs configured with Syslog, you'll also need to configure your Log Analytics agent to collect the Syslog files.
-
-1. Make sure that the SAP HANA audit log trail is configured to use Syslog, as described in *SAP Note 0002624117*, which is accessible from the [SAP Launchpad support site](https://launchpad.support.sap.com/#/notes/0002624117). For more information, see:
-
- - [SAP HANA Audit Trail - Best Practice](https://archive.sap.com/documents/docs/DOC-51098)
- - [Recommendations for Auditing](https://help.sap.com/viewer/742945a940f240f4a2a0e39f93d3e2d4/2.0.05/en-US/5c34ecd355e44aa9af3b3e6de4bbf5c1.html)
-
-1. Check your operating system Syslog files for any relevant HANA database events.
-
-1. Install and configure a Log Analytics agent on your machine:
-
- a. Sign in to your HANA database operating system as a user with sudo privileges.
- b. In the Azure portal, go to your Log Analytics workspace. On the left pane, under **Settings**, select **Agents management** > **Linux servers**.
- c. Under **Download and onboard agent for Linux**, copy the code that's displayed in the box to your terminal, and then run the script.
-
- The Log Analytics agent is installed on your machine and connected to your workspace. For more information, see [Install Log Analytics agent on Linux computers](../azure-monitor/agents/agent-linux.md) and [OMS Agent for Linux](https://github.com/microsoft/OMS-Agent-for-Linux) on the Microsoft GitHub repository.
-
-1. Refresh the **Agents Management > Linux servers** tab to confirm that you have **1 Linux computers connected**.
-
-1. On the left pane, under **Settings**, select **Agents configuration**, and then select the **Syslog** tab.
-
-1. Select **Add facility** to add the facilities you want to collect.
-
- > [!TIP]
- > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities, check them against your Syslog logs, and then remove any that aren't relevant.
- >
-
-1. In Microsoft Sentinel, check to confirm that HANA database events are now shown in the ingested logs.
-
-## Next steps
-
-Learn more about the Microsoft Sentinel SAP solutions:
--- [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)-- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)-- [Microsoft Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: built-in security content](sap-solution-security-content.md)-- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)-
-For more information, see [Microsoft Sentinel solutions](sentinel-solutions.md).
sentinel Sap Solution Deploy Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-snc.md
- Title: Deploy the Microsoft Sentinel SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
-description: Learn how to deploy the Microsoft Sentinel data connector for SAP environments with a secure connection via SNC, for the NetWeaver/ABAP interface based logs.
---- Previously updated : 11/09/2021--
-# Deploy the Microsoft Sentinel SAP data connector with SNC
--
-This article describes how to deploy the Microsoft Sentinel SAP data connector when you have a secure connection to SAP via Secure Network Communications (SNC) for the NetWeaver/ABAP interface based logs.
-
-> [!NOTE]
-> The default, and most recommended process for deploying the Microsoft Sentinel SAP data connector is by [using an Azure VM](sap-deploy-solution.md). This article is intended for advanced users.
-
-> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-## Prerequisites
-
-The basic prerequisites for deploying your Microsoft Sentinel SAP data connector are the same regardless of your deployment method.
-
-Make sure that your system complies with the prerequisites documented in the main [SAP data connector deployment procedure](sap-deploy-solution.md#prerequisites) before you start.
-
-Other prerequisites for working with SNC include:
--- **A secure connection to SAP with SNC**. Define the connection-specific SNC parameters in the repository constants for the AS ABAP system you're connecting to. For more information, see the relevant [SAP community wiki page](https://wiki.scn.sap.com/wiki/display/Security/Securing+Connections+to+AS+ABAP+with+SNC).--- **The SAPCAR utility**, downloaded from the SAP Service Marketplace. For more information, see the [SAP Installation Guide](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/2021.0/en-US/467291d0dc104d19bba073a0380dc6b4.html)-
-For more information, see [Microsoft Sentinel SAP solution detailed SAP requirements (public preview)](sap-solution-detailed-requirements.md).
-
-## Create your Azure key vault
-
-Create an Azure key vault that you can dedicate to your Microsoft Sentinel SAP data connector.
-
-Run the following command to create your Azure key vault and grant access to an Azure service principal:
-
-``` azurecli
-kvgp=<KVResourceGroup>
-
-kvname=<keyvaultname>
-
-spname=<sp-name>
-
-kvname=<keyvaultname>
-# Optional when Azure MI not enabled - Create sp user for AZ cli connection, save details for env.list file
-
-az ad sp create-for-rbac ΓÇôname $spname --role Contributor --scopes /subscriptions/<subscription_id>
-
-SpID=$(az ad sp list ΓÇôdisplay-name $spname ΓÇôquery ΓÇ£[].appIdΓÇ¥ --output tsv
-
-#Create key vault
-az keyvault create \
- --name $kvname \
- --resource-group $kvgp
-
-# Add access to SP
-az keyvault set-policy --name $kvname --resource-group $kvgp --object-id $spID --secret-permissions get list set
-```
-
-For more information, see [Quickstart: Create a key vault using the Azure CLI](../key-vault/general/quick-create-cli.md).
-
-## Add Azure Key Vault secrets
-
-To add Azure Key Vault secrets, run the following script, with your own system ID and the credentials you want to add:
-
-```azurecli
-#Add Azure Log ws ID
-az keyvault secret set \
- --name <SID>-LOG_WS_ID \
- --value "<logwsod>" \
- --description SECRET_AZURE_LOG_WS_ID --vault-name $kvname
-
-#Add Azure Log ws public key
-az keyvault secret set \
- --name <SID>-LOG_WS_PUBLICKEY \
- --value "<loswspubkey>" \
- --description SECRET_AZURE_LOG_WS_PUBLIC_KEY --vault-name $kvname
-```
-
-For more information, see the [az keyvault secret](/cli/azure/keyvault/secret) CLI documentation.
-
-## Deploy the SAP data connector
-
-This procedure describes how to deploy the SAP data connector on a VM when connecting via SNC.
-
-We recommend that you perform this procedure after you have a [key vault](#create-your-azure-key-vault) ready with your [SAP credentials](#add-azure-key-vault-secrets).
-
-**To deploy the SAP data connector**:
-
-1. On your data connector VM, download the latest SAP NW RFC SDK from the [SAP Launchpad site](https://support.sap.com) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**.
-
- > [!NOTE]
- > You'll need your SAP user sign-in information in order to access the SDK, and you must download the SDK that matches your operating system.
- >
- > Make sure to select the **LINUX ON X86_64** option.
-
-1. Create a new folder with a meaningful name, and copy the SDK zip file into your new folder.
-
-1. Clone the Microsoft Sentinel solution GitHub repo onto your data connector VM, and copy Microsoft Sentinel SAP solution **systemconfig.ini** file into your new folder.
-
- For example:
-
- ```bash
- mkdir /home/$(pwd)/sapcon/<sap-sid>/
- cd /home/$(pwd)/sapcon/<sap-sid>/
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
- cp <**nwrfc750X_X-xxxxxxx.zip**> /home/$(pwd)/sapcon/<sap-sid>/
- ```
-
-1. Edit the **systemconfig.ini** file as needed, using the embedded comments as a guide.
-
- You'll need to edit all configurations except for the key vault secrets. For more information, see [Manually configure the SAP data connector](sap-solution-deploy-alternate.md#manually-configure-the-sap-data-connector).
-
-1. Define the logs that you want to ingest into Microsoft Sentinel using the instructions in the **systemconfig.ini** file.
-
- For example, see [Define the SAP logs that are sent to Microsoft Sentinel](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
-
- > [!NOTE]
- > Relevant logs for SNC communications are only those logs that are retrieved via the NetWeaver / ABAP interface. SAP Control and HANA logs are out of scope for SNC.
- >
-
-1. Define the following configurations using the instructions in the **systemconfig.ini** file:
-
- - Whether to include user email addresses in audit logs
- - Whether to retry failed API calls
- - Whether to include cexal audit logs
- - Whether to wait an interval of time between data extractions, especially for large extractions
-
- For more information, see [SAL logs connector configurations](sap-solution-deploy-alternate.md#sal-logs-connector-settings).
-
-1. Save your updated **systemconfig.ini** file in the **sapcon** directory on your VM.
-
-1. Download and run the pre-defined Docker image with the SAP data connector installed. Run:
-
- ```bash
- docker pull docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
- docker create -v $(pwd):/sapcon-app/sapcon/config/system -v /home/azureuser /sap/sec:/sapcon-app/sec --env SCUDIR=/sapcon-app/sec --name sapcon-snc mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
- ```
-
-## Post-deployment SAP system procedures
-
-After deploying your SAP data connector, perform the following SAP system procedures:
-
-1. Download the SAP Cryptographic Library from the [SAP Service Marketplace](https://launchpad.support.sap.com/#/) > **Software Downloads** > **Browse our Download Catalog** > **SAP Cryptographic Software**.
-
- For more information, see the [SAP Installation Guide](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/86921b29cac044d68d30e7b125846860.html).
-
-1. Use the SAPCAR utility to extract the library files, and deploy them to your SAP data connector VM, in the `<sec>` directory.
-
-1. Verify that you have permissions to run the library files.
-
-1. Define an environment variable named **SECUDIR**, with a value of the full path to the `<sec>` directory.
-
-1. Create a personal security environment (PSE). The **sapgenspe** command-line tool is available in your `<sec>` directory on your SAP data connector VM.
-
- For example:
-
- ```bash
- ./sapgenpse get_pse -p my_pse.pse -noreq -x my_pin "CN=sapcon.com, O=my_company, C=IL"
- ```
-
- For more information, see [Creating a Personal Security Environment](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/285bb1fda3fa472c8d9205bae17a6f95.html) in the SAP documentation.
-
-1. Create credentials for your PSE. For example:
-
- ```bash
- ./sapgenpse seclogin -p my_pse.pse -x my_pin -O MXDispatcher_Service_User
- ```
-
- For more information, see [Creating Credentials](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/d8b50371667740e797e6c9f0e9b7141f.html) in the SAP documentation.
-
-1. Exchange the Public-Key certificates between the Identity Center and the AS ABAP's SNC PSE.
-
- For example, to export the Identity Center's Public-Key certificate, run:
-
- ```bash
- ./sapgenpse export_own_cert -o my_cert.crt -p my_pse.pse -x abcpin
- ```
-
- Import the certificate to the AS ABAP's SNC PSE, export it from the PSE, and then import it back to the Identity Center.
-
- For example, to import the certificate to the Identity Center, run:
-
- ```bash
- ./sapgenpse maintain_pk -a full_path/my_secure_dir/my_exported_cert.crt -p my_pse.pse -x my_pin
- ```
-
- For more information, see [Exchanging the Public-Key Certificates](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/7bbf90b29c694e6080e968559170fbcd.html) in the SAP documentation.
-
-## Edit the SAP data connector configuration
-
-1. On your SAP data connector VM, navigate to the **systemconfig.ini** file and define the following parameters with the relevant values:
-
- ```ini
- [Secrets Source]
- secrets = AZURE_KEY_VAULT
- ```
-
-1. In your [Azure key vault](#create-your-azure-key-vault), generate the following secrets:
-
- - `<Interprefix>-ABAPSNCPARTNERNAME`, where the value is the `<Relevant DN details>`
- - `<Interprefix>-ABAPSNCLIB`, where the value is the `<lib_Path>`
- - `<Interprefix>-ABAPX509CERT`, where the value is the `<Certificate_Code>)`
-
- For example:
-
- ```ini
- S4H-ABAPSNCPARTNERNAME = 'p:CN=help.sap.com, O=SAP_SE, C=IL' (Relevant DN)
- S4H-ABAPSNCLIB = 'home/user/sec-dir' (Relevant directory)
- S4H-ABAPX509CERT = 'MIIDJjCCAtCgAwIBAgIBNzA ... NgalgcTJf3iUjZ1e5Iv5PLKO' (Relevant certificate code)
- ```
-
- > [!NOTE]
- > By default, the `<Interprefix>` value is your SID, such as `A4H-<ABAPSNCPARTNERNAME>`.
- >
-
-If you're entering secrets directly to the configuration file, define the parameters as follows:
-
-```ini
-[Secrets Source]
-secrets = DOCKER_FIXED
-[ABAP Central Instance]
-snc_partnername = <Relevant_DN_Deatils>
-snc_lib = <lib_Path>
-x509cert = <Certificate_Code>
-For example:
-snc_partnername = p:CN=help.sap.com, O=SAP_SE, C=IL (Relevant DN)
-snc_lib = /sapcon-app/sec/libsapcrypto.so (Relevant directory)
-x509cert = MIIDJjCCAtCgAwIBAgIBNzA ... NgalgcTJf3iUjZ1e5Iv5PLKO (Relevant certificate code)
-```
-
-### Attach the SNC parameters to your user
-
-1. On your SAP data connector VM, call the `SM30` transaction and select to maintain the `USRACLEXT` table.
-
-1. Add a new entry. In the **User** field, enter the communication user that's used to connect to the ABAP system.
-
-1. Enter the SNC name when prompted. The SNC name is the unique, distinguished name provided when you created the Identity Manager PSE. For example: `CN=IDM, OU=SAP, C=DE`
-
- Make sure to add a `p` before the SNC name. For example: `p:CN=IDM, OU=SAP, C=DE`.
-
-1. Select **Save**.
-
-SNC is enabled on your data connector VM.
-
-## Activate the SAP data connector
-
-This procedure describes how to activate the SAP data connector using the secured SNC connection you created using the procedures earlier in this article.
-
-1. Activate the docker image:
-
- ```bash
- docker start sapcon-<SID>
- ```
-
-1. Check the connection. Run:
-
- ```bash
- docker logs sapcon-<SID>
- ```
-
-1. If the connection fails, use the logs to understand the issue.
-
- If you need to, disable the docker image:
-
- ```bash
- docker stop sapcon-<SID>
- ```
-
-For example, issues may occur because of a misconfiguration in the **systemconfig.ini** file or in your Azure key vault, or because some of the steps for creating a secure connection via SNC weren't performed correctly.
-
-Try performing the steps above again to configure a secure connection via SNC. For more information, see also [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md).
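-
-For example, to review only the most recent connector output and filter for errors, you can use standard Docker CLI options. This is only a suggested sketch; `sapcon-A4H` is a placeholder container name:
-
-```bash
-# Show the last 100 log lines of the connector container and highlight error entries
-docker logs --tail 100 sapcon-A4H 2>&1 | grep -i -E "error|fail"
-
-# Or follow the log live while reproducing the issue
-docker logs -f sapcon-A4H
-```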
-
-## Next steps
-
-After your SAP data connector is activated, continue by deploying the **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution. For more information, see [Deploy SAP security content](sap-deploy-solution.md#deploy-sap-security-content).
-
-Deploying the solution enables the SAP data connector to display in Microsoft Sentinel and deploys the SAP workbook and analytics rules. When you're done, manually add and customize your SAP watchlists.
-
-For more information, see:
-
-- [Microsoft Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)
-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-detailed-requirements.md
- Title: Microsoft Sentinel SAP solution detailed SAP requirements | Microsoft Docs
-description: Learn about the detailed SAP system requirements for the Microsoft Sentinel SAP solution.
---- Previously updated : 11/09/2021--
-# Microsoft Sentinel SAP solution detailed SAP requirements (public preview)
--
-The [default procedure for deploying the Microsoft Sentinel SAP solution](sap-deploy-solution.md) includes the required SAP change requests and SAP notes, and provides a built-in role with all required permissions.
-
-This article lists the required SAP change requests, notes, and permissions in detail.
-
-Use this article as a reference if you're an admin, or if you're [deploying the SAP solution manually](sap-solution-deploy-alternate.md). This article is intended for advanced SAP users.
--
-> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
-> [!NOTE]
-> Additional requirements are listed if you're deploying your SAP data connector using a secure SNC connection. For more information, see [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md).
->
-## Recommended virtual machine sizing
-
-The following table describes the recommended sizing for your virtual machine, depending on your intended usage:
-
-|Usage |Recommended sizing |
-|||
-|**Minimum specification**, such as for a lab environment | A *Standard_B2s* VM |
-|**Standard connector** (default) | A *DS2_v2* VM, with: <br>- 2 cores<br>- 8-GB memory |
-|**Multiple connectors** |A *Standard_B4ms* VM, with: <br>- 4 cores<br>- 16-GB memory |
--
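-
-If you're creating a dedicated Azure VM for the connector, you can provision one of the recommended sizes with the Azure CLI. This is an illustrative sketch only; the resource group, VM name, admin user, and image are placeholders you'd replace with your own values:
-
-```bash
-# Hypothetical example: create a DS2_v2 Ubuntu VM for the standard connector
-az vm create \
-  --resource-group myResourceGroup \
-  --name sapcon-vm \
-  --image UbuntuLTS \
-  --size Standard_DS2_v2 \
-  --admin-username azureuser \
-  --generate-ssh-keys
-```
-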
-Also, make sure that you have enough disk space for the Docker container runtime environment and for your agent's operational logs. We recommend that you have 200 GB available.
-
-For example, in Ubuntu, you can mount a disk to the `/var/lib/docker` directory before installing the container, as by default you may have little space allocated to the `/var` directory.
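-
-A minimal sketch of mounting an attached data disk at `/var/lib/docker` might look like the following. The device name `/dev/sdc` is an assumption; confirm the actual device with `lsblk` before formatting anything:
-
-```bash
-# Identify the attached data disk (assumed here to be /dev/sdc)
-lsblk
-
-# Create a filesystem on the data disk and mount it at /var/lib/docker
-sudo mkfs.ext4 /dev/sdc
-sudo mkdir -p /var/lib/docker
-sudo mount /dev/sdc /var/lib/docker
-
-# Optionally persist the mount across reboots
-echo '/dev/sdc /var/lib/docker ext4 defaults 0 2' | sudo tee -a /etc/fstab
-```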
-
-## Required SAP log change requests
-
-The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version:
-
-- **SAP Basis versions 7.50 and higher**, install NPLK900202
-- **For lower versions**, install NPLK900201
-- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900163. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).
-
-> [!NOTE]
-> The required SAP log change requests expose custom RFC FMs that are required for the connector, and do not change any standard or custom objects.
->
-
-## Required SAP notes
-
-If you have an SAP Basis version of 7.50 or lower, install the following SAP notes:
-
-|SAP BASIS versions |Required note |
-|||
-|- 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | 2641084: Standardized read access for the Security Audit log data |
-|- 700 to 702<br>- 710 to 711, 730, 731, 740, and 750 | 2173545: CD: CHANGEDOCUMENT_READ_ALL |
-|- 700 to 702<br>- 710 to 711, 730, 731, and 740<br>- 750 to 752 | 2502336: CD (Change Document): RSSCD100 - read only from archive, not from database |
--
-Access the SAP notes from the [SAP support Launchpad site](https://support.sap.com/en/https://docsupdatetracker.net/index.html).
-
-## Required SAP port access
-
-The data connector agent must be able to reach the SAP environment host via the following TCP ports: 32xx, 5xx13, and 33xx, where xx is the SAP instance number.
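-
-To verify that these ports are reachable from the data connector machine, you can run a simple connectivity check. This is only an illustrative sketch; `sapserver.contoso.com` and instance number `00` are placeholders:
-
-```bash
-# Check the dispatcher (32xx), gateway (33xx), and SAPControl (5xx13) ports for instance 00
-nc -zv sapserver.contoso.com 3200
-nc -zv sapserver.contoso.com 3300
-nc -zv sapserver.contoso.com 50013
-```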
-
-## Required ABAP authorizations
-
-The following table lists the ABAP authorizations required for the backend SAP user to connect Microsoft Sentinel to the SAP logs. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system).
-
-Required authorizations are listed by log type. You only need the authorizations listed for the types of logs you plan to ingest into Microsoft Sentinel.
-
-> [!TIP]
-> To create the role with all required authorizations, deploy the SAP change request NPLK900163 on your SAP system. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role. You, typically an SAP Basis administrator or role owner, must then assign the role to the ABAP user that connects to Microsoft Sentinel.
->
-
-| Authorization Object | Field | Value |
-| -- | -- | -- |
-| **All RFC logs** | | |
-| S_RFC | FUGR | /OSP/SYSTEM_TIMEZONE |
-| S_RFC | FUGR | ARFC |
-| S_RFC | FUGR | STFC |
-| S_RFC | FUGR | RFC1 |
-| S_RFC | FUGR | SDIFRUNTIME |
-| S_RFC | FUGR | SMOI |
-| S_RFC | FUGR | SYST |
-| S_RFC | FUGR/FUNC | SRFC/RFC_SYSTEM_INFO |
-| S_RFC | FUGR/FUNC | THFB/TH_SERVER_LIST |
-| S_TCODE | TCD | SM51 |
-| **ABAP Application Log** | | |
-| S_APPL_LOG | ACTVT | Display |
-| S_APPL_LOG | ALG_OBJECT | * |
-| S_APPL_LOG | ALG_SUBOBJ | * |
-| S_RFC | FUGR | SXBP_EXT |
-| S_RFC | FUGR | /MSFTSEN/_APPLOG |
-| **ABAP Change Documents Log** | | |
-| S_RFC | FUGR | /MSFTSEN/_CHANGE_DOCS |
-| **ABAP CR Log** | | |
-| S_RFC | FUGR | CTS_API |
-| S_RFC | FUGR | /MSFTSEN/_CR |
-| S_TRANSPRT | ACTVT | Display |
-| S_TRANSPRT | TTYPE | * |
-| **ABAP DB Table Data Log** | | |
-| S_RFC | FUGR | /MSFTSEN/_TD |
-| S_TABU_DIS | ACTVT | Display |
-| S_TABU_DIS | DICBERCLS | &NC& |
-| S_TABU_DIS | DICBERCLS | + Any object required for logging |
-| S_TABU_NAM | ACTVT | Display |
-| S_TABU_NAM | TABLE | + Any object required for logging |
-| S_TABU_NAM | TABLE | DBTABLOG |
-| **ABAP Job Log** | | |
-| S_RFC | FUGR | SXBP |
-| S_RFC | FUGR | /MSFTSEN/_JOBLOG |
-| **ABAP Job Log, ABAP Application Log** | | |
-| S_XMI_PRD | INTERFACE | XBP |
-| **ABAP Security Audit Log - XAL** | | |
-| All RFC | S_RFC | FUGR |
-| S_ADMI_FCD | S_ADMI_FCD | AUDD |
-| S_RFC | FUGR | SALX |
-| S_USER_GRP | ACTVT | Display |
-| S_USER_GRP | CLASS | * |
-| S_XMI_PRD | INTERFACE | XAL |
-| **ABAP Security Audit Log - XAL, ABAP Job Log, ABAP Application Log** | | |
-| S_RFC | FUGR | SXMI |
-| S_XMI_PRD | EXTCOMPANY | Microsoft |
-| S_XMI_PRD | EXTPRODUCT | Microsoft Sentinel |
-| **ABAP Security Audit Log - SAL** | | |
-| S_RFC | FUGR | RSAU_LOG |
-| S_RFC | FUGR | /MSFTSEN/_AUDITLOG |
-| **ABAP Spool Log, ABAP Spool Output Log** | | |
-| S_RFC | FUGR | /MSFTSEN/_SPOOL |
-| **ABAP Workflow Log** | | |
-| S_RFC | FUGR | SWRR |
-| S_RFC | FUGR | /MSFTSEN/_WF |
-| **User Data** | | |
-| S_RFC | FUNC | RFC_READ_TABLE |
--
-## Next steps
-
-For more information, see:
-
-- [Deploy the Microsoft Sentinel solution for SAP](sap-deploy-solution.md)
-- [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
-- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)
-- [Microsoft Sentinel SAP solution: available security content](sap-solution-security-content.md)
-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-security-content.md
- Title: Microsoft Sentinel SAP solution - security content reference | Microsoft Docs
-description: Learn about the built-in security content provided by the Microsoft Sentinel SAP solution.
---- Previously updated : 02/22/2022--
-# Microsoft Sentinel SAP solution: security content reference (public preview)
--
-This article details the security content available for the [Microsoft Sentinel SAP solution](sap-deploy-solution.md#deploy-sap-security-content).
-
-Available security content includes a built-in workbook and built-in analytics rules. You can also add SAP-related [watchlists](watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
-
-> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
--
-## Built-in workbooks
-
-Use the following built-in workbooks to visualize and monitor data ingested via the SAP data connector. After deploying the SAP solution, SAP workbooks are found in the **My workbooks** tab.
--
-|Workbook name |Description |Logs |
-||| |
-|<a name="sapsystem-applications-and-products-workbook"></a>**SAP - Audit Log Browser** |Displays data such as: <br><br>General system health, including user sign-ins over time, events ingested by the system, message classes and IDs, and ABAP programs run <br><br>Severities of events occurring in your system <br><br>Authentication and authorization events occurring in your system |Uses data from the following log: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) |
-|**SAP - Suspicious Privileges Operations** | Displays data such as: <br><br>Sensitive and critical assignments <br><br>Actions and changes made to sensitive, privileged users <br><br>Changes made to roles |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPChangeDocsLog_CL](sap-solution-log-reference.md#abap-change-documents-log) |
-|**SAP - Initial Access & Attempts to Bypass SAP Security Mechanisms** | Displays data such as: <br><br>Executions of sensitive programs, code, and function modules <br><br>Configuration changes, including log deactivations <br><br>Changes made in debug mode |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log)<br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
-|**SAP - Persistency & Data Exfiltration** | Displays data such as: <br><br>Internet Communication Framework (ICF) services, including activations and deactivations and data about new services and service handlers <br><br> Insecure operations, including both function modules and programs <br><br>Direct access to sensitive tables | Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[ABAPSpoolLog_CL](sap-solution-log-reference.md#abap-spool-log)<br><br>[ABAPSpoolOutputLog_CL](sap-solution-log-reference.md#apab-spool-output-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
--
-For more information, see [Tutorial: Visualize and monitor your data](monitor-your-data.md) and [Deploy SAP continuous threat monitoring (public preview)](sap-deploy-solution.md).
-
-## Built-in analytics rules
-
-The following tables list the built-in [analytics rules](sap-deploy-solution.md#deploy-sap-security-content) that are included in the Microsoft Sentinel SAP solution, deployed from the Microsoft Sentinel Solutions marketplace.
-
-### Built-in SAP analytics rules for initial access
-
-|Rule name |Description |Source action |Tactics |
-|||||
-|**SAP - High - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
-|**SAP - High - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
-|**SAP - High- Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in an SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged) function. | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval<br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
-|**SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons. | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval using RFC<br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
-|**SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
-|**SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
-|**SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
--
-### Built-in SAP analytics rules for data exfiltration
-
-|Rule name |Description |Source action |Tactics |
-|||||
-|**SAP - Medium - FTP for non authorized servers** |Identifies an FTP connection for a non-authorized server. | Create a new FTP connection, such as by using the FTP_CONNECT Function Module. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Initial Access, Command and Control |
-|**SAP - Medium - Insecure FTP servers configuration** |Identifies insecure FTP server configurations, such as when an FTP allowlist is empty or contains placeholders. | Leave the `SAPFTP_SERVERS` table unmaintained, or maintain values that contain placeholders, using the `SAPFTP_SERVERS_V` maintenance view. (SM30) <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Command and Control |
-|**SAP - Medium - Multiple Files Download** |Identifies multiple file downloads for a user within a specific time-range. | Download multiple files using the SAPGui for Excel, lists, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-|**SAP - Medium - Multiple Spool Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-|**SAP - Medium - Multiple Spool Output Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-|**SAP - Medium - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-|**SAP - Medium - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it in using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control |
-|**SAP - Low - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
-|**SAP - Low - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
--
-### Built-in SAP analytics rules for persistency
-
-|Rule name |Description |Source action |Tactics |
-|||||
-|**SAP - High - Activation or Deactivation of ICF Service** | Identifies activation or deactivation of ICF Services. | Activate a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
-|**SAP - High - Function Module tested** | Identifies the testing of a function module. | Test a function module using `SE37` / `SE80`. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Defense Evasion, Lateral Movement |
-| **SAP - High - HANA DB - User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog* |Privilege Escalation |
-|**SAP - High - New ICF Service Handlers** | Identifies creation of ICF Handlers. | Assign a new handler to a service using SICF.<br><br>**Data sources**: SAPcon - Audit Log | Command and Control, Lateral Movement, Persistence |
-|**SAP - High - New ICF Services** | Identifies creation of ICF Services. | Create a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
-|**SAP - Medium - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
-|**SAP - Medium - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control |
-|**SAP - Low - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
---
-### Built-in SAP analytics rules for attempts to bypass SAP security mechanisms
-
-|Rule name |Description |Source action |Tactics |
-|||||
-|**SAP - High - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
-|**SAP - High - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
-|**SAP - High - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log. | Disable the Security Audit Log using `SM19`/`RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
-|**SAP - High - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution |
-|**SAP - High - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
-|**SAP - High - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control
-|**SAP - High - HANA DB - Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
-|**SAP - High - HANA DB - Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
-|**SAP - High - RFC Execution of a Sensitive Function Module** | Identifies the execution of a sensitive function module by using RFC. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
-|**SAP - High - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
-|**SAP - Medium - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
-|**SAP - Medium - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
-|**SAP - Medium - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution |
-|**SAP - Low - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
--
-### Built-in SAP analytics rules for suspicious privileges operations
-
-|Rule name |Description |Source action |Tactics |
-|||||
-|**SAP - High - Change in Sensitive privileged user** | Identifies changes of sensitive privileged users. <br> <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Change user details / authorizations using `SU01`. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
-|**SAP - High - HANA DB - Assign Admin Authorizations** | Identifies admin privilege or role assignment. | Assign a user with any admin role or privileges. <br><br>**Data sources**: Linux Agent - Syslog | Privilege Escalation |
-|**SAP - High - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access |
-| **SAP - High - Sensitive privileged user makes a change in other user** | Identifies changes of sensitive, privileged users in other users. | Change user details / authorizations using SU01. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
-|**SAP - High - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
-|**SAP - High - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
-|**SAP - High - User Unlocks and uses other users** |Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
-|**SAP - Medium - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-|**SAP - Medium - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
-|**SAP - Medium - Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-|**SAP - Medium - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-|**SAP - Medium - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon - Audit Log | Impact, Privilege Escalation, Persistence |
--
-## Available watchlists
-
-The following table lists the [watchlists](sap-deploy-solution.md#deploy-sap-security-content) available for the Microsoft Sentinel SAP solution, and the fields in each watchlist.
-
-These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. The [SAP watchlists](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists) are available in the Microsoft Sentinel GitHub repository.
-
-|Watchlist name |Description and fields |
-|||
-|<a name="objects"></a>**SAP - Critical Authorization Objects** | Critical Authorizations object, where assignments should be governed. <br><br>- **AuthorizationObject**: An SAP authorization object, such as `S_DEVELOP`, `S_TCODE`, or `Table TOBJ` <br>- **AuthorizationField**: An SAP authorization field, such as `OBJTYP` or `TCD` <br>- **AuthorizationValue**: An SAP authorization field value, such as `DEBUG` <br>- **ActivityField** : SAP activity field. For most cases, this value will be `ACTVT`. For Authorizations objects without an **Activity**, or with only an **Activity** field, filled with `NOT_IN_USE`. <br>- **Activity**: SAP activity, according to the authorization object, such as: `01`: Create; `02`: Change; `03`: Display, and so on. <br>- **Description**: A meaningful Critical Authorization Object description. |
-|**SAP - Excluded Networks** | For internal maintenance of excluded networks, such as to ignore web dispatchers, terminal servers, and so on. <br><br>-**Network**: A network IP address or range, such as `111.68.128.0/17`. <br>-**Description**: A meaningful network description.|
-|**SAP Excluded Users** |System users who are signed in to the system and must be ignored. For example, alerts for multiple sign-ins by the same user. <br><br>- **User**: SAP User <br>-**Description**: A meaningful user description. |
-|<a name="networks"></a>**SAP - Networks** | Internal and maintenance networks for identification of unauthorized logins. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description.|
-|<a name="users"></a>**SAP - Privileged Users** | Privileged users that are under extra restrictions. <br><br>- **User**: the ABAP user, such as `DDIC` or `SAP` <br>- **Description**: A meaningful user description. |
-|<a name= "programs"></a>**SAP - Sensitive ABAP Programs** | Sensitive ABAP programs (reports), where execution should be governed. <br><br>- **ABAPProgram**: ABAP program or report, such as `RSPFLDOC` <br>- **Description**: A meaningful program description.|
-|<a name="module"></a>**SAP - Sensitive Function Module** | Internal and maintenance networks for identification of unauthorized logins. <br><br>- **FunctionModule**: An ABAP function module, such as `RSAU_CLEAR_AUDIT_LOG` <br>- **Description**: A meaningful module description. |
-|<a name="profiles"></a>**SAP - Sensitive Profiles** | Sensitive profiles, where assignments should be governed. <br><br>- **Profile**: SAP authorization profile, such as `SAP_ALL` or `SAP_NEW` <br>- **Description**: A meaningful profile description.|
-|<a name="tables"></a>**SAP - Sensitive Tables** | Sensitive tables, where access should be governed. <br><br>- **Table**: ABAP Dictionary Table, such as `USR02` or `PA008` <br>- **Description**: A meaningful table description. |
-|<a name="roles"></a>**SAP - Sensitive Roles** | Sensitive roles, where assignment should be governed. <br><br>- **Role**: SAP authorization role, such as `SAP_BC_BASIS_ADMIN` <br>- **Description**: A meaningful role description. |
-|<a name="transactions"></a>**SAP - Sensitive Transactions** | Sensitive transactions where execution should be governed. <br><br>- **TransactionCode**: SAP transaction code, such as `RZ11` <br>- **Description**: A meaningful code description. |
-|<a name="systems"></a>**SAP - Systems** | Describes the landscape of SAP systems according to role and usage.<br><br>- **SystemID**: the SAP system ID (SYSID) <br>- **SystemRole**: the SAP system role, one of the following values: `Sandbox`, `Development`, `Quality Assurance`, `Training`, `Production` <br>- **SystemUsage**: The SAP system usage, one of the following values: `ERP`, `BW`, `Solman`, `Gateway`, `Enterprise Portal` |
-|<a name="users"></a>**SAP - Excluded Users** | System users that are logged in and need to be ignored, such as for the Multiple logons by user alert. <br><br>- **User**: SAP User <br>- **Description**: A meaningful user description |
-|<a name="networks"></a>**SAP - Excluded Networks** | Maintain internal, excluded networks for ignoring web dispatchers, terminal servers, and so on. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description |
-|<a name="modules"></a>**SAP - Obsolete Function Modules** | Obsolete function modules, whose execution should be governed. <br><br>- **FunctionModule**: ABAP Function Module, such as TH_SAPREL <br>- **Description**: A meaningful function module description |
-|<a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**:ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description |
-|<a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**:Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description |
-|<a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as http://contoso.com/ <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
---
-## Next steps
-
-For more information, see:
-
-- [Deploy the Microsoft Sentinel solution for SAP](sap-deploy-solution.md)
-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)
-- [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
-- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
-- [Microsoft Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
+
+ Title: Collect SAP HANA audit logs in Microsoft Sentinel | Microsoft Docs
+description: This article explains how to collect audit logs from your SAP HANA database.
+++ Last updated : 03/02/2022++
+# Collect SAP HANA audit logs in Microsoft Sentinel
++
+This article explains how to collect audit logs from your SAP HANA database.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+If you have SAP HANA database audit logs configured with Syslog, you'll also need to configure your Log Analytics agent to collect the Syslog files.
+
+## Collect SAP HANA audit logs
+
+1. Make sure that the SAP HANA audit log trail is configured to use Syslog, as described in *SAP Note 0002624117*, which is accessible from the [SAP Launchpad support site](https://launchpad.support.sap.com/#/notes/0002624117). For more information, see:
+
+ - [SAP HANA Audit Trail - Best Practice](https://archive.sap.com/documents/docs/DOC-51098)
+ - [Recommendations for Auditing](https://help.sap.com/viewer/742945a940f240f4a2a0e39f93d3e2d4/2.0.05/en-US/5c34ecd355e44aa9af3b3e6de4bbf5c1.html)
+
+1. Check your operating system Syslog files for any relevant HANA database events.
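+
+    For example, on many distributions you can search the system log for HANA audit entries. This is only a hedged sketch; the log path and the search term depend on your distribution and audit configuration:
+
+    ```bash
+    # Search the OS syslog for HANA audit trail entries (path varies by distribution)
+    sudo grep -i "hdb" /var/log/messages | tail -n 50
+    ```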
+
+1. Install and configure a Log Analytics agent on your machine:
+
+ 1. Sign in to your HANA database operating system as a user with sudo privileges.
+
+ 1. In the Azure portal, go to your Log Analytics workspace. On the left pane, under **Settings**, select **Agents management** > **Linux servers**.
+
+ 1. Under **Download and onboard agent for Linux**, copy the code that's displayed in the box to your terminal, and then run the script.
+
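+        The onboarding command you copy from the portal generally follows the pattern shown below. This is a hedged sketch only; the workspace ID and key are placeholders, and you should always use the exact command displayed in your own portal:
+
+        ```bash
+        # Download and run the Log Analytics (OMS) agent onboarding script with your workspace ID and key
+        wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh \
+          && sh onboard_agent.sh -w <WORKSPACE_ID> -s <WORKSPACE_PRIMARY_KEY> -d opinsights.azure.com
+        ```
+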
+ The Log Analytics agent is installed on your machine and connected to your workspace. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md) and [OMS Agent for Linux](https://github.com/microsoft/OMS-Agent-for-Linux) on the Microsoft GitHub repository.
+
+1. Refresh the **Agents Management > Linux servers** tab to confirm that you have **1 Linux computers connected**.
+
+1. On the left pane, under **Settings**, select **Agents configuration**, and then select the **Syslog** tab.
+
+1. Select **Add facility** to add the facilities you want to collect.
+
+ > [!TIP]
+ > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities, check them against your Syslog logs, and then remove any that aren't relevant.
+ >
+
+1. In Microsoft Sentinel, check to confirm that HANA database events are now shown in the ingested logs.
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+
sentinel Configuration File Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configuration-file-reference.md
+
+ Title: Configuration file reference | Microsoft Docs
+description: Configuration file reference
+++ Last updated : 03/02/2022++
+# Configuration file reference
sentinel Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit.md
+
+ Title: Enable and configure SAP auditing for Microsoft Sentinel | Microsoft Docs
+description: This article shows you how to enable and configure auditing for the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
+++ Last updated : 04/27/2022++
+# Enable and configure SAP auditing for Microsoft Sentinel
++
+This article shows you how to enable and configure auditing for the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> We strongly recommend that any management of your SAP system is carried out by an experienced SAP system administrator.
+>
+> The steps in this article may vary, depending on your SAP system's version, and should be considered as a sample only.
+
+Some installations of SAP systems may not have the audit log enabled by default. For best results in evaluating the performance and efficacy of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, enable auditing of your SAP system and configure the audit parameters.
+
+## Check if auditing is enabled
+
+1. Sign in to the SAP GUI and run the **RSAU_CONFIG** transaction.
+
+ ![Screenshot showing how to run the R S A U CONFIG transaction.](./media/configure-audit/rsau-config.png)
+
+1. In the **Security Audit Log - Display of Current Configuration** window, find the **Parameter** section within the **Configuration** section. Under **General Parameters**, verify that the **Static security audit active** checkbox is marked.
+
+## Enable auditing
+
+> [!IMPORTANT]
+> Your audit policy should be determined in close collaboration with SAP administrators and your security department.
+
+1. Sign in to the SAP GUI and run the **RSAU_CONFIG** transaction.
+
+1. In the **Security Audit Log** screen, select **Parameter** under the **Security Audit Log Configuration** section in the **Configuration** tree.
+
+1. If the **Static security audit active** checkbox is marked, system-level auditing is turned on. If it isn't, select **Display <-> Change** and mark the **Static security audit active** checkbox.
+
+1. By default, the SAP system logs the client name (terminal ID) rather than client IP address. If you want the system to log by client IP address instead, mark the **Log peer address not terminal ID** checkbox in the **General Parameters** section.
+
+1. If you changed any settings in the **Security Audit Log Configuration - Parameter** section, select **Save** to save the changes. Auditing will be activated only after the server is rebooted.
+
+ ![Screenshot showing R S A U CONFIG parameters.](./media/configure-audit/rsau-config-parameter.png)
+
+1. Right-click **Static Configuration** and select **Create Profile**.
+
+ ![Screenshot showing R S A U CONFIG create profile screen.](./media/configure-audit/create-profile.png)
+
+1. Specify a name for the profile in the **Profile/Filter Number** field.
+
+1. Mark the **Filter for recording active** checkbox.
+
+1. In the **Client** field, enter `*`.
+
+1. In the **User** field, enter `*`.
+
+1. Under **Event Selection**, choose **Classic event selection** and select all the event types in the list.
+
+ Alternatively, choose **Detail event selection**, review the list of message IDs listed in the [Recommended audit categories](#recommended-audit-categories) section of this article, and configure them in **Detail event selection**.
+
+1. Select **Save**.
+
+ ![Screenshot showing Static profile settings.](./media/configure-audit/create-profile-settings.png)
+
+1. You'll see that the **Static Configuration** section displays the newly created profile. Right-click the profile and select **Activate**.
+
+1. In the confirmation window select **Yes** to activate the newly created profile.
+
+### Recommended audit categories
+
+The following table lists the message IDs used by the Continuous Threat Monitoring for SAP solution. For analytics rules to detect events properly, we strongly recommend configuring an audit policy that includes, at a minimum, the message IDs listed below.
+
+| Message ID | Message text | Category name | Event Weighting | Class Used in Rules |
+| - | - | - | - | - |
+| AU1 | Logon successful (type=&A, method=&C) | Logon | Severe | Used |
+| AU2 | Logon failed (reason=&B, type=&A, method=&C) | Logon | Critical | Used |
+| AU3 | Transaction &A started. | Transaction Start | Non-Critical | Used |
+| AU5 | RFC/CPIC logon successful (type=&A, method=&C) | RFC Login | Non-Critical | Used |
+| AU6 | RFC/CPIC logon failed, reason=&B, type=&A, method=&C | RFC Login | Critical | Used |
+| AU7 | User &A created. | User Master Record Change | Critical | Used |
+| AU8 | User &A deleted. | User Master Record Change | Severe | Used |
+| AU9 | User &A locked. | User Master Record Change | Severe | Used |
+| AUA | User &A unlocked. | User Master Record Change | Severe | Used |
+| AUB | Authorizations for user &A changed. | User Master Record Change | Severe | Used |
+| AUD | User master record &A changed. | User Master Record Change | Severe | Used |
+| AUE | Audit configuration changed | System | Critical | Used |
+| AUF | Audit: Slot &A: Class &B, Severity &C, User &D, Client &E, &F | System | Critical | Used |
+| AUI | Audit: Slot &A Inactive | System | Critical | Used |
+| AUJ | Audit: Active status set to &1 | System | Critical with Monitor Alert | Used |
+| AUK | Successful RFC call &C (function group = &A) | RFC Start | Non-Critical | Used |
+| AUM | User &B locked in client &A after errors in password checks | Logon | Critical with Monitor Alert | Used |
+| AUO | Logon failed (reason = &B, type = &A) | Logon | Severe | Used |
+| AUP | Transaction &A locked | Transaction Start | Severe | Used |
+| AUQ | Transaction &A unlocked | Transaction Start | Severe | Used |
+| AUR | &A &B created | User Master Record Change | Severe | Used |
+| AUT | &A &B changed | User Master Record Change | Severe | Used |
+| AUW | Report &A started | Report Start | Non-Critical | Used |
+| AUY | Download &A Bytes to File &C | Other | Severe | Used |
+| BU1 | Password check failed for user &B in client &A | Other | Critical with Monitor Alert | Used |
+| BU2 | Password changed for user &B in client &A | User Master Record Change | Non-Critical | Used |
+| BU4 | Dynamic ABAP code: Event &A, event type &B, check total &C | Other | Non-Critical | Used |
+| BUG | HTTP Security Session Management was deactivated for client &A. | Other | Critical with Monitor Alert | Used |
+| BUI | SPNego replay attack detected (UPN=&A) | Logon | Critical | Used |
+| BUV | Invalid hash value &A. The context contains &B. | User Master Record Change | Critical | Used |
+| BUW | A refresh token issued to client &A was used by client &B. | User Master Record Change | Critical | Used |
+| CUK | C debugging activated | Other | Critical | Used |
+| CUL | Field content in debugger changed by user &A: &B (&C) | Other | Critical | Used |
+| CUM | Jump to ABAP Debugger by user &A: &B (&C) | Other | Critical | Used |
+| CUN | A process was stopped from the debugger by user &A (&C) | Other | Critical | Used |
+| CUO | Explicit database operation in debugger by user &A: &B (&C) | Other | Critical | Used |
+| CUP | Non-exclusive debugging session started by user &A (&C) | Other | Critical | Used |
+| CUS | Logical file name &B is not a valid alias for logical file name &A | Other | Severe | Used |
+| CUZ | Generic table access by RFC to &A with activity &B | RFC Start | Critical | Used |
+| DU1 | FTP server allowlist is empty | RFC Start | Severe | Used |
+| DU2 | FTP server allowlist is non-secure due to use of placeholders | RFC Start | Severe | Used |
+| DU8 | FTP connection request for server &A successful | RFC Start | Non-Critical | Used |
+| DU9 | Generic table access call to &A with activity &B (auth. check: &C ) | Transaction Start | Non-Critical | Used |
+| DUH | OAuth 2.0: Token declared invalid (OAuth client=&A, user=&B, token type=&C) | User Master Record Change | Severe with Monitor Alert | Used |
+| EU1 | System change options changed ( &A to &B ) | System | Critical | Used |
+| EU2 | Client &A settings changed ( &B ) | System | Critical | Used |
+| EUF | Could not call RFC function module &A | RFC Start | Non-Critical | Used |
+| FU0 | Exclusive security audit log medium changed (new status &A) | System | Critical | Used |
+| FU1 | RFC function &B with dynamic destination &C was called in program &A | RFC Start | Non-Critical | Used |
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
+
+ Title: Deploy the Microsoft Sentinel SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
+description: This article shows you how to deploy the **Microsoft Sentinel data connector for SAP** to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications.
++++ Last updated : 05/03/2022++
+# Deploy the Microsoft Sentinel SAP data connector with SNC
++
+This article shows you how to deploy the **Microsoft Sentinel data connector for SAP** to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC).
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Continuous Threat Monitoring for SAP data connector agent typically connects to an SAP ABAP server using an RFC connection, and a user's username and password for authentication.
+
+However, some environments may require that the connection use an encrypted channel and that client certificates be used for authentication. In these cases, you can use SAP Secure Network Communications (SNC) for this purpose, following the steps outlined in this article.
+
+## Prerequisites
+
+- The SAP Cryptographic Library. [Download the SAP Cryptographic Library](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/86921b29cac044d68d30e7b125846860.html).
+- Network connectivity. SNC uses ports *48xx* (where xx is the SAP instance number) to connect to the ABAP server.
+- An SAP server configured to support SNC authentication.
+- A self-signed or enterprise CA-issued certificate for user authentication.
+
+> [!NOTE]
+> This guide is a sample case for configuring SNC. In production environments it is strongly recommended to consult with SAP administrators to devise a deployment plan.
+
+## Configure your SNC deployment
+
+### Export server certificate
+
+1. Sign in to your SAP client and run the **STRUST** transaction.
+
+1. Navigate to and expand the **SNC SAPCryptolib** section in the left-hand pane.
+
+1. Select the system, then select the value of the **Subject** field.
+
+ The server certificate information will be displayed in the **Certificate** section at the bottom of the page.
+
+1. Select the **Export certificate** button at the bottom of the page.
+
+ ![Screenshot showing how to export a server certificate.](./media/configure-snc/export-server-certificate.png)
+
+1. In the **Export Certificate** dialog box, select **Base64** as the file format, select the double boxes icon next to the **File Path** field, choose a filename to export the certificate to, and then select the green checkmark to export the certificate.
++
+### Import your certificate
+
+This section explains how to import a certificate so that it's trusted by your ABAP server. It's important to understand which certificate needs to be imported into the SAP system. In all cases, only the certificates' public keys need to be imported into the SAP system.
+
+- **If the user certificate is self-signed:** Import a user certificate.
+
+- **If user certificate is issued by an enterprise CA:** Import an enterprise CA certificate. In the event that both root and subordinate CA servers are used, import both root and subordinate CA public certificates.
+
+1. Run the **STRUST** transaction.
+
+1. Select **Display<->Change**.
+
+1. Select **Import certificate** at the bottom of the page.
+
+1. In the **Import certificate** dialog box, select the double boxes icon next to the **File path** field and locate the certificate.
+
+ 1. Locate the file containing the certificate (public key only) and select the green checkmark to import the certificate.
+
+ The certificate information is displayed in the **Certificate** section.
+
+ 1. Select **Add to Certificate List**.
+
+ The certificate will appear in the **Certificate List** area.
+
+### Associate certificate with a user account
+
+1. Run the **SM30** transaction.
+
+1. In the **Table/View** field, type **USRACLEXT**, then select **Maintain**.
+
+1. Review the output and identify whether the target user already has an associated SNC name. If not, select **New Entries**.
+
+ ![Screenshot showing how to create a new entry in USER A C L E X T table.](./media/configure-snc/usraclext-new-entry.png)
+
+1. Type the target user's username in the **User** field and the user's certificate subject name prefixed with **p:** in the **SNC Name** field, then select **Save**.
+
+ ![Screenshot showing how to create a new user in USER A C L E X T table.](./media/configure-snc/usraclext-new-user.png)
+
+### Grant logon rights using certificate
+
+1. Run the **SM30** transaction.
+
+1. In the **Table/View** field, type **VSNCSYSACL**, then select **Maintain**.
+
+1. Confirm that the table is cross-client in the informational prompt that appears.
+
+1. In **Determine Work Area: Entry** type **E** in the **Type of ACL entry** field, and select the green checkmark.
+
+1. Review the output and identify whether the target user already has an associated SNC name. If not, select **New Entries**.
+
+ ![Screenshot showing how to create a new entry in the V S N C SYS A C L table.](./media/configure-snc/vsncsysacl-new-entry.png)
+
+1. Enter your system ID and user certificate subject name with a **p:** prefix.
+
+ ![Screenshot showing how to create a new user in the V S N C SYS A C L table.](./media/configure-snc/vsncsysacl-new-user.png)
+
+1. Ensure **Entry for RFC activated** and **Entry for certificate activated** checkboxes are marked, then select **Save**.
+
+### Set up the container
+
+1. Transfer the **libsapcrypto.so** and **sapgenpse** files to the target system where the container will be created.
+
+1. Transfer the client certificate (private and public key) to the target system where the container will be created.
+
+ The client certificate and key can be in .p12, .pfx, or Base-64 .crt and .key format.
+
+1. Transfer the server certificate (public key only) to the target system where the container will be created.
+
+ The server certificate must be in Base-64 .crt format.
+
+1. If the client certificate was issued by an enterprise certification authority, transfer the issuing CA and root CA certificates to the target system where the container will be created.
+
+1. Retrieve the kickstart script from the Microsoft Sentinel GitHub repository:
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
+ ```
+
+1. Change the script's permissions to make it executable:
+
+ ```bash
+ chmod +x ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. Run the script, specifying the following parameters:
+
+    ```bash
+    ./sapcon-sentinel-kickstart.sh \
+    --use-snc \
+    --cryptolib <path to sapcryptolib.so> \
+    --sapgenpse <path to sapgenpse> \
+    # CLIENT CERTIFICATE
+    # If the client certificate is in .crt/.key format, add:
+    --client-cert <path to client certificate public key> \
+    --client-key <path to client certificate private key> \
+    # If the client certificate is in .pfx or .p12 format, add instead:
+    --client-pfx <pfx filename> \
+    --client-pfx-passwd <password> \
+    # If the client certificate was issued by an enterprise CA, add (once for each CA in the trust chain):
+    --cacert <path to CA certificate> \
+    # SERVER CERTIFICATE
+    --server-cert <path to server certificate public key>
+    ```
+
+ For example:
+
+    ```bash
+    ./sapcon-sentinel-kickstart.sh \
+    --use-snc \
+    --cryptolib /home/azureuser/libsapcrypto.so \
+    --sapgenpse /home/azureuser/sapgenpse \
+    # CLIENT CERTIFICATE
+    # If the client certificate is in .crt/.key format:
+    --client-cert /home/azureuser/client.crt \
+    --client-key /home/azureuser/client.key \
+    # If the client certificate is in .pfx or .p12 format:
+    --client-pfx /home/azureuser/client.pfx \
+    --client-pfx-passwd <password> \
+    # If the client certificate was issued by an enterprise CA:
+    --cacert /home/azureuser/issuingca.crt \
+    --cacert /home/azureuser/rootca.crt \
+    # SERVER CERTIFICATE
+    --server-cert /home/azureuser/server.crt
+    ```
+
+For additional information on options available in the kickstart script, review [Reference: Kickstart script](reference-kickstart.md).
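+
+After the kickstart script completes, you can confirm that the agent container is running and review its logs. The following is a minimal sketch that assumes the default `sapcon-<SID>` container naming; substitute your own container name as reported by the script:
+
+```bash
+# List SAP data connector containers
+docker ps --filter "name=sapcon"
+
+# Follow the agent logs
+docker logs -f sapcon-<SID>
+```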
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+
sentinel Configure Transport https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-transport.md
+
+ Title: Configure SAP Transport Management System to connect from Microsoft Sentinel | Microsoft Docs
+description: This article shows you how to configure the SAP Transport Management System in the event of an error or in a lab environment where it hasn't already been configured, in order to successfully deploy the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+++ Last updated : 04/07/2022+
+# Configure SAP Transport Management System to connect from Microsoft Sentinel
++
+This article shows you how to configure the SAP Transport Management System in order to successfully deploy the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+SAP's Transport Management System is normally already configured on production systems. However, in a lab environment, where CRs often haven't been previously installed, configuration may be required.
+
+If you get the following error when running the **STMS_IMPORT** transaction while [preparing your SAP environment](preparing-sap.md), you'll need to configure the Transport Management System.
+
+![Error while running STMS_IMPORT transaction](./media/configure-transport/stms-import-error.png "Error while running STMS_IMPORT transaction")
+
+## Configure Transport Management System
+
+The following steps show the process for configuring the Transport Management System.
+
+> [!IMPORTANT]
+> In production systems, always consult with an SAP administrator before making changes to your SAP environment.
+
+1. Run a new instance of **SAP Logon** and sign in to **Client number** `000` as **user** `DDIC`.
+
+ :::image type="content" source="media/configure-transport/ddic-logon.png" alt-text="Screenshot of logging into SAP as a D D I C account.":::
+
+1. Run the **STMS** transaction:
+
+ Type `STMS` in the field in the upper left corner of the screen and press the **Enter** key.
+
+1. Remove the existing TMS configuration:
+
+ 1. In the **Transport Management System** screen, select **More > Extras > Delete TMS Configuration**, and select **Yes** to confirm.
+
+ :::image type="content" source="media/configure-transport/remove-tms-configuration.png" alt-text="Screenshot of deleting existing T M S configuration.":::
+
+ 1. After deleting the configuration, you will be prompted to configure the TMS transport domain.
+
+ 1. In the **TMS: Configure Transport Domain** dialog, select **Save**.
+
+ 1. In the **Set Password for User TMSADM** dialog, define a complex password and enter it twice. Record the password in a secure location and select the green checkmark to confirm.
+
+1. Configure Transport routes:
+
+ 1. In the **Transport Management System** screen, select **Transport Routes**.
+
+ :::image type="content" source="media/configure-transport/tms-transport-routes.png" alt-text="Screenshot of configuring transport routes.":::
+
+ 1. In the **Change Transport Routes (Active)** screen, select **Display <-> Change**.
+
+ :::image type="content" source="media/configure-transport/transport-routes-display-change.png" alt-text="Screenshot of displaying and changing transport routes." lightbox="media/configure-transport/transport-routes-display-change-lightbox.png":::
+
+ 1. Select **More > Configuration > Standard Configuration > Single System**.
+
+ :::image type="content" source="media/configure-transport/transport-routes-display-singlesystem.png" alt-text="Screenshot of changing a single system transport route." lightbox="media/configure-transport/transport-routes-display-singlesystem.png":::
+
+ 1. In the **Change Transport Routes (Revised)** screen, select **Save**.
+
+ 1. In the **Configuration Short Text** screen, select **Save**.
+
+ 1. In the **Distribute and Activate** screen, select **Yes**.
+
+1. Close the SAP application signed in to client `000` as `DDIC`, and return to the SAP application signed in to client `001`.
+
+## Next steps
+
+Now that you've configured the Transport Management System, you'll be able to successfully complete the `STMS_IMPORT` transaction and you can continue [preparing your SAP environment](preparing-sap.md) for deploying the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Deploy SAP Change Requests and configure authorization](preparing-sap.md#set-up-the-applications)
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
+
+ Title: Deploy and configure the Microsoft Sentinel SAP data connector agent container | Microsoft Docs
+description: This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Continuous Threat Monitoring solution for SAP.
+++ Last updated : 04/12/2022++
+# Deploy and configure the Microsoft Sentinel SAP data connector agent container
++
+This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Continuous Threat Monitoring solution for SAP.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Deployment milestones
+
+Deployment of the SAP continuous threat monitoring solution is divided into the following sections:
+
+1. [Deployment overview](deployment-overview.md)
+
+1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+
+1. [Prepare SAP environment](preparing-sap.md)
+
+1. **Deploy data connector agent (*You are here*)**
+
+1. [Deploy SAP security content](deploy-sap-security-content.md)
+
+1. Optional deployment steps
+ - [Configure auditing](configure-audit.md)
+ - [Configure SAP data connector to use SNC](configure-snc.md)
++
+## Data connector agent deployment overview
+
+The Continuous Threat Monitoring solution for SAP is built on first getting all your SAP log data into Microsoft Sentinel, so that all the other components of the solution can do their jobs. To accomplish this, you need to deploy the SAP data connector agent.
+
+The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted in Azure, in another cloud, or on-premises. You install and configure this container using a *kickstart* script.
+
+The agent connects to your SAP system to pull the logs from it, and then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system; that's why you created a user and a role for the agent in your SAP system in the previous step.
+
+Your SAP authentication infrastructure, and where you deploy your VM, will determine how and where your agent configuration information, including your SAP authentication secrets, is stored. These are the options, in descending order of preference:
+
+- An Azure Key Vault, accessed through an Azure **system-assigned managed identity**
+- An Azure Key Vault, accessed through an Azure AD **registered-application service principal**
+- A plaintext **configuration file**
+
+If your **SAP authentication** infrastructure is based on **PKI**, using **X.509 certificates**, your only option is to use a configuration file. Select the **Configuration file** tab below for the instructions to deploy your agent container.
+
+If not, then your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
+
+- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the **Managed identity** tab below for the instructions to deploy your agent container using managed identity.
+
+ In the event that a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using an [Azure AD registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md), or, as a last resort, a configuration file.
+
+- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using an [Azure AD registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the **Registered application** tab below for the instructions to deploy your agent container.
+
+ If for some reason a registered-application service principal can't be used, you can use a configuration file, though this is not preferred.
+
+## Deploy the data connector agent container
+
+# [Managed identity](#tab/managed-identity)
+
+1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`):
+
+ ```azurecli
+ az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity
+ ```
+
+ The command above will create the VM resource, producing output that looks like this:
+
+ ```json
+ {
+ "fqdns": "",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.Compute/virtualMachines/vmname",
+ "identity": {
+ "systemAssignedIdentity": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
+ "userAssignedIdentities": {}
+ },
+ "location": "westeurope",
+ "macAddress": "00-11-22-33-44-55",
+ "powerState": "VM running",
+ "privateIpAddress": "192.168.136.5",
+ "publicIpAddress": "",
+ "resourceGroup": "resourcegroupname",
+ "zones": ""
+ }
+ ```
+
+1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps. If you need to retrieve it again later, see the sketch after this procedure.
+
+ For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
+
+ > [!IMPORTANT]
+ > After the VM is created, be sure to apply any security requirements and hardening procedures applicable in your organization.
+ >
+
+1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
+
+ ```azurecli
+ kvgp=<KVResourceGroup>
+ kvname=<keyvaultname>
+
+ #Create a key vault
+ az keyvault create \
+ --name $kvname \
+ --resource-group $kvgp
+ ```
+
+ If you'll be using an existing key vault, ignore this step.
+
+1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you run the deployment script in the coming steps.
+
+1. Run the following command to **assign a key vault access policy** to the VM's system-assigned identity that you copied above (substitute actual names for the `<placeholders>`):
+
+ ```azurecli
+ az keyvault set-policy -n <key vault name> -g <key vault resource group> --object-id <VM system-assigned identity> --secret-permissions get list set
+ ```
+
+ This policy will allow the VM to list, read, and write secrets from/to the key vault.
+
+1. **Sign in to the newly created machine** with a user with sudo privileges.
+
+1. **Create and configure a data disk** to be mounted at the Docker root directory.
+
+1. Run the following command to **download and run the deployment Kickstart script**:
+
+ ```bash
+ wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh
+ ```
+
+ The script updates the OS components and installs the Azure CLI and Docker software.
+
+1. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+ To view a list of the available containers use the command: `docker ps -a`.
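+
+If you didn't record the **systemAssignedIdentity** GUID when the VM was created, you can retrieve it at any time. The following is a minimal sketch using the Azure CLI; substitute your own resource group and VM names:
+
+```azurecli
+az vm identity show --resource-group <resource group name> --name <VM Name> --query principalId --output tsv
+```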
+
+# [Registered application](#tab/registered-application)
+
+1. Run the following command to **create and register an application**:
+
+ ```azurecli
+ az ad sp create-for-rbac
+ ```
+
+ The command above will create the application, producing output that looks like this:
+
+ ```json
+ {
+ "appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
+ "displayName": "azure-cli-2022-01-28-17-59-06",
+ "password": "ssssssssssssssssssssssssssssssssss",
+ "tenant": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
+ }
+ ```
+
+1. Copy the **appId**, **tenant**, and **password** from the output. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
+
+1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
+
+ ```azurecli
+ kvgp=<KVResourceGroup>
+ kvname=<keyvaultname>
+
+ #Create a key vault
+ az keyvault create \
+ --name $kvname \
+ --resource-group $kvgp
+ ```
+
+ If you'll be using an existing key vault, ignore this step.
+
+1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
+
+1. Run the following command to **assign a key vault access policy** to the registered application ID that you copied above (substitute actual names or values for the `<placeholders>`):
+
+ ```azurecli
+ az keyvault set-policy -n <key vault name> -g <key vault resource group> --spn <appid> --secret-permissions get list set
+ ```
+
+ For example:
+
+ ```azurecli
+    az keyvault set-policy -n sentinelkeyvault -g sentinelresourcegroup --spn aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --secret-permissions get list set
+ ```
+
+    This policy will allow the container, authenticating with the registered application's service principal, to list, read, and write secrets from/to the key vault. (A quick way to verify this access is sketched at the end of this procedure.)
+
+1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
+ chmod +x ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. **Run the script**, specifying the application ID, secret (the "password"), tenant ID, and key vault name that you copied in the previous steps.
+
+ ```bash
+    ./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss --tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb --kvaultname <key vault name>
+ ```
+
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values.
+
+1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+ To view a list of the available containers use the command: `docker ps -a`.
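+
+To verify that the registered application can access the key vault, you can sign in with its service principal and list the vault's secrets. The following is a minimal sketch that assumes the Azure CLI; the placeholder values are illustrative:
+
+```azurecli
+az login --service-principal --username <appid> --password <password> --tenant <tenant>
+az keyvault secret list --vault-name <key vault name> --output table
+```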
+
+# [Configuration file](#tab/config-file)
+
+1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**:
+
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
+ chmod +x ./sapcon-sentinel-kickstart.sh
+ ```
+
+1. **Run the script**:
+
+ ```bash
+ ./sapcon-sentinel-kickstart.sh --keymode cfgf
+ ```
+
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values.
+
+1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
+
+ ```bash
+ The process has been successfully completed, thank you!
+ ```
+
+ Note the Docker container name in the script output. You'll use it in the next step.
+
+1. Run the following command to **configure the Docker container to start automatically**.
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
+
+ To view a list of the available containers use the command: `docker ps -a`.
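+
+Because the configuration file stores your SAP connection details and credentials in plaintext, consider restricting access to it after the script finishes. The following is a minimal sketch that assumes the agent configuration is kept under `/opt/sapcon/<SID>`, the location used elsewhere in this guide:
+
+```bash
+# Limit the configuration file to the root user only
+chmod 600 /opt/sapcon/<SID>/systemconfig.ini
+```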
++++
+## Deploy the SAP data connector manually
+
+1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
+
+1. Install [Docker](https://www.docker.com/)
+
+1. Use the following commands (replacing <*SID*> with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.ini file into that folder.
+
+    ```bash
+    sid=<SID>
+    mkdir -p /opt/sapcon/$sid
+    cd /opt/sapcon/$sid
+    wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
+    ```
+
+1. Edit the systemconfig.ini file to [configure the relevant settings](reference-systemconfig.md).
+
+1. Run the following commands (replacing <*SID*> with the name of the SAP instance) to retrieve the latest container image, create a new container, and configure it to start automatically.
+
+    ```bash
+    sid=<SID>
+    docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
+    docker create --restart unless-stopped -v /opt/sapcon/$sid/:/sapcon-app/sapcon/config/system --name sapcon-$sid mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
+    ```
+
+1. Run the following command (replacing <*SID*> with the name of the SAP instance) to copy the SDK into the container.
+
+    ```bash
+    sdkfile=<sdkfilename>
+    sid=<SID>
+    docker cp $sdkfile sapcon-$sid:/sapcon-app/inst/
+    ```
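+
+The steps above create the container but don't start it. After the SDK is copied in, start the container. The following is a minimal sketch; substitute your SAP system ID for <*SID*>:
+
+```bash
+sid=<SID>
+docker start sapcon-$sid
+```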
+
+## Next steps
+
+Once the connector is deployed, proceed to deploy the Continuous Threat Monitoring for SAP solution content:
+> [!div class="nextstepaction"]
+> [Deploy SAP security content](deploy-sap-security-content.md)
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
+
+ Title: Deploy SAP security content in Microsoft Sentinel | Microsoft Docs
+description: This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Continuous Threat Monitoring solution for SAP.
+++ Last updated : 04/27/2022++
+# Deploy SAP security content in Microsoft Sentinel
++
+This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Continuous Threat Monitoring solution for SAP.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Deployment milestones
+
+Track your SAP solution deployment journey through this series of articles:
+
+1. [Deployment overview](deployment-overview.md)
+
+1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+
+1. [Prepare SAP environment](preparing-sap.md)
+
+1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+
+1. **Deploy SAP security content (*You are here*)**
+
+1. Optional deployment steps
+ - [Configure auditing](configure-audit.md)
+ - [Configure SAP data connector to use SNC](configure-snc.md)
++
+## Deploy SAP security content
+
+Deploy the [SAP security content](sap-solution-security-content.md) from the Microsoft Sentinel **Content hub** and **Watchlists** areas.
+
+The **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution enables the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
+
+Add SAP-related watchlists to your Microsoft Sentinel workspace manually.
+
+To deploy SAP solution security content, do the following:
+
+1. In Microsoft Sentinel, on the left pane, select **Content hub (Preview)**.
+
+ The **Content hub (Preview)** page displays a filtered, searchable list of solutions.
+
+1. To open the SAP solution page, select **Continuous Threat Monitoring for SAP**.
+
+ :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Continuous Threat Monitoring for SAP' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
+
+1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace where you want to deploy the solution.
+
+1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
+
+ The default name for the workbook is **SAP - System Applications and Products**. Change it in the workbooks tab as needed.
+
+ For more information, see [Microsoft Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md).
+
+1. On the **Review + create tab** pane, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
+
+ > [!TIP]
+ > You can also select **Download a template** for a link to deploy the solution as code.
+
+1. After the deployment is completed, a confirmation message appears at the upper right.
+
+ To display the newly deployed content, go to:
+
+ - **Threat Management** > **Workbooks** > **My workbooks**, to find the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks).
+ - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
+
+1. Add SAP-related watchlists to use in your search, detection rules, threat hunting, and response playbooks. These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. Do the following:
+
+    1. Download SAP watchlists from the Microsoft Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists. (One way to fetch them locally is sketched after this procedure.)
+
+ 1. In the Microsoft Sentinel **Watchlists** area, add the watchlists to your Microsoft Sentinel workspace. Use the downloaded CSV files as the sources, and then customize them as needed for your environment.
+
+ [![SAP-related watchlists added to Microsoft Sentinel.](./media/deploy-sap-security-content/sap-watchlists.png)](./media/deploy-sap-security-content/sap-watchlists.png#lightbox)
+
+ For more information, see [Use Microsoft Sentinel watchlists](../watchlists.md) and [Available SAP watchlists](sap-solution-security-content.md#available-watchlists).
+
+1. In Microsoft Sentinel, go to the **Microsoft Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection:
+
+ [![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](./media/deploy-sap-security-content/sap-data-connector.png)](./media/deploy-sap-security-content/sap-data-connector.png#lightbox)
+
+ SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
+
+ [![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](./media/deploy-sap-security-content/sap-logs-in-sentinel.png)](./media/deploy-sap-security-content/sap-logs-in-sentinel.png#lightbox)
+
+ For more information, see [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md).
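+
+If you prefer to inspect the watchlist CSV files locally before uploading them, one way to fetch them is to clone the repository. The following is a minimal sketch that assumes Git is installed:
+
+```bash
+# Clone only the latest revision and list the SAP watchlist CSV files
+git clone --depth 1 https://github.com/Azure/Azure-Sentinel.git
+ls Azure-Sentinel/Solutions/SAP/Analytics/Watchlists
+```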
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
+
+ Title: Deploy Continuous Threat Monitoring for SAP in Microsoft Sentinel | Microsoft Docs
+description: This article introduces you to the process of deploying the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+++ Last updated : 04/12/2022++
+# Deploy Continuous Threat Monitoring for SAP in Microsoft Sentinel
++
+This article introduces you to the process of deploying the Microsoft Sentinel Continuous Threat Monitoring solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones) below.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+**Continuous Threat Monitoring for SAP** is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems and detect sophisticated threats throughout the business logic and application layers. The solution includes the following components:
+- The SAP data connector for data ingestion.
+- Analytics rules and watchlists for threat detection.
+- Workbooks for interactive data visualization.
+
+The SAP data connector is an agent, installed on a VM or a physical server, that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the SAP Continuous Threat Monitoring solution (the analytics rules, workbooks, and watchlists) to gain insight into your organization's SAP environment and to detect and respond to security threats.
+
+## Deployment milestones
+
+Follow your deployment journey through this series of articles, in which you'll learn how to navigate each of the following steps:
+
+| Milestone | Article |
+| | - |
+| **1. Deployment overview** | **YOU ARE HERE** |
+| **2. Deployment prerequisites** | [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
+| **3. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) |
+| **4. Deploy data connector agent** | [Deploy and configure the data connector agent container](deploy-data-connector-agent-container.md) |
+| **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md)
+| **6. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure SAP data connector to use SNC](configure-snc.md)
+
+> [!NOTE]
+> Extra steps are required to configure communications between the SAP data connector and SAP over a Secure Network Communications (SNC) connection. This is covered in the [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md) section of this guide.
+
+## Next steps
+
+Begin the deployment of the SAP continuous threat monitoring solution by reviewing the prerequisites:
+> [!div class="nextstepaction"]
+> [Prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
+
+ Title: Deploy SAP Change Requests (CRs) and configure authorization | Microsoft Docs
+
+description: This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
+++ Last updated : 04/07/2022+
+# Deploy SAP Change Requests (CRs) and configure authorization
++
+This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Deployment milestones
+
+Track your SAP solution deployment journey through this series of articles:
+
+1. [Deployment overview](deployment-overview.md)
+
+1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+
+1. **Prepare SAP environment (*You are here*)**
+
+1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+
+1. [Deploy SAP security content](deploy-sap-security-content.md)
+
+1. Optional deployment steps
+ - [Configure auditing](configure-audit.md)
+ - [Configure SAP data connector to use SNC](configure-snc.md)
++
+> [!IMPORTANT]
+> - This article presents a [**step-by-step guide**](#deploy-change-requests) to deploying the required CRs. It's recommended for SOC engineers or implementers who may not necessarily be SAP experts.
+> - Experienced SAP administrators who are familiar with the CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900163* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
+
+> [!NOTE]
+>
+> It is *strongly recommended* that the deployment of SAP CRs be carried out by an experienced SAP system administrator.
+>
+> The steps below may differ according to the version of your SAP system, and are provided for demonstration purposes only.
+>
+> Make sure you've copied the details of the **SAP system version**, **System ID (SID)**, **System number**, **Client number**, **IP address**, **administrative username** and **password** before beginning the deployment process.
+>
+> For the following example, the following details are assumed:
+> - **SAP system version:** `SAP ABAP Platform 1909 Developer edition`
+> - **SID:** `A4H`
+> - **System number:** `00`
+> - **Client number:** `001`
+> - **IP address:** `192.168.136.4`
+> - **Administrator user:** `a4hadm`. However, the SSH connection to the SAP system is established with `root` user credentials.
+
+The deployment of Microsoft Sentinel's Continuous Threat Monitoring for SAP solution requires the installation of several CRs. More details about the required CRs can be found in the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of this guide.
+
+To deploy the CRs, follow the steps outlined below:
+
+## Deploy change requests
+
+### Set up the files
+
+1. Sign in to the SAP system using SSH.
+
+1. Transfer the CR files to the SAP system.
+ Alternatively, you can download the files directly onto the SAP system from the SSH prompt. Use the following commands:
+ - Download NLPK900202
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL
+ ```
+
+ - Download NLPK900201
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL
+ ```
+
+ - Download NLPK900206
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900206.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900206.NPL
+ ```
+
+ Note that each CR consists of two files, one beginning with K and one with R.
+
+1. Change the ownership of the files to user *`<sid>`adm* and group *sapsys*. (Substitute your SAP system ID for `<sid>`.)
+
+ ```bash
+ chown <sid>adm:sapsys *.NPL
+ ```
+
+ In our example:
+ ```bash
+ chown a4hadm:sapsys *.NPL
+ ```
+
+1. Copy the cofiles (those beginning with *K*) to the `/usr/sap/trans/cofiles` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
+
+ ```bash
+ cp -p K*.NPL /usr/sap/trans/cofiles/
+ ```
+
+1. Copy the data files (those beginning with R) to the `/usr/sap/trans/data` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
+
+ ```bash
+ cp -p R*.NPL /usr/sap/trans/data/
+ ```
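+
+Optionally, before continuing, verify that the cofiles and data files are in place with the expected ownership; for example:
+
+```bash
+ls -l /usr/sap/trans/cofiles/K*.NPL /usr/sap/trans/data/R*.NPL
+```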
+
+### Set up the applications
+
+1. Launch the **SAP Logon** application and sign in to the SAP GUI console.
+
+1. Run the **STMS_IMPORT** transaction:
+
+ In the **SAP Easy Access** screen, type `STMS_IMPORT` in the field in the upper left corner of the screen and press the **Enter** key.
+
+ :::image type="content" source="media/preparing-sap/stms-import.png" alt-text="Screenshot of running the S T M S import transaction.":::
+
+ > [!CAUTION]
+ > If an error occurs at this step, then you need to configure the SAP transport management system before proceeding any further. [**See this article for instructions**](configure-transport.md).
+
+1. In the **Import Queue** window that appears, select **More > Extras > Other Requests > Add**.
+
+ :::image type="content" source="media/preparing-sap/import-queue-add.png" alt-text="Screenshot of adding an import queue.":::
+
+1. In the **Add Transport Requests to Import Queue** pop-up that appears, select the **Transp. Request** field.
+
+1. The **Transport requests** window will appear and display a list of CRs available to be deployed. Select a CR and select the green checkmark button.
+
+1. Back in the **Add Transport Request to Import Queue** window, select **Continue** (the green checkmark) or press the Enter key.
+
+1. In the **Add Transport Request** confirmation dialog, select **Yes**.
+
+1. Repeat the procedure in the preceding 5 steps to add the remaining Change Requests to be deployed.
+
+1. In the **Import Queue** window, select the **Import All Requests** icon:
+
+ :::image type="content" source="media/preparing-sap/import-all-requests.png" alt-text="Screenshot of importing all requests." lightbox="media/preparing-sap/import-all-requests-lightbox.png":::
+
+1. In **Start Import** window, select the **Target Client** field.
+
+1. The **Input Help..** dialog will appear. Select the number of the client you want to deploy the CRs to (`001` in our example), then select the green checkmark to confirm.
+
+1. Back in the **Start Import** window, select the **Options** tab, mark the **Ignore Invalid Component Version** checkbox, and select the green checkmark to confirm.
+
+ :::image type="content" source="media/preparing-sap/start-import.png" alt-text="Screenshot of the start import window.":::
+
+1. In the **Start import** confirmation dialog, select **Yes** to confirm the import.
+
+1. Back in the **Import Queue** window, select **Refresh** and wait until the import operation completes and the import queue shows as empty.
+
+1. To review the import status, in the **Import Queue** window select **More > Go To > Import History**.
+
+ :::image type="content" source="media/preparing-sap/import-history.png" alt-text="Screenshot of import history.":::
+
+1. The *NPLK900180* change request is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
+
+ :::image type="content" source="media/preparing-sap/import-status.png" alt-text="Screenshot of import status display." lightbox="media/preparing-sap/import-status-lightbox.png":::
+
+ :::image type="content" source="media/preparing-sap/import-warning.png" alt-text="Screenshot of import warning message display.":::
+
+## Configure Sentinel role
+
+After the *NPLK900163* change request is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
+
+In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
+
+The next step is to generate an active role profile for Microsoft Sentinel to use.
+
+1. Run the **PFCG** transaction:
+
+ In the **SAP Easy Access** screen, type `PFCG` in the field in the upper left corner of the screen and press the **Enter** key.
+
+1. In the **Role Maintenance** window, type the role name `/MSFTSEN/SENTINEL_CONNECTOR` in the **Role** field and select the **Change** button (the pencil).
+
+ :::image type="content" source="media/preparing-sap/change-role-change.png" alt-text="Screenshot of choosing a role to change.":::
+
+1. In the **Change Roles** window that appears, select the **Authorizations** tab.
+
+1. In the **Authorizations** tab, select **Change Authorization Data**.
+
+ :::image type="content" source="media/preparing-sap/change-role-change-auth-data.png" alt-text="Screenshot of changing authorization data.":::
+
+1. In the **Information** popup, read the message and select the green checkmark to confirm.
+
+1. In the **Change Role: Authorizations** window, select **Generate**.
+
+ :::image type="content" source="media/preparing-sap/change-role-authorizations.png" alt-text="Screenshot of generating authorizations." lightbox="media/preparing-sap/change-role-authorizations-lightbox.png":::
+
+    Verify that the **Status** field has changed from **Unchanged** to **generated**.
+
+1. Select **Back** (to the left of the SAP logo at the top of the screen).
+
+1. Back in the **Change Roles** window, verify that the **Authorizations** tab displays a green box, then select **Save**.
+
+ :::image type="content" source="media/preparing-sap/change-role-save.png" alt-text="Screenshot of saving changed role.":::
+
+### Create a user
+
+Microsoft Sentinel's Continuous Threat Monitoring solution for SAP requires a user account to connect to your SAP system. Use the following instructions to create a user account and assign it to the role that you created in the previous step.
+
+In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
+
+1. Run the **SU01** transaction:
+
+ In the **SAP Easy Access** screen, type `SU01` in the field in the upper left corner of the screen and press the **Enter** key.
+
+1. In the **User Maintenance: Initial Screen** screen, type in the name of the new user in the **User** field and select **Create Technical User** from the button bar.
+
+1. In the **Maintain Users** screen, select **System** from the **User Type** drop-down list. Create and enter a complex password in the **New Password** and **Repeat Password** fields, then select the **Roles** tab.
+
+1. In the **Roles** tab, in the **Role Assignments** section, enter the full name of the role - `/MSFTSEN/SENTINEL_CONNECTOR` in our example - and press **Enter**.
+
+ After pressing **Enter**, verify that the right-hand side of the **Role Assignments** section populates with data, such as **Change Start Date**.
+
+1. Select the **Profiles** tab, verify that a profile for the role appears under **Assigned Authorization Profiles**, and select **Save**.
+
+### Required ABAP authorizations
+
+The following table lists the ABAP authorizations required to ensure that SAP logs can be correctly retrieved by the account used by Microsoft Sentinel's SAP data connector.
+
+The required authorizations are listed here by log type. Only the authorizations listed for the types of logs you plan to ingest into Microsoft Sentinel are required.
+
+> [!TIP]
+> To create a role with all the required authorizations, deploy the SAP change request *NPLK900163* on the SAP system. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
+
+| Authorization Object | Field | Value |
+| -- | -- | -- |
+| **All RFC logs** | | |
+| S_RFC | FUGR | /OSP/SYSTEM_TIMEZONE |
+| S_RFC | FUGR | ARFC |
+| S_RFC | FUGR | STFC |
+| S_RFC | FUGR | RFC1 |
+| S_RFC | FUGR | SDIFRUNTIME |
+| S_RFC | FUGR | SMOI |
+| S_RFC | FUGR | SYST |
+| S_RFC | FUGR/FUNC | SRFC/RFC_SYSTEM_INFO |
+| S_RFC | FUGR/FUNC | THFB/TH_SERVER_LIST |
+| S_TCODE | TCD | SM51 |
+| **ABAP Application Log** | | |
+| S_APPL_LOG | ACTVT | Display |
+| S_APPL_LOG | ALG_OBJECT | * |
+| S_APPL_LOG | ALG_SUBOBJ | * |
+| S_RFC | FUGR | SXBP_EXT |
+| S_RFC | FUGR | /MSFTSEN/_APPLOG |
+| **ABAP Change Documents Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_CHANGE_DOCS |
+| **ABAP CR Log** | | |
+| S_RFC | FUGR | CTS_API |
+| S_RFC | FUGR | /MSFTSEN/_CR |
+| S_TRANSPRT | ACTVT | Display |
+| S_TRANSPRT | TTYPE | * |
+| **ABAP DB Table Data Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_TD |
+| S_TABU_DIS | ACTVT | Display |
+| S_TABU_DIS | DICBERCLS | &NC& |
+| S_TABU_DIS | DICBERCLS | + Any object required for logging |
+| S_TABU_NAM | ACTVT | Display |
+| S_TABU_NAM | TABLE | + Any object required for logging |
+| S_TABU_NAM | TABLE | DBTABLOG |
+| **ABAP Job Log** | | |
+| S_RFC | FUGR | SXBP |
+| S_RFC | FUGR | /MSFTSEN/_JOBLOG |
+| **ABAP Job Log, ABAP Application Log** | | |
+| S_XMI_PRD | INTERFACE | XBP |
+| **ABAP Security Audit Log - XAL** | | |
+| All RFC | S_RFC | FUGR |
+| S_ADMI_FCD | S_ADMI_FCD | AUDD |
+| S_RFC | FUGR | SALX |
+| S_USER_GRP | ACTVT | Display |
+| S_USER_GRP | CLASS | * |
+| S_XMI_PRD | INTERFACE | XAL |
+| **ABAP Security Audit Log - XAL, ABAP Job Log, ABAP Application Log** | | |
+| S_RFC | FUGR | SXMI |
+| S_XMI_PRD | EXTCOMPANY | Microsoft |
+| S_XMI_PRD | EXTPRODUCT | Microsoft Sentinel |
+| **ABAP Security Audit Log - SAL** | | |
+| S_RFC | FUGR | RSAU_LOG |
+| S_RFC | FUGR | /MSFTSEN/_AUDITLOG |
+| **ABAP Spool Log, ABAP Spool Output Log** | | |
+| S_RFC | FUGR | /MSFTSEN/_SPOOL |
+| **ABAP Workflow Log** | | |
+| S_RFC | FUGR | SWRR |
+| S_RFC | FUGR | /MSFTSEN/_WF |
+| **User Data** | | |
+| S_RFC | FUNC | RFC_READ_TABLE |
++
+## Next steps
+
+You have now fully prepared your SAP environment. The required CRs have been deployed, a role and profile have been provisioned, and a user account has been created and assigned the proper role profile.
+
+Now you are ready to deploy the data connector agent container.
+
+> [!div class="nextstepaction"]
+> [Deploy and configure the data connector agent container](deploy-data-connector-agent-container.md)
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
+
+ Title: Prerequisites for deploying SAP continuous threat monitoring in Microsoft Sentinel | Microsoft Docs
+description: This article lists the prerequisites required for deployment of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+++ Last updated : 04/07/2022+
+# Prerequisites for deploying SAP continuous threat monitoring in Microsoft Sentinel
++
+This article lists the prerequisites required for deployment of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Deployment milestones
+
+Track your SAP solution deployment journey through this series of articles:
+
+1. [Deployment overview](deployment-overview.md)
+
+1. **Deployment prerequisites (*You are here*)**
+
+1. [Prepare SAP environment](preparing-sap.md)
+
+1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+
+1. [Deploy SAP security content](deploy-sap-security-content.md)
+
+1. Optional deployment steps
+ - [Configure auditing](configure-audit.md)
+ - [Configure SAP data connector to use SNC](configure-snc.md)
+
+## Table of prerequisites
+
+To successfully deploy the SAP Continuous Threat Monitoring solution, you must meet the following prerequisites:
+
+### Azure prerequisites
+
+| Prerequisite | Description |
+| - | -- |
+| **Access to Microsoft Sentinel** | Make a note of your Microsoft Sentinel *workspace ID* and *primary key*.<br>You can find these details in Microsoft Sentinel: from the navigation menu, select **Settings** > **Workspace settings** > **Agents management**. Copy the *Workspace ID* and *Primary key* and paste them aside for use during the deployment process. |
+| *[Optional]* **Permissions to create Azure resources** | At a minimum, you must have the necessary permissions to deploy solutions from the Microsoft Sentinel content hub. For more information, see the [Microsoft Sentinel content hub catalog](../sentinel-solutions-catalog.md). |
+| *[Optional]* **Permissions to create an Azure key vault or access an existing one** | The recommended deployment scenario is to use Azure Key Vault to store secrets required to connect to your SAP system. For more information, see the [Azure Key Vault documentation](../../key-vault/index.yml). |
+
+### System prerequisites
+
+| Prerequisite | Description |
+| - | -- |
+| **System architecture** | The data connector component of the SAP solution is deployed as a Docker container, and each SAP client requires its own container instance.<br>The container host can be either a physical machine or a virtual machine, can be located either on-premises or in any cloud. <br>The VM hosting the container ***does not*** have to be located in the same Azure subscription as your Microsoft Sentinel workspace, or even in the same Azure AD tenant. |
+| **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- 2 cores<br>- 4 GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- 2 cores<br>- 8 GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- 4 cores<br>- 16 GB RAM |
+| **Administrative privileges** | Administrative privileges (root) are required on the container host machine. |
+| **Supported Linux versions** | Your Docker container host machine must run one of the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you can [deploy and configure the container manually](deploy-data-connector-agent-container.md#deploy-the-sap-data-connector-manually). |
+| **Network connectivity** | Ensure that the container host has access to: <br>- Microsoft Sentinel <br>- Azure Key Vault (in deployment scenarios where Azure Key Vault is used to store secrets)<br>- The SAP system, via the following TCP ports: *32xx*, *5xx13*, *33xx*, and *48xx* (when SNC is used), where *xx* is the SAP instance number. |
+| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/)<br><br>Make sure that you also have an SAP user account in order to access the SAP software download page. |
+
+### SAP prerequisites
+
+| Prerequisite | Description |
+| - | -- |
+| **Supported SAP versions** | We recommend using [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
+| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sap-sdk-download)).<br>At the link, select **SAP NW RFC SDK 7.50** -> **Linux on X86_64 64BIT** -> **Download the latest version**. |
+| **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` |
+| **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
+
+## SAP change request (CR) deployment
+
+Besides all the prerequisites listed above, a successful deployment of the SAP data connector depends on your SAP environment being properly configured and updated. This includes making sure that the required SAP notes and Microsoft-provided change requests (CRs) are deployed on the SAP system, and that a role is created in SAP to enable access for the SAP data connector.
+
+> [!NOTE]
+> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Retrieve the required CRs from the links in the tables below and proceed to the step-by-step guide.
+>
+> Experienced SAP administrators may choose to create the role manually and assign it the appropriate permissions. In such a case, it is **not** necessary to deploy the CR *NPLK900163*, but you must instead create a role using the recommendations outlined in [Expert: Deploy SAP CRs and deploy required ABAP authorizations](preparing-sap.md#required-abap-authorizations). In any case, you must still deploy CR *NPLK900180* to enable the SAP data connector agent to collect data from your SAP system successfully.
++
+### SAP environment validation steps
+
+1. Ensure the following SAP notes are deployed in your SAP system, according to its version:
+
+| SAP BASIS versions | Required note |
+| | |
+| - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* |
+| - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 | [2173545: CD: CHANGEDOCUMENT_READ_ALL](https://launchpad.support.sap.com/#/notes/2173545)* |
+| - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 to 752 | [2502336 - CD: RSSCD100 - read only from archive, not from database](https://launchpad.support.sap.com/#/notes/2502336)* |
+| | * An SAP account is required to access SAP notes |
+
+2. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR), according to the SAP version in use. A sample download command follows the table:
+
+| SAP BASIS versions | Required CR |
+| | |
+| - 750 and later | *NPLK900180*: [K900180.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900180.NPL), [R900180.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900180.NPL) |
+| - 740 | *NPLK900179*: [K900179.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900179.NPL), [R900179.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900179.NPL) |
+| | |
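+
+For example, to fetch the SAP_BASIS 750+ transport files onto your workstation before importing them, you might run something like the following (the target directory is arbitrary):
+
+```bash
+# Download the NPLK900180 transport files (SAP_BASIS 750 and later)
+wget -P /tmp/sapcrs https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900180.NPL
+wget -P /tmp/sapcrs https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900180.NPL
+```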
+
+3. (Optional) Download and install the following SAP change request from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR) to create a role required for the SAP data connector agent to connect to your SAP system:
+
+| SAP BASIS versions | Required CR |
+| | |
+| Any version | *NPLK900163*\*: [K900163.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900163.NPL), [R900163.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900163.NPL) |
+| | |
+
+> [!NOTE]
+> \* The optional NPLK900163 change request deploys a sample role
++
+## Next steps
+
+After verifying that all the prerequisites have been met, proceed to the next step to deploy the required CRs to your SAP system and configure authorization.
+
+> [!div class="nextstepaction"]
+> [Deploying SAP CRs and configuring authorization](preparing-sap.md)
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
+
+ Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container kickstart deployment script reference | Microsoft Docs
+description: Description of command line options available with kickstart deployment script
+++ Last updated : 03/02/2022++
+# Kickstart script reference
++
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Script overview
+
+Simplify the [deployment of the SAP data connector agent container](deploy-data-connector-agent-container.md) by using the provided **Kickstart script** (available at [Microsoft Azure Sentinel SAP Continuous Threat Monitoring GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)), which can also enable different modes of secrets storage, configure SNC, and more.
+
+## Parameter reference
+
+The following parameters are configurable. You can see examples of how these parameters are used in [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md).
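+
+For orientation, a hedged example invocation is shown below. It assumes the kickstart script filename used in the GitHub repository and uses placeholder values throughout; it stores secrets in Azure Key Vault using the VM's managed identity and connects directly to an ABAP server:
+
+```bash
+# Hypothetical example - script name and all values are placeholders
+./sapcon-sentinel-kickstart.sh \
+    --keymode kvmi \
+    --kvaultname <keyvault name> \
+    --connectionmode abap \
+    --abapserver sapserver01.contoso.com \
+    --systemnr 00 \
+    --sid A4H \
+    --clientnumber 001 \
+    --sdk /tmp/nwrfc750P_8-70002752.zip
+```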
+
+#### Secret storage location
+
+**Parameter name:** `--keymode`
+
+**Parameter values:** `kvmi`, `kvsi`, `cfgf`
+
+**Required:** No. `kvmi` is assumed by default.
+
+**Explanation:** Specifies whether secrets (username, password, Log Analytics workspace ID, and shared key) should be stored in a local configuration file or in Azure Key Vault. Also controls whether authentication to Azure Key Vault is done using the VM's Azure system-assigned managed identity or an Azure AD registered-application identity.
+
+If set to `kvmi`, Azure Key Vault is used to store secrets, and authentication to Azure Key Vault is done using the virtual machine's Azure system-assigned managed identity.
+
+If set to `kvsi`, Azure Key Vault is used to store secrets, and authentication to Azure Key Vault is done using an Azure AD registered-application identity. Usage of `kvsi` mode requires `--appid`, `--appsecret` and `--tenantid` values.
+
+If set to `cfgf`, a configuration file stored locally will be used to store secrets.
+
+#### ABAP server connection mode
+
+**Parameter name:** `--connectionmode`
+
+**Parameter values:** `abap`, `mserv`
+
+**Required:** No. If not specified, the default is `abap`.
+
+**Explanation:** Defines whether the data collector agent should connect to the ABAP server directly, or through a message server. Use `abap` to have the agent connect directly to the ABAP server, whose name you can define using the `--abapserver` parameter (though if you don't, [you will still be prompted for it](#abap-server-address)). Use `mserv` to connect through a message server, in which case you **must** specify the `--messageserverhost`, `--messageserverport`, and `--logongroup` parameters.
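+
+As a hedged illustration (hostnames, port, and logon group are placeholders), connecting through a message server combines the related parameters like this:
+
+```bash
+# Placeholder values - connect via a message server instead of a specific ABAP server
+./sapcon-sentinel-kickstart.sh \
+    --connectionmode mserv \
+    --messageserverhost msgserver01.contoso.com \
+    --messageserverport 3600 \
+    --logongroup "PUBLIC"
+```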
+
+#### Configuration folder location
+
+**Parameter name:** `--configpath`
+
+**Parameter values:** `<path>`
+
+**Required:** No, `/opt/sapcon/<SID>` is assumed if not specified.
+
+**Explanation:** By default, the kickstart script initializes the configuration file and metadata location to `/opt/sapcon/<SID>`. To set an alternate location for the configuration and metadata, use the `--configpath` parameter.
+
+#### ABAP server address
+
+**Parameter name:** `--abapserver`
+
+**Parameter values:** `<servername>`
+
+**Required:** No. If the parameter isn't specified and if the [ABAP server connection mode](#abap-server-connection-mode) parameter is set to `abap`, you will be prompted for the server hostname/IP address.
+
+**Explanation:** Used only if the connection mode is set to `abap`, this parameter contains the Fully Qualified Domain Name (FQDN), short name, or IP address of the ABAP server to connect to.
+
+#### System instance number
+
+**Parameter name:** `--systemnr`
+
+**Parameter values:** `<system number>`
+
+**Required:** No. If not specified, the user will be prompted for the system number.
+
+**Explanation:** Specifies the SAP system instance number to connect to.
+
+#### System ID
+
+**Parameter name:** `--sid`
+
+**Parameter values:** `<SID>`
+
+**Required:** No. If not specified, the user will be prompted for the system ID.
+
+**Explanation:** Specifies the SAP system ID to connect to.
+
+#### Client number
+
+**Parameter name:** `--clientnumber`
+
+**Parameter values:** `<client number>`
+
+**Required:** No. If not specified, the user will be prompted for the client number.
+
+**Explanation:** Specifies the client number to connect to.
++
+#### Message Server Host
+
+**Parameter name:** `--messageserverhost`
+
+**Parameter values:** `<servername>`
+
+**Required:** Yes, if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
+
+**Explanation:** Specifies the hostname/IP address of the message server to connect to. Can **only** be used if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
+
+#### Message Server Port
+
+**Parameter name:** `--messageserverport`
+
+**Parameter values:** `<portnumber>`
+
+**Required:** Yes, if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
+
+**Explanation:** Specifies the service name (port) of the message server to connect to. Can **only** be used if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
+
+#### Logon group
+
+**Parameter name:** `--logongroup`
+
+**Parameter values:** `<logon group>`
+
+**Required:** Yes, if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
+
+**Explanation:** Specifies the logon group to use when connecting to a message server. Can be used **only** if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`. If the logon group name contains spaces, they should be passed in double quotes, as in the example `--logongroup "my logon group"`.
+
+#### Logon username
+
+**Parameter name:** `--sapusername`
+
+**Parameter values:** `<username>`
+
+**Required:** No. If not supplied, the user will be prompted for a username, unless SNC (X.509) is used for authentication.
+
+**Explanation:** Username that will be used to authenticate to the ABAP server.
+
+#### Logon password
+
+**Parameter name:** `--sappassword`
+
+**Parameter values:** `<password>`
+
+**Required:** No. If not supplied, the user will be prompted for a password, unless SNC (X.509) is used for authentication. Password input will be masked.
+
+**Explanation:** Password that will be used to authenticate to the ABAP server.
+
+#### NetWeaver SDK file location
+
+**Parameter name:** `--sdk`
+
+**Parameter values:** `<filename>`
+
+**Required:** No. The script will attempt to locate an nwrfc*.zip file in the current folder. If it isn't found, the user will be prompted to supply a valid NetWeaver SDK archive file.
+
+**Explanation:** NetWeaver SDK file path. A valid SDK is required for the data collector to operate. For more information, see [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#table-of-prerequisites).
+
+#### Enterprise Application ID
+
+**Parameter name:** `--appid`
+
+**Parameter values:** `<guid>`
+
+**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
+
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application ID.
+
+#### Enterprise Application secret
+
+**Parameter name:** `--appsecret`
+
+**Parameter values:** `<secret>`
+
+**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
+
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application secret.
+
+#### Tenant ID
+
+**Parameter name:** `--tenantid`
+
+**Parameter values:** `<guid>`
+
+**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
+
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the Azure Active Directory Tenant ID.
+
+#### Key Vault Name
+
+**Parameter name:** `--kvaultname`
+
+**Parameter values:** `<key vaultname>`
+
+**Required:** No. If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, the script will prompt for the value if not supplied.
+
+**Explanation:** If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, then the key vault name (in FQDN format) should be entered here.
+
+#### Log Analytics workspace ID
+
+**Parameter name:** `--loganalyticswsid`
+
+**Parameter values:** `<id>`
+
+**Required:** No. If not supplied, the script will prompt for the workspace ID.
+
+**Explanation:** Log Analytics workspace ID where the data collector will send the data to. To locate the workspace ID, locate the Log Analytics workspace in the Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
+
+#### Log Analytics key
+
+**Parameter name:** `--loganalyticskey`
+
+**Parameter values:** `<key>`
+
+**Required:** No. If not supplied, script will prompt for the workspace key. Input will be masked in this case.
+
+**Explanation:** Primary or secondary key of the Log Analytics workspace where data collector will send the data to. To locate the workspace Primary or Secondary Key, locate the Log Analytics workspace in Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
+
+#### Use X.509 (SNC) for authentication
+
+**Parameter name:** `--use-snc`
+
+**Parameter values:** None
+
+**Required:** No. If not specified, username and password will be used for authentication. If specified, the `--cryptolib` and `--sapgenpse` switches are required, along with either `--client-cert` and `--client-key`, or `--client-pfx` and `--client-pfx-passwd`, as well as `--server-cert` and, in certain cases, `--cacert`.
+
+**Explanation:** This switch specifies that X.509 authentication will be used to connect to the ABAP server, rather than username/password authentication. For more information, see the [SNC configuration documentation](configure-snc.md).
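+
+As a rough sketch (all file paths are placeholders), an X.509-based run passes the SNC-related switches together, for example:
+
+```bash
+# Placeholder paths - X.509 (SNC) authentication instead of username/password
+./sapcon-sentinel-kickstart.sh \
+    --use-snc \
+    --cryptolib /opt/sap/sec/libsapcrypto.so \
+    --sapgenpse /opt/sap/sec/sapgenpse \
+    --client-cert /opt/sap/sec/client.crt \
+    --client-key /opt/sap/sec/client.key \
+    --cacert /opt/sap/sec/issuingCA.crt \
+    --cacert /opt/sap/sec/rootCA.crt \
+    --server-cert /opt/sap/sec/server.crt
+```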
+
+#### SAP Cryptographic library path
+
+**Parameter name:** `--cryptolib`
+
+**Parameter values:** `<sapcryptolibfilename>`
+
+**Required:** Yes, if `--use-snc` is specified.
+
+**Explanation:** Location and filename of SAP Cryptographic library (libsapcrypto.so).
+
+#### SAPGENPSE tool path
+
+**Parameter name:** `--sapgenpse`
+
+**Parameter values:** `<sapgenpsefilename>`
+
+**Required:** Yes, if `--use-snc` is specified.
+
+**Explanation:** Location and filename of the sapgenpse tool for creation and management of PSE-files and SSO-credentials.
+
+#### Client certificate public key path
+
+**Parameter name:** `--client-cert`
+
+**Parameter values:** `<client certificate filename>`
+
+**Required:** Yes, if `--use-snc` **and** certificate is in .crt/.key base-64 format.
+
+**Explanation:** Location and filename of the base-64 client public certificate. If client certificate is in .pfx format, use `--client-pfx` switch instead.
+
+#### Client certificate private key path
+
+**Parameter name:** `--client-key`
+
+**Parameter values:** `<client key filename>`
+
+**Required:** Yes, if `--use-snc` is specified **and** key is in .crt/.key base-64 format.
+
+**Explanation:** Location and filename of the base-64 client private key. If client certificate is in .pfx format, use `--client-pfx` switch instead.
+
+#### Issuing/root Certification Authority certificates
+
+**Parameter name:** `--cacert`
+
+**Parameter values:** `<trusted ca cert>`
+
+**Required:** Yes, if `--use-snc` is specified **and** the certificate is issued by an enterprise certification authority.
+
+**Explanation:** If the certificate is self-signed, it has no issuing CA, so there is no trust chain that needs to be validated. If the certificate is issued by an enterprise CA, the issuing CA certificate and any higher-level CA certificates need to be validated. Use separate instances of the `--cacert` switch for each CA in the trust chain, and supply the full filenames of the public certificates of the enterprise certificate authorities.
+
+#### Client PFX certificate path
+
+**Parameter name:** `--client-pfx`
+
+**Parameter values:** `<pfx filename>`
+
+**Required:** Yes, if `--use-snc` **and** key is in .pfx/.p12 format.
+
+**Explanation:** Location and filename of the pfx client certificate.
+
+#### Client PFX certificate password
+
+**Parameter name:** `--client-pfx-passwd`
+
+**Parameter values:** `<password>`
+
+**Required:** Yes, if `--use-snc` is used, certificate is in .pfx/.p12 format, and certificate is protected by a password.
+
+**Explanation:** PFX/P12 file password.
+
+#### Server certificate
+
+**Parameter name:** `--server-cert`
+
+**Parameter values:** `<server certificate filename>`
+
+**Required:** Yes, if `--use-snc` is used.
+
+**Explanation:** ABAP server certificate full path and name.
+
+#### HTTP proxy server URL
+
+**Parameter name:** `--http-proxy`
+
+**Parameter values:** `<proxy url>`
+
+**Required:** No
+
+**Explanation:** Containers that cannot establish a connection to Microsoft Azure services directly and require a connection via a proxy server need the `--http-proxy` switch to define the proxy URL for the container. The format of the proxy URL is `http://hostname:port`.
+
+#### Confirm all prompts
+
+**Parameter name:** `--confirm-all-prompts`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** If the `--confirm-all-prompts` switch is specified, the script will not pause for any user confirmations and will only prompt if user input is required. Use the `--confirm-all-prompts` switch to achieve a zero-touch deployment.
+
+#### Use preview build of the container
+
+**Parameter name:** `--preview`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** By default, the container deployment kickstart script deploys the container with the `:latest` tag. Public preview features are published to the `:latest-preview` tag. To ensure that the container deployment script uses the public preview version of the container, specify the `--preview` switch.
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
+
+ Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container configuration file reference | Microsoft Docs
+description: Description of settings available in systemconfig.ini file
+++ Last updated : 03/03/2022+
+# Systemconfig.ini file reference
++
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The *systemconfig.ini* file is used to configure the behavior of the data collector. Configuration options are grouped into several sections. This article lists the available options and explains how to use them.
+
+## Systemconfig configuration file sections
+
+| Section name | Description |
+| | -- |
+| [Secrets Source](#secrets-source-section) | This section defines where credentials are stored. |
+| [ABAP Central Instance](#abap-central-instance-section) | This section defines general options of the SAP instance to connect to. |
+| [Azure Credentials](#azure-credentials-section) | This section defines credentials to connect to Azure Log Analytics. |
+| [File Extraction ABAP](#file-extraction-abap-section) | This section defines logs and credentials that are extracted from ABAP server using SAPControl interface. |
+| [File Extraction JAVA](#file-extraction-java-section) | This section defines logs and credentials that are extracted from JAVA server using SAPControl interface. |
+| [Logs Activation Status](#logs-activation-status-section) | This section defines which logs are extracted from ABAP. |
+| [Connector Configuration](#connector-configuration-section) | This section defines miscellaneous connector options. |
+| [ABAP Table Selector](#abap-table-selector-section) | This section defines which User Master Data logs get extracted from the ABAP system. |
+
+## Secrets Source section
+```systemconfig.ini
+[Secrets Source]
+secrets=AZURE_KEY_VAULT|DOCKER_SECRETS|DOCKER_FIXED
+
+keyvault=<vaultname>
+# Azure Keyvault name, in case secrets = AZURE_KEY_VAULT
+
+intprefix=<prefix>
+# intprefix - Prefix for variables created in Azure Key Vault
+```
+
+## ABAP Central Instance section
+```systemconfig.ini
+[ABAP Central Instance]
+auth_type=PLAIN_USER_AND_PASSWORD|SNC_WITH_X509
+# Authentication type - username/password authentication, or X.509 authentication
+
+ashost=<hostname>
+# FQDN, hostname, or IP address of the ABAP server
+
+mshost=<hostname>
+# FQDN, hostname, or IP address of the Message server
+
+msserv=<portnumber>
+# Port number, or service name (from /etc/services) of the message server
+
+group=<logon group>
+# Logon group of the message server
+
+sysnr=<Instance number>
+# Instance number of the ABAP server
+
+sysid=<SID>
+# System ID of the ABAP server
+
+client=<Client Number>
+# Client number of the ABAP server
+
+user=<username>
+# Username to use to connect to ABAP server. Used only when secrets setting in Secrets Source section is set to DOCKER_FIXED
+
+passwd=<password>
+# Password to use to connect to ABAP server. Used only when secrets setting in Secrets Source section is set to DOCKER_FIXED
+
+snc_lib=<path to libsapcrypto>
+# Full path to the libsapcrypto.so file
+# Used when SNC is in use
+# !!! Note: the path must be valid within the container!!!
+
+snc_partnername=<distinguished name of the server certificate>
+# p: -prefixed valid SAP server SNC name, which is equal to Distinguished Name(DN) of SAP server PSE
+# Used when SNC is in use
+
+snc_qop=<SNC protection level>
+# More information available at https://docs.oracle.com/cd/E19509-01/820-5064/ggrpj/index.html
+# Used when SNC is in use
+
+snc_myname=<distinguished name of the client certificate>
+# p: -prefixed valid client SNC name, which is equal to Distinguished Name(DN) of client PSE
+# Used when SNC is in use
+
+x509cert=<server certificate>
+# Base64-encoded server certificate value in a single line (with the leading "-----BEGIN CERTIFICATE-----" and trailing "-----END CERTIFICATE-----" lines removed)
+```
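+
+For orientation only, a minimal sketch of this section for a direct ABAP connection with username/password authentication might look as follows. The hostname, SID, client, and instance number are placeholder values, and in most deployments the credentials themselves are stored in Azure Key Vault rather than in this file:
+
+```systemconfig.ini
+[ABAP Central Instance]
+auth_type=PLAIN_USER_AND_PASSWORD
+ashost=sapserver01.contoso.com
+sysnr=00
+sysid=A4H
+client=001
+```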
+
+## Azure Credentials section
+```systemconfig.ini
+[Azure Credentials]
+loganalyticswsid=<workspace ID>
+# Log Analytics workspace ID. Used only when secrets setting in Secrets Source section is set to DOCKER_FIXED
+
+publickey=<publickey>
+# Log Analytics workspace primary or secondary key. Used only when secrets setting in Secrets Source section is set to DOCKER_FIXED
+```
+
+## File Extraction ABAP section
+```systemconfig.ini
+[File Extraction ABAP]
+osuser = <SAPControl username>
+# Username to use to authenticate to SAPControl
+
+ospasswd = <SAPControl password>
+# Password to use to authenticate to SAPControl
+
+appserver = <server>
+#SAPControl server hostname/fqdn/IP address
+
+instance = <instance>
+#SAPControl instance name
+
+abapseverity = <severity>
+# 0 = All logs ; 1 = Warning ; 2 = Error
+
+abaptz = <timezone>
+# GMT FORMAT
+# example - For OS Timezone = NZST (New Zealand Standard Time) use abaptz = GMT+12
+
+```
+
+## File Extraction JAVA section
+```systemconfig.ini
+[File Extraction JAVA]
+javaosuser = <username>
+# Username to use to authenticate to JAVA server
+
+javaospasswd = <password>
+# Password to use to authenticate to JAVA server
+
+javaappserver = <server>
+#JAVA server hostname/fqdn/IP address
+
+javainstance = <instance number>
+#JAVA instance number
+
+javaseverity = <severity>
+# 0 = All logs ; 1 = Warning ; 2 = Error
+
+javatz = <timezone>
+# GMT FORMAT
+# example - For OS Timezone = NZST (New Zealand Standard Time) use javatz = GMT+12
+```
+
+## Logs Activation Status section
+```systemconfig.ini
+[Logs Activation Status]
+# The following logs are retrieved using RFC interface
+# Specify True or False to configure whether log should be collected using the mentioned interface
+ABAPAuditLog = <True/False>
+ABAPJobLog = <True/False>
+ABAPSpoolLog = <True/False>
+ABAPSpoolOutputLog = <True/False>
+ABAPChangeDocsLog = <True/False>
+ABAPAppLog = <True/False>
+ABAPWorkflowLog = <True/False>
+ABAPCRLog = <True/False>
+ABAPTableDataLog = <True/False>
+# The following logs are retrieved using SAP Control interface and OS Login
+ABAPFilesLogs = <True/False>
+SysLog = <True/False>
+ICM = <True/False>
+WP = <True/False>
+GW = <True/False>
+# The following logs are retrieved using SAP Control interface and OS Login
+JAVAFilesLogs = <True/False>
+```
+
+## Connector Configuration section
+```systemconfig.ini
+[Connector Configuration]
+extractuseremail = <True/False>
+apiretry = <True/False>
+auditlogforcexal = <True/False>
+auditlogforcelegacyfiles = <True/False>
+
+timechunk = <value>
+# Default timechunk value is 60 (minutes). For certain tables, the data connector retrieves data from the ABAP server in timechunks (collecting all events that occurred within a certain time window). On busy systems this may result in large datasets, so to reduce the memory and CPU utilization footprint, consider configuring a smaller value.
+```
+
+## ABAP Table Selector section
+```systemconfig.ini
+[ABAP Table Selector]
+# Specify True or False to configure whether table should be collected from the SAP system
+AGR_TCODES_FULL = <True/False>
+USR01_FULL = <True/False>
+USR02_FULL = <True/False>
+USR02_INCREMENTAL = <True/False>
+AGR_1251_FULL = <True/False>
+AGR_USERS_FULL = <True/False>
+AGR_USERS_INCREMENTAL = <True/False>
+AGR_PROF_FULL = <True/False>
+UST04_FULL = <True/False>
+USR21_FULL = <True/False>
+ADR6_FULL = <True/False>
+ADCP_FULL = <True/False>
+USR05_FULL = <True/False>
+USGRP_USER_FULL = <True/False>
+USER_ADDR_FULL = <True/False>
+DEVACCESS_FULL = <True/False>
+AGR_DEFINE_FULL = <True/False>
+AGR_DEFINE_INCREMENTAL = <True/False>
+PAHI_FULL = <True/False>
+AGR_AGRS_FULL = <True/False>
+USRSTAMP_FULL = <True/False>
+USRSTAMP_INCREMENTAL = <True/False>
+```
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Reference Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-update.md
+
+ Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container update script reference | Microsoft Docs
+description: Description of command line options available with update deployment script
+++ Last updated : 03/02/2022++
+# Update script reference
++
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The SAP data collector agent container uses an update script (available at [Microsoft Azure Sentinel SAP Continuous Threat Monitoring GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)) to simplify the update process.
+
+This article shows how the script's behavior can be customized by configuring its parameters.
+
+## Script overview
+
+During the update process, the script identifies any containers running the SAP data collector agent, downloads an updated container image from the Azure Container Registry, copies the mounted directory settings and environment variables, renames the existing container with an `-OLD` suffix, and finally creates a container using the updated image. The script then starts the container with an additional `--sapconinstanceupdate` switch to verify that the updated container can start and connect to the SAP system properly. When the container reports a successful start, the script removes the old container and recreates the new container without the `--sapconinstanceupdate` switch, so that it starts in normal operation mode and continues to collect data from the SAP system.
+
+## Parameter reference
+
+#### Confirm all prompts
+**Parameter name:** `--confirm-all-prompts`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** If the `--confirm-all-prompts` switch is specified, the script will not pause for any user confirmations. Use the `--confirm-all-prompts` switch to achieve a zero-touch deployment.
+
+#### Use preview build of the container
+**Parameter name:** `--preview`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** By default, the container update script deploys the container with the `:latest` tag. Public preview features are published to the `:latest-preview` tag. To ensure that the container update script uses the public preview version of the container, specify the `--preview` switch.
+
+#### Do not perform a container connectivity test
+**Parameter name:** `--no-testrun`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** By default, the container update script performs a "test run" of the updated container to verify that it can successfully connect to the SAP system. To skip this test, specify the `--no-testrun` switch. In that case, the script re-creates the containers using the new image without validating that they can successfully start and connect to SAP. Use this switch with caution.
+
+#### Specify a custom SDK location
+**Parameter name:** `--sdk`
+
+**Parameter values:** `<SDK file full path>`
+
+**Required:** No
+
+**Explanation:** By default, the update script extracts the SDK zip file from the existing container and copies it to the newly created container. If you need to update the NetWeaver SDK version together with the container update, use the `--sdk` switch, specifying the full path of the SDK file.
+
+#### Force container update, even if version is the same
+**Parameter name:** `--force`
+
+**Parameter values:** None
+
+**Required:** No
+
+**Explanation:** Update the container even if the image version used for the existing container is the same as the image available from Microsoft.
+
+#### Do container selective update
+**Parameter name:** `--containername`
+
+**Parameter values:** `Container name`
+
+**Required:** No
+
+**Explanation:** By default, the update script updates all containers running Continuous Threat Monitoring for SAP. To update one or more specific containers, specify the `--containername <containername>` switch. The switch can be specified multiple times, for example, `--containername sapcon-A4H --containername sapcon-QQ1 --containername sapcon-QAT`. In that case, only the specified containers are updated. If a specified container name doesn't exist, the script skips it.
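+
+Putting a few of these switches together, a hedged example (the script filename is assumed here; check the GitHub repository for the exact name) that updates only two containers, refreshes the NetWeaver SDK, and runs without pausing for confirmation might look like this:
+
+```bash
+# Hypothetical invocation - update two specific containers and supply a newer SDK
+./sapcon-instance-update.sh \
+    --containername sapcon-A4H \
+    --containername sapcon-QQ1 \
+    --sdk /tmp/nwrfc750P_8-70002752.zip \
+    --confirm-all-prompts
+```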
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
+
+ Title: Microsoft Sentinel SAP solution deployment troubleshooting | Microsoft Docs
+description: Learn how to troubleshoot specific issues that may occur in your Microsoft Sentinel SAP solution deployment.
++++ Last updated : 11/09/2021++
+# Troubleshooting your Microsoft Sentinel SAP solution deployment
++
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Useful Docker commands
+
+When troubleshooting your SAP data connector, you may find the following commands useful:
+
+|Function |Command |
+|||
+|**Stop the Docker container** | `docker stop sapcon-[SID]` |
+|**Start the Docker container** |`docker start sapcon-[SID]` |
+|**View Docker system logs** | `docker logs -f sapcon-[SID]` |
+|**Enter the Docker container** | `docker exec -it sapcon-[SID] bash` |
++
+For more information, see the [Docker CLI documentation](https://docs.docker.com/engine/reference/commandline/docker/).
+
+## Review system logs
+
+We highly recommend that you review the system logs after installing or [resetting the data connector](#reset-the-sap-data-connector).
+
+Run:
+
+```bash
+docker logs -f sapcon-[SID]
+```
+
+## Enable debug mode printing
+
+**To enable debug mode printing**:
+
+1. Copy the following file to your **sapcon/[SID]** directory, and then rename it as `loggingconfig.yaml`: https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/loggingconfig_DEV.yaml
+
+1. [Reset the SAP data connector](#reset-the-sap-data-connector).
+
+For example, for SID `A4H`:
+
+```bash
+wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/loggingconfig_DEV.yaml
+cp loggingconfig_DEV.yaml ~/sapcon/A4H/loggingconfig.yaml
+docker restart sapcon-A4H
+```
+
+**To disable debug mode printing again, run**:
+
+```bash
+mv loggingconfig.yaml loggingconfig.old
+ls
+docker restart sapcon-[SID]
+```
+
+## View all Docker execution logs
+
+To view all Docker execution logs for your Microsoft Sentinel SAP data connector deployment, run one of the following commands:
+
+```bash
+docker exec -it sapcon-[SID] bash
+# Then, inside the container:
+cd /sapcon-app/sapcon/logs
+```
+
+or
+
+```bash
+docker exec -it sapcon-[SID] cat /sapcon-app/sapcon/logs/[FILE_LOGNAME]
+```
+
+Output similar to the following should be displayed:
+
+```bash
+Logs directory:
+root@644c46cd82a9:/sapcon-app# ls sapcon/logs/ -l
+total 508
+-rwxr-xr-x 1 root root 0 Mar 12 09:22 ' __init__.py'
+-rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPAppLog.log
+-rw-r--r-- 1 root root 1056 Mar 12 16:01 ABAPAuditLog.log
+-rw-r--r-- 1 root root 465 Mar 12 16:01 ABAPCRLog.log
+-rw-r--r-- 1 root root 515 Mar 12 16:01 ABAPChangeDocsLog.log
+-rw-r--r-- 1 root root 282 Mar 12 16:01 ABAPJobLog.log
+-rw-r--r-- 1 root root 480 Mar 12 16:01 ABAPSpoolLog.log
+-rw-r--r-- 1 root root 525 Mar 12 16:01 ABAPSpoolOutputLog.log
+-rw-r--r-- 1 root root 0 Mar 12 15:51 ABAPTableDataLog.log
+-rw-r--r-- 1 root root 495 Mar 12 16:01 ABAPWorkflowLog.log
+-rw-r--r-- 1 root root 465311 Mar 14 06:54 API.log # view this log to see submits of data into Microsoft Sentinel
+-rw-r--r-- 1 root root 0 Mar 12 15:51 LogsDeltaManager.log
+-rw-r--r-- 1 root root 0 Mar 12 15:51 PersistenceManager.log
+-rw-r--r-- 1 root root 4830 Mar 12 16:01 RFC.log
+-rw-r--r-- 1 root root 5595 Mar 12 16:03 SystemAdmin.log
+```
+
+To copy your logs to the host operating system, run:
+
+```bash
+docker cp sapcon-[SID]:/sapcon-app/sapcon/logs /directory
+```
+
+For example:
+
+```bash
+docker cp sapcon-A4H:/sapcon-app/sapcon/logs /tmp/sapcon-logs-extract
+```
+
+## Review and update the SAP data connector configuration
+
+If you want to check the SAP data connector configuration file and make manual updates, perform the following steps:
+
+1. On your VM, in the user's home directory, open the **~/sapcon/[SID]/systemconfig.ini** file.
+1. Update the configuration if needed, and then restart the container:
+
+ ```bash
+ docker restart sapcon-[SID]
+ ```
+
+## Reset the SAP data connector
+
+The following steps reset the connector and reingest SAP logs from the last 24 hours.
+
+1. Stop the connector. Run:
+
+ ```bash
+ docker stop sapcon-[SID]
+ ```
+
+1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+
+ ```bash
+ cd ~/sapcon/<SID>
+ ls
+ mv metadata.db metadata.old
+ ```
+
+ > [!NOTE]
+ > The **metadata.db** file contains the last timestamp for each of the logs, and works to prevent duplication.
+
+1. Start the connector again. Run:
+
+ ```bash
+ docker start sapcon-[SID]
+ ```
+
+Make sure to [Review system logs](#review-system-logs) when you're done.
+++
+## Common issues
+
+After having deployed both the SAP data connector and security content, you may experience the following errors or issues:
+
+### Corrupt or missing SAP SDK file
+
+This error may occur when the connector fails to boot with PyRfc-related errors, or when zip-related error messages are shown.
+
+1. Reinstall the SAP SDK.
+1. Verify that you're using the correct Linux 64-bit version. As of this writing, the release filename is **nwrfc750P_8-70002752.zip**.
+
+If you installed the data connector manually, make sure that you copied the SDK file into the Docker container.
+
+Run:
+
+```bash
+# Copy the SDK zip file into the container
+docker cp nwrfc750P_8-70002752.zip sapcon-[SID]:/sapcon-app/inst/
+```
+
+### ABAP runtime errors appear on a large system
+
+If ABAP runtime errors appear on large systems, try setting a smaller chunk size:
+
+1. Edit the **sapcon/SID/systemconfig.ini** file and define `timechunk = 5`.
+2. [Reset the SAP data connector](#reset-the-sap-data-connector).
+
+> [!NOTE]
+> The **timechunk** size is defined in minutes.
+
+### Empty or no audit log retrieved, with no special error messages
+
+1. Check that audit logging is enabled in SAP.
+1. Verify the **SM19** or **RSAU_CONFIG** transactions.
+1. Enable any events as needed.
+1. Verify whether messages arrive and exist in the SAP **SM20** or **RSAU_READ_LOG**, without any special errors appearing on the connector log.
++
+### Incorrect Microsoft Sentinel workspace ID or key
+
+If you realize that you've entered an incorrect workspace ID or key in your deployment script, update the credentials stored in Azure Key Vault.
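+
+For example, if your deployment follows the secret-naming convention used elsewhere in this solution's documentation (`<SID>-LOG_WS_ID` and `<SID>-LOG_WS_PUBLICKEY`), you can update the values with the Azure CLI. The SID and vault name below are placeholders; adjust them to match your environment:
+
+```azurecli
+az keyvault secret set --name <SID>-LOG_WS_ID --value "<correct workspace ID>" --vault-name <keyvaultname>
+az keyvault secret set --name <SID>-LOG_WS_PUBLICKEY --value "<correct workspace key>" --vault-name <keyvaultname>
+```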
+
+After verifying your credentials in Azure KeyVault, restart the container:
+
+```bash
+docker restart sapcon-[SID]
+```
+
+### Incorrect SAP ABAP user credentials in a fixed configuration
+
+A fixed configuration is when the password is stored directly in the **systemconfig.ini** configuration file.
+
+If your credentials there are incorrect, verify your credentials.
+
+Use base64 encoding to encode the user and password. You can use online tools to encode your credentials, such as https://www.base64encode.org/.
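+
+If you prefer not to paste credentials into a website, a quick local alternative (a sketch; the values shown are placeholders) is the `base64` utility available on most Linux hosts:
+
+```bash
+# -n prevents a trailing newline from being included in the encoded value
+echo -n '<your SAP username>' | base64
+echo -n '<your SAP password>' | base64
+```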
+
+### Incorrect SAP ABAP user credentials in key vault
+
+Check your credentials and fix them as needed, applying the correct values to the **ABAPUSER** and **ABAPPASS** values in Azure Key Vault.
+
+Then, restart the container:
+
+```bash
+docker restart sapcon-[SID]
+```
++
+### Missing ABAP (SAP user) permissions
+
+If you get an error message similar to: **..Missing Backend RFC Authorization..**, your SAP authorizations and role were not applied properly.
+
+1. Ensure that the **MSFTSEN/SENTINEL_CONNECTOR** role was imported as part of a [change request](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) transport, and applied to the connector user.
+
+1. Run the role generation and user comparison process using the SAP transaction PFCG.
+
+### Missing data in your workbooks or alerts
+
+If you find that you're missing data in your Microsoft Sentinel workbooks or alerts, ensure that the **Auditlog** policy is properly enabled on the SAP side, with no errors in the log file.
+
+Use the **RSAU_CONFIG_LOG** transaction for this step.
++
+### Missing SAP change request
+
+If you see errors that you're missing a required SAP change request, make sure you've imported the correct SAP change request for your system.
+
+For more information, see the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps).
+
+### Network connectivity issues
+
+If you're having network connectivity issues to the SAP environment or to Microsoft Sentinel, check your network connectivity to make sure data is flowing as expected.
+
+Common issues include:
+
+- Firewalls between the docker container and the SAP hosts may be blocking traffic. The SAP host receives communication via the following TCP ports, which must be open: **32xx**, **5xx13**, and **33xx**, where **xx** is the SAP instance number.
+
+- Outbound communication from your SAP host to Microsoft Container Registry or Azure requires proxy configuration. This typically impacts the installation and requires you to configure the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. You can also pass environment variables into the Docker container when you create it, by adding the `-e` flag to the docker `create` / `run` command, as shown in the sketch after this list.
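+
+A hedged sketch of passing proxy settings when creating the container (the proxy address is a placeholder, and the remaining options depend on your deployment):
+
+```bash
+# Placeholder proxy address - pass proxy settings into the container at creation time
+docker run -d --restart unless-stopped \
+    -e HTTP_PROXY=http://proxy.contoso.com:3128 \
+    -e HTTPS_PROXY=http://proxy.contoso.com:3128 \
+    --name sapcon-<SID> <other options> <image>
+```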
+
+### Other unexpected issues
+
+If you have unexpected issues not listed in this article, try the following steps:
+
+- [Reset the connector and reload your logs](#reset-the-sap-data-connector)
+- [Upgrade the connector](update-sap-data-connector.md) to the latest version.
+
+> [!TIP]
+> Resetting your connector and ensuring that you have the latest upgrades are also recommended after any major configuration changes.
+
+### Retrieving an audit log fails with warnings
+
+If you attempt to retrieve an audit log without the [required change request](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) deployed, or on an older or unpatched version, and the process fails with warnings, verify that the SAP Auditlog can be retrieved using one of the following methods:
+
+- Using a compatibility mode called *XAL* on older versions
+- Using a version not recently patched
+- Without the required change request installed
+
+While your system should automatically switch to compatibility mode if needed, you may need to switch it manually. To switch to compatibility mode manually:
+
+1. In the **sapcon/SID** directory, edit the **systemconfig.ini** file
+
+1. Define: `auditlogforcexal = True`
+
+1. Restart the Docker container:
+
+ ```bash
+ docker restart sapcon-[SID]
+ ```
+
+### SAPCONTROL or JAVA subsystems unable to connect
+
+Check that the OS user is valid and can run the following command on the target SAP system:
+
+```bash
+sapcontrol -nr <SID> -function GetSystemInstanceList
+```
+
+### SAPCONTROL or JAVA subsystem fails with timezone-related error message
+
+If your SAPCONTROL or JAVA subsystem fails with a timezone-related error message, such as: **Please check the configuration and network access to the SAP server - 'Etc/NZST'**, make sure that you're using standard timezone codes.
+
+For example, use `javatz = GMT+12` or `abaptz = GMT-3`.
++
+### Unable to import the change request transports to SAP
+
+If you're not able to import the [required SAP log change requests](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) and are getting an error about an invalid component version, select the **Ignore Invalid Component Version** option when you import the change request.
+
+### Audit log data not ingested past initial load
+
+If the SAP audit log data, visible in either the **RSAU_READ_LOG** or **SM20** transactions, is not ingested into Microsoft Sentinel past the initial load, you may have a misconfiguration of the SAP system and the SAP host operating system.
+
+- Initial loads are ingested after a fresh installation of the SAP data connector, or after the **metadata.db** file is deleted.
+- A sample misconfiguration might be when your SAP system timezone is set to **CET** in the **STZAC** transaction, but the SAP host operating system time zone is set to **UTC**.
+
+To check for misconfigurations, run the **RSDBTIME** report in transaction **SE38**. If you find a mismatch between the SAP system and the SAP host operating system:
+
+1. Stop the Docker container. Run:
+
+ ```bash
+ docker stop sapcon-[SID]
+ ```
+
+1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:
+
+ ```bash
+ rm ~/sapcon/[SID]/metadata.db
+ ```
+
+1. Update the SAP system and the SAP host operating system to have matching settings, such as the same time zone. For more information, see the [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/Basis/Time+zone+settings%2C+SAP+vs.+OS+level).
+
+1. Start the container again. Run:
+
+ ```bash
+ docker start sapcon-[SID]
+ ```
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
+
+ Title: Microsoft Sentinel SAP data connector expert configuration options, on-premises deployment, and SAPControl log sources | Microsoft Docs
+description: Learn how to deploy the Microsoft Sentinel data connector for SAP environments using expert configuration options and an on-premises machine. Also learn more about SAPControl log sources.
++++ Last updated : 02/22/2022++
+# Expert configuration options, on-premises deployment, and SAPControl log sources
++
+This article describes how to deploy the Microsoft Sentinel SAP data connector in an expert or custom process, such as using an on-premises machine and an Azure Key Vault to store your credentials.
+
+> [!NOTE]
+> The default, and most recommended process for deploying the Microsoft Sentinel SAP data connector is by [using an Azure VM](deploy-data-connector-agent-container.md). This article is intended for advanced users.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+The basic prerequisites for deploying your Microsoft Sentinel SAP data connector are the same regardless of your deployment method.
+
+Make sure that your system complies with the prerequisites documented in the main [SAP data connector prerequisites document](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) before you start.
+
+## Create your Azure key vault
+
+Create an Azure key vault that you can dedicate to your Microsoft Sentinel SAP data connector.
+
+Run the following command to create your Azure key vault and grant access to an Azure service principal:
+
+``` azurecli
+kvgp=<KVResourceGroup>
+
+kvname=<keyvaultname>
+
+spname=<sp-name>
+
+# Optional when Azure MI not enabled - Create sp user for AZ cli connection, save details for env.list file
+az ad sp create-for-rbac --name $spname --role Contributor --scopes /subscriptions/<subscription_id>
+
+spID=$(az ad sp list --display-name $spname --query "[].appId" --output tsv)
+
+#Create key vault
+az keyvault create \
+ --name $kvname \
+ --resource-group $kvgp
+
+# Add access to SP
+az keyvault set-policy --name $kvname --resource-group $kvgp --object-id $spID --secret-permissions get list set
+```
+
+For more information, see [Quickstart: Create a key vault using the Azure CLI](../../key-vault/general/quick-create-cli.md).
+
+## Add Azure Key Vault secrets
+
+To add Azure Key Vault secrets, run the following script, with your own system ID and the credentials you want to add:
+
+```azurecli
+#Add Abap username
+az keyvault secret set \
+ --name <SID>-ABAPUSER \
+ --value "<abapuser>" \
+ --description SECRET_ABAP_USER --vault-name $kvname
+
+#Add Abap Username password
+az keyvault secret set \
+ --name <SID>-ABAPPASS \
+ --value "<abapuserpass>" \
+ --description SECRET_ABAP_PASSWORD --vault-name $kvname
+
+#Add Java Username
+az keyvault secret set \
+ --name <SID>-JAVAOSUSER \
+ --value "<javauser>" \
+ --description SECRET_JAVAOS_USER --vault-name $kvname
+
+#Add Java Username password
+az keyvault secret set \
+ --name <SID>-JAVAOSPASS \
+ --value "<javauserpass>" \
+ --description SECRET_JAVAOS_PASSWORD --vault-name $kvname
+
+#Add abapos username
+az keyvault secret set \
+ --name <SID>-ABAPOSUSER \
+ --value "<abaposuser>" \
+ --description SECRET_ABAPOS_USER --vault-name $kvname
+
+#Add abapos username password
+az keyvault secret set \
+ --name <SID>-ABAPOSPASS \
+ --value "<abaposuserpass>" \
+ --description SECRET_ABAPOS_PASSWORD --vault-name $kvname
+
+#Add Azure Log ws ID
+az keyvault secret set \
+ --name <SID>-LOG_WS_ID \
+ --value "<logwsod>" \
+ --description SECRET_AZURE_LOG_WS_ID --vault-name $kvname
+
+#Add Azure Log ws public key
+az keyvault secret set \
+ --name <SID>-LOG_WS_PUBLICKEY \
+ --value "<loswspubkey>" \
+ --description SECRET_AZURE_LOG_WS_PUBLIC_KEY --vault-name $kvname
+```
+
+For more information, see the [az keyvault secret](/cli/azure/keyvault/secret) CLI documentation.
+
+## Perform an expert / custom installation
+
+This procedure describes how to deploy the SAP data connector using an expert or custom installation, such as when installing on-premises.
+
+We recommend that you perform this procedure after you have a key vault ready with your SAP credentials.
+
+**To deploy the SAP data connector**:
+
+1. On your on-premises machine, download the latest SAP NW RFC SDK from the [SAP Launchpad site](https://support.sap.com) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**.
+
+ > [!NOTE]
+ > You'll need your SAP user sign-in information in order to access the SDK, and you must download the SDK that matches your operating system.
+ >
+ > Make sure to select the **LINUX ON X86_64** option.
+
+1. On your on-premises machine, create a new folder with a meaningful name, and copy the SDK zip file into your new folder.
+
+1. Clone the Microsoft Sentinel solution GitHub repository onto your on-premises machine, and copy Microsoft Sentinel SAP solution **systemconfig.ini** file into your new folder.
+
+ For example:
+
+ ```bash
+ mkdir /home/$(pwd)/sapcon/<sap-sid>/
+ cd /home/$(pwd)/sapcon/<sap-sid>/
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
+ cp <nwrfc750X_X-xxxxxxx.zip> /home/$(pwd)/sapcon/<sap-sid>/
+ ```
+
+1. Edit the **systemconfig.ini** file as needed, using the embedded comments as a guide. For more information, see [Manually configure the SAP data connector](#manually-configure-the-sap-data-connector).
+
+ To test your configuration, you may want to add the user and password directly to the **systemconfig.ini** configuration file. While we recommend that you use [Azure Key vault](#add-azure-key-vault-secrets) to store your credentials, you can also use an **env.list** file, [Docker secrets](#manually-configure-the-sap-data-connector), or you can add your credentials directly to the **systemconfig.ini** file.
+
+1. Define the logs that you want to ingest into Microsoft Sentinel using the instructions in the **systemconfig.ini** file. For example, see [Define the SAP logs that are sent to Microsoft Sentinel](#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+1. Define the following configurations using the instructions in the **systemconfig.ini** file:
+
+    - Whether to include user email addresses in audit logs
+    - Whether to retry failed API calls
+    - Whether to force the use of audit logs for non-SAL systems (XAL)
+    - Whether to wait an interval of time between data extractions, especially for large extractions
+
+ For more information, see [SAL logs connector configurations](#sal-logs-connector-settings).
+
+1. Save your updated **systemconfig.ini** file in the **sapcon** directory on your machine.
+
+1. If you have chosen to use an **env.list** file for your credentials, create a temporary **env.list** file with the required credentials. Once your Docker container is running correctly, make sure to delete this file.
+
+ > [!NOTE]
+ > The following script has each Docker container connecting to a specific ABAP system. Modify your script as needed for your environment.
+ >
+
+ Run:
+
+ ```bash
+ ##############################################################
+ # Include the following section if you're using user authentication
+ ##############################################################
+ # env.list template for Credentials
+ SAPADMUSER=<SET_SAPCONTROL_USER>
+ SAPADMPASSWORD=<SET_SAPCONTROL_PASS>
+ LOGWSID=<SET SENTINEL WORKSPACE id>
+ LOGWSPUBLICKEY=<SET SENTINEL WORKSPACE KEY>
+    ABAPUSER=<SET_ABAP_USER>
+    ABAPPASS=<SET_ABAP_PASS>
+    JAVAUSER=<SET_JAVA_OS_USER>
+    JAVAPASS=<SET_JAVA_OS_PASS>
+ ##############################################################
+ # Include the following section if you are using Azure Keyvault
+ ##############################################################
+ # env.list template for AZ Cli when MI is not enabled
+ AZURE_TENANT_ID=<your tenant id>
+ AZURE_CLIENT_ID=<your client/app id>
+ AZURE_CLIENT_SECRET=<your password/secret for the service principal>
+ ##############################################################
+ ```
+
+1. Download and run the pre-defined Docker image with the SAP data connector installed. Run:
+
+    ```bash
+    docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
+    docker run --env-file=<env.list_location> -d --restart unless-stopped -v /home/$(pwd)/sapcon/<sap-sid>/:/sapcon-app/sapcon/config/system --name sapcon-<sid> mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
+    rm -f <env.list_location>
+    ```
+
+1. Verify that the Docker container is running correctly. Run:
+
+    ```bash
+    docker logs -f sapcon-[SID]
+    ```
+
+1. Continue with deploying the **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution.
+
+ Deploying the solution enables the SAP data connector to display in Microsoft Sentinel and deploys the SAP workbook and analytics rules. When you're done, manually add and customize your SAP watchlists.
+
+ For more information, see [Deploy SAP security content](deploy-sap-security-content.md).
+
+## Manually configure the SAP data connector
+
+The Microsoft Sentinel SAP solution data connector is configured in the **systemconfig.ini** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation).
+
+The following code shows a sample **systemconfig.ini** file:
+
+```python
+[Secrets Source]
+secrets = '<DOCKER_RUNTIME/AZURE_KEY_VAULT/DOCKER_SECRETS/DOCKER_FIXED>'
+keyvault = '<SET_YOUR_AZURE_KEYVAULT>'
+intprefix = '<SET_YOUR_PREFIX>'
+
+[ABAP Central Instance]
+##############################################################
+# Define the following values according to your server configuration.
+ashost = <SET_YOUR_APPLICATION_SERVER_HOST>
+mshost = <SET_YOUR_MESSAGE_SERVER_HOST> - #In case different than the application server host
+##############################################################
+group = <SET_YOUR_LOGON_GROUP>
+msserv = <SET_YOUR_MS_SERVICE> - #Required only if the message server service is not defined as sapms<SYSID> in /etc/services
+sysnr = <SET_YOUR_SYS_NUMBER>
+user = <SET_YOUR_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+passwd = <SET_YOUR_PASSWORD>
+snc_partnername = <SET_YOUR_SNC_PARTNER_NAME>
+snc_lib = <SET_YOUR_SNC_LIBRARY_PATH>
+x509cert = <SET_YOUR_X509_CERTIFICATE>
+##############################################################
+sysid = <SET_YOUR_SYSTEM_ID>
+client = <SET_YOUR_CLIENT>
+
+[Azure Credentials]
+loganalyticswsid = <SET_YOUR_LOG_ANALYTICS_WORKSPACE_ID>
+publickey = <SET_YOUR_PUBLIC_KEY>
+
+[File Extraction ABAP]
+osuser = <SET_YOUR_SAPADM_LIKE_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+ospasswd = <SET_YOUR_SAPADM_PASS>
+x509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
+##############################################################
+appserver = <SET_YOUR_SAPCTRL_SERVER IP OR FQDN>
+instance = <SET_YOUR_SAP_INSTANCE NUMBER, example 10>
+abapseverity = <SET_ABAP_SEVERITY 0 = All logs ; 1 = Warning ; 2 = Error>
+abaptz = <SET_ABAP_TZ --Use ONLY GMT FORMAT-- example - For OS Timezone = NZST use abaptz = GMT+12>
+
+[File Extraction JAVA]
+javaosuser = <SET_YOUR_JAVAADM_LIKE_USER>
+##############################################################
+# Enter your password OR your X509 SNC parameters
+javaospasswd = <SET_YOUR_JAVAADM_PASS>
+javax509pkicert = <SET_YOUR_X509_PKI_CERTIFICATE>
+##############################################################
+javaappserver = <SET_YOUR_JAVA_SAPCTRL_SERVER IP ADDRESS OR FQDN>
+javainstance = <SET_YOUR_JAVA_SAP_INSTANCE for example 10>
+javaseverity = <SET_JAVA_SEVERITY 0 = All logs ; 1 = Warning ; 2 = Error>
+javatz = <SET_JAVA_TZ --Use ONLY GMT FORMAT-- example - For OS Timezone = NZST use javatz = GMT+12>
+```
+
+### Define the SAP logs that are sent to Microsoft Sentinel
+
+Add the following code to the Microsoft Sentinel SAP solution **systemconfig.ini** file to define the logs that are sent to Microsoft Sentinel.
+
+For more information, see [Microsoft Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
+
+```python
+##############################################################
+# Enter True OR False for each log to send those logs to Microsoft Sentinel
+[Logs Activation Status]
+ABAPAuditLog = True
+ABAPJobLog = True
+ABAPSpoolLog = True
+ABAPSpoolOutputLog = True
+ABAPChangeDocsLog = True
+ABAPAppLog = True
+ABAPWorkflowLog = True
+ABAPCRLog = True
+ABAPTableDataLog = False
+# ABAP SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+ABAPFilesLogs = False
+SysLog = False
+ICM = False
+WP = False
+GW = False
+# Java SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+JAVAFilesLogs = False
+##############################################################
+```
+
+### SAL logs connector settings
+
+Add the following code to the Microsoft Sentinel SAP data connector **systemconfig.ini** file to define other settings for SAP logs ingested into Microsoft Sentinel.
+
+For more information, see [Perform an expert / custom SAP data connector installation](#perform-an-expert--custom-installation).
+
+```python
+##############################################################
+[Connector Configuration]
+extractuseremail = True
+apiretry = True
+auditlogforcexal = False
+auditlogforcelegacyfiles = False
+timechunk = 60
+##############################################################
+```
+
+This section enables you to configure the following parameters:
+
+|Parameter name |Description |
+|||
+|**extractuseremail** | Determines whether user email addresses are included in audit logs. |
+|**apiretry** | Determines whether API calls are retried as a failover mechanism. |
+|**auditlogforcexal** | Determines whether the system forces the use of audit logs for non-SAL systems, such as SAP BASIS version 7.4. |
+|**auditlogforcelegacyfiles** | Determines whether the system forces the use of audit logs with legacy system capabilities, such as from SAP BASIS version 7.4 with lower patch levels.|
+|**timechunk** | Determines the number of minutes that the system waits between data extractions. Use this parameter if you expect a large amount of data. <br><br>For example, during the initial data load in your first 24 hours, you might want to run the data extraction only every 30 minutes, to give each extraction enough time to complete. In such cases, set this value to **30**. |
++
+### Configuring an ABAP SAP Control instance
+
+To ingest all ABAP logs into Microsoft Sentinel, including both NW RFC and SAP Control Web Service-based logs, configure the following ABAP SAP Control details:
+
+|Setting |Description |
+|||
+|**appserver** |Enter your SAP Control ABAP server host. <br>For example: `contoso-erp.appserver.com` |
+|**instance** |Enter your SAP Control ABAP instance number. <br>For example: `00` |
+|**abaptz** |Enter the time zone configured on your SAP Control ABAP server, in GMT format. <br>For example: `GMT+3` |
+|**abapseverity** |Enter the lowest, inclusive, severity level for which you want to ingest ABAP logs into Microsoft Sentinel. Values include: <br><br>- **0** = All logs <br>- **1** = Warning <br>- **2** = Error |
+
+### Configuring a Java SAP Control instance
+
+To ingest SAP Control Web Service logs into Microsoft Sentinel, configure the following JAVA SAP Control instance details:
+
+|Parameter |Description |
+|||
+|**javaappserver** |Enter your SAP Control Java server host. <br>For example: `contoso-java.server.com` |
+|**javainstance** |Enter your SAP Control Java instance number. <br>For example: `10` |
+|**javatz** |Enter the time zone configured on your SAP Control Java server, in GMT format. <br>For example: `GMT+3` |
+|**javaseverity** |Enter the lowest, inclusive, severity level for which you want to ingest Web Service logs into Microsoft Sentinel. Values include: <br><br>- **0** = All logs <br>- **1** = Warning <br>- **2** = Error |
++
+### Configuring User Master data collection
+
+To ingest tables directly from your SAP system with details about your users and role authorizations, configure your **systemconfig.ini** file with a `True`/`False` statement for each table.
+
+For example:
+
+```python
+[ABAP Table Selector]
+USR01_FULL = True
+USR02_FULL = True
+USR02_INCREMENTAL = True
+UST04_FULL = True
+AGR_USERS_FULL = True
+AGR_USERS_INCREMENTAL = True
+USR21_FULL = True
+AGR_1251_FULL = True
+ADR6_FULL = True
+AGR_TCODES_FULL = True
+DEVACCESS_FULL = True
+AGR_DEFINE_FULL = True
+AGR_DEFINE_INCREMENTAL = True
+AGR_PROF_FULL = True
+PAHI_FULL = True
+```
+
+For more information, see [Tables retrieved directly from SAP systems](sap-solution-log-reference.md#tables-retrieved-directly-from-sap-systems).
+
+## Next steps
+
+After you have your SAP data connector installed, you can add the SAP-related security content.
+
+For more information, see [Deploy the SAP solution](deploy-sap-security-content.md).
+
+For more information, see:
+
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Microsoft Sentinel SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
+
+ Title: Microsoft Sentinel SAP solution - data reference | Microsoft Docs
+description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel SAP solution.
+Last updated : 02/22/2022
+# Microsoft Sentinel SAP solution data reference (public preview)
++
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> Some logs, noted below, are not sent to Microsoft Sentinel by default, but you can manually add them as needed. For more information, see [Define the SAP logs that are sent to Microsoft Sentinel](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+>
+
+This article describes the functions, logs, and tables available as part of the Microsoft Sentinel SAP solution and its data connector. It is intended for advanced SAP users.
+
+## Functions available from the SAP solution
+
+This section describes the [functions](/azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. You can find these functions in the Microsoft Sentinel **Logs** page, listed under **Workspace functions**, and use them in your KQL queries.
+
+Users are *strongly encouraged* to use the functions as the subjects of their analysis whenever possible, instead of the underlying logs or tables. These functions are intended to serve as the principal user interface to the data. They form the basis for all the built-in analytics rules and workbooks available to you out of the box. This allows for changes to be made to the data infrastructure beneath the functions, without breaking user-created content.
+
+- [SAPUsersAssignments](#sapusersassignments)
+- [SAPUsersGetPrivileged](#sapusersgetprivileged)
+- [SAPUsersAuthorizations](#sapusersauthorizations)
+- [SAPConnectorHealth](#sapconnectorhealth)
+- [SAPConnectorOverview](#sapconnectoroverview)
+
+### SAPUsersAssignments
+
+The **SAPUsersAssignments** function gathers data from multiple SAP data sources and creates a user-centric view of the current user master data, including the roles and profiles currently assigned.
+
+This function summarizes the user assignments to roles and profiles, and returns the following data:
++
+| Field | Description | Data Source/Notes |
+| - | -- | -- |
+| User | SAP user ID | SAL only |
+| Email | SMTP address | USR21 (SMTP_ADDR) |
+| UserType | User type | USR02 (USTYP) |
+| Timezone | Time zone | USR02 (TZONE) |
+| LockedStatus | Lock status | USR02 (UFLAG) |
+| LastSeenDate | Last seen date | USR02 (TRDAT) |
+| LastSeenTime | Last seen time | USR02 (LTIME) |
+| UserGroupAuth | User group in user master maintenance | USR02 (CLASS) |
+| Profiles | Set of profiles (default maximum set size = 50) | `["Profile 1", "Profile 2",...,"Profile 50"]` |
+| DirectRoles | Set of directly assigned roles (default max set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| ChildRoles | Set of indirectly assigned roles (default max set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| Client | Client ID | |
+| SystemID | System ID | As defined in the connector |
+||||
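+
+For example, the following query is a minimal sketch of how you might review user assignments for a single system. It assumes the function can be called without parameters and that `<SID>` is a placeholder for your own system ID.
+
+```kusto
+// Review assignments for one SAP system (replace <SID> with your system ID)
+SAPUsersAssignments
+| where SystemID == "<SID>"
+| project User, Email, UserType, LockedStatus, Profiles, DirectRoles
+```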
+
+### SAPUsersGetPrivileged
+
+The **SAPUsersGetPrivileged** function returns a list of privileged users per client and system ID.
+
+Users are considered privileged when they are listed in the *SAP - Privileged Users* watchlist, have been assigned to a profile listed in the *SAP - Sensitive Profiles* watchlist, or have been added to a role listed in the *SAP - Sensitive Roles* watchlist.
+
+**Parameters:**
+- TimeAgo
+  - Optional
+  - Default value: 7 days
+  - Determines how far back the function looks for user master data: from the time defined by the `TimeAgo` value until now (the `now()` value).
+
+The **SAPUsersGetPrivileged** function returns the following data:
+
+| Field | Description |
+| -- | -- |
+| User | SAP user ID |
+| Client | Client ID |
+| SystemID | System ID |
+| | |
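+
+For example, the following query is a minimal sketch that lists privileged users per system over the last 14 days, assuming the optional `TimeAgo` parameter is passed the same way as in the `SAPConnectorOverview` example later in this article.
+
+```kusto
+// List privileged users per system and client for the last 14 days
+SAPUsersGetPrivileged(14d)
+| summarize PrivilegedUsers = make_set(User) by SystemID, Client
+```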
+
+### SAPUsersAuthorizations
+
+The **SAPUsersAuthorizations** function brings together data from several tables to produce a user-centric view of the current roles and authorizations assigned. Only users with active role and authorization assignments are returned.
+
+**Parameters:**
+- TimeAgo
+  - Optional
+  - Default value: 7 days
+  - Determines how far back the function looks for user master data: from the time defined by the `TimeAgo` value until now (the `now()` value).
+
+The **SAPUsersAuthorizations** function returns the following data:
+
+| Field | Description | Notes |
+| -- | -- | -- |
+| User | SAP user ID | |
+| Roles | Set of roles (default max set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| AuthorizationsDetails | Set of authorizations (default max set size = 100) | `{{AuthorizationsDetails1}`,<br>`{AuthorizationsDetails2}`, <br>...,<br>`{AuthorizationsDetails100}}` |
+| Client | Client ID | |
+| SystemID | System ID | |
++
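+For example, the following query is a minimal sketch that expands the authorization details for a single user. It assumes `AuthorizationsDetails` is returned as a dynamic array and that `<USERNAME>` is a placeholder for an SAP user ID in your environment.
+
+```kusto
+// Expand authorization details for one user (replace <USERNAME> with an SAP user ID)
+SAPUsersAuthorizations(7d)
+| where User == "<USERNAME>"
+| mv-expand AuthorizationsDetails
+| project User, Client, SystemID, AuthorizationsDetails
+```
+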
+### SAPConnectorHealth
+
+The **SAPConnectorHealth** function reflects the status of the agent's and the underlying SAP system's connectivity. Based on the heartbeat log *SAP_HeartBeat_CL* and other health indicators, it returns the following data:
+
+| Field | Description |
+| | -- |
+| Agent | Agent ID in agent's configuration (automatically generated) |
+| SystemID | SAP System ID |
+| Status | Overall connectivity status |
+| Details | Connectivity details |
+| ExtendedDetails | Connectivity extended details |
+| LastSeen | Timestamp of latest activity |
+| StatusCode | Code reflecting the system's status |
++
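+For example, the following query is a minimal sketch that surfaces the most recent health record for each agent and system, based only on the fields documented above.
+
+```kusto
+// Show the latest connectivity status per agent and SAP system
+SAPConnectorHealth
+| summarize arg_max(LastSeen, Status, Details) by Agent, SystemID
+```
+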
+### SAPConnectorOverview
+
+The **SAPConnectorOverview** function shows the row counts of each SAP table per system ID. It returns a list of data records per system ID, together with the time each record was generated.
+
+**Parameters:**
+- TimeAgo
+  - Optional
+  - Default value: 7 days
+  - Determines how far back the function looks for data records: from the time defined by the `TimeAgo` value until now (the `now()` value).
++
+| Field | Description |
+| | -- |
+| TimeGenerated | A datetime value of the timestamp of the record's generation |
+| SystemID_s | A string representing the SAP System ID |
+
+Use the following Kusto query to perform a daily trend analysis:
+
+```kusto
+SAPConnectorOverview(7d)
+| summarize count() by bin(TimeGenerated, 1d), SystemID_s
+```
++
+## Logs produced by the data connector agent
+
+This section describes the SAP logs available from the Microsoft Sentinel SAP data connector, including the table names in Microsoft Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
+
+For best results, use the Microsoft Sentinel functions listed below to visualize, access, and query the data.
+
+- [ABAP Application log](#abap-application-log)
+- [ABAP Change Documents log](#abap-change-documents-log)
+- [ABAP CR log](#abap-cr-log)
+- [ABAP DB table data log](#abap-db-table-data-log)
+- [ABAP Gateway log](#abap-gateway-log)
+- [ABAP ICM log](#abap-icm-log)
+- [ABAP Job log](#abap-job-log)
+- [ABAP Security Audit log](#abap-security-audit-log)
+- [ABAP Spool log](#abap-spool-log)
+- [APAB Spool Output log](#apab-spool-output-log)
+- [ABAP SysLog](#abap-syslog)
+- [ABAP Workflow log](#abap-workflow-log)
+- [ABAP WorkProcess log](#abap-workprocess-log)
+- [HANA DB Audit Trail](#hana-db-audit-trail)
+- [JAVA files](#java-files)
+- [SAP Heartbeat Log](#sap-heartbeat-log)
+
+### ABAP Application log
+
+- **Microsoft Sentinel function for querying this log**: SAPAppLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcc9f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Records the progress of an application execution so that you can reconstruct it later as needed.
+
+ Available by using RFC with a custom service based on standard services of XBP interface. This log is generated per client.
+
+#### ABAPAppLog_CL log schema
+
+| Field | Description |
+| | |
+| AppLogDateTime | Application log date time |
+| CallbackProgram | Callback program |
+| CallbackRoutine | Callback routine |
+| CallbackType | Callback type |
+| ClientID | ABAP client ID (MANDT) |
+| ContextDDIC | Context DDIC structure |
+| ExternalID | External log ID |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| InternalMessageSerial | Application log message serial |
+| LevelofDetail | Level of detail |
+| LogHandle | Application log handle |
+| LogNumber | Log number |
+| MessageClass | Message class |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| MessageType | Message type |
+| Object | Application log object |
+| OperationMode | Operation mode |
+| ProblemClass | Problem class |
+| ProgramName | Program name |
+| SortCriterion | Sort criterion |
+| StandardText | Standard text |
+| SubObject | Application log sub object |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TransactionCode | Transaction code |
+| User | User |
+| UserChange | User change |
++++
+### ABAP Change Documents log
+
+- **Microsoft Sentinel function for querying this log**: SAPChangeDocsLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/6f51f5216c4b10149358d088a0b7029c/7.01.22/en-US/b8686150ed102f1ae10000000a44176f.html)
+
+- **Log purpose**: Records:
+
+ - SAP NetWeaver Application Server (AS) ABAP log changes to business data objects in change documents.
+
+ - Other entities in the SAP system, such as user data, roles, addresses.
+
+ Available by using RFC with a custom service based on standard services. This log is generated per client.
+
+#### ABAPChangeDocsLog_CL log schema
++
+| Field | Description |
+| | - |
+| ActualChangeNum | Actual change number |
+| ChangedTableKey | Changed table key |
+| ChangeNumber | Change number |
+| ClientID | ABAP client ID (MANDT) |
+| CreatedfromPlannedChange | Created from planned change, in the following syntax: `('X' , ' ')` |
+| CurrencyKeyNew | Currency key: new value |
+| CurrencyKeyOld | Currency key: old value |
+| FieldName | Field name |
+| FlagText | Flag text |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| ObjectClass | Object class, such as `BELEG`, `BPAR`, `PFCG`, `IDENTITY` |
+| ObjectID | Object ID |
+| PlannedChangeNum | Planned change number |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableName | Table name |
+| TransactionCode | Transaction code |
+| TypeofChange_Header | Header type of change, including: <br>`U` = Change; `I` = Insert; `E` = Delete Single Docu; `D` = Delete; `J` = Insert Single Docu |
+| TypeofChange_Item | Item type of change, including: <br>`U` = Change; `I` = Insert; `E` = Delete Single Docu; `D` = Delete; `J` = Insert Single Docu |
+| UOMNew | Unit of measure: new value |
+| UOMOld | Unit of measure: old value |
+| User | User |
+| ValueNew | Field content: new value |
+| ValueOld | Field content: old value |
+| Version | Version |
++
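+For example, the following query is a minimal sketch that reviews recent changes to user-related objects, using `IDENTITY`, one of the object class values listed above, as a filter.
+
+```kusto
+// Review changes to identity-related objects over the last 7 days
+SAPChangeDocsLog
+| where TimeGenerated > ago(7d)
+| where ObjectClass == "IDENTITY"
+| project TimeGenerated, User, TableName, FieldName, ValueOld, ValueNew
+```
+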
+### ABAP CR log
+
+- **Microsoft Sentinel function for querying this log**: SAPCRLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd5f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Includes the Change & Transport System (CTS) logs, including the directory objects and customizations where changes were made.
+
+ Available by using RFC with a custom service based on standard tables and standard services. This log is generated with data across all clients.
+
+> [!NOTE]
+> In addition to application logging, change documents, and table recording, all changes that you make to your production system using the Change & Transport System are documented in the CTS and TMS logs.
+>
++
+#### ABAPCRLog_CL log schema
+
+| Field | Description |
+| | |
+| Category | Category (Workbench, Customizing) |
+| ClientID | ABAP client ID (MANDT) |
+| Description | Description |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| ObjectName | Object name |
+| ObjectType | Object type |
+| Owner | Owner |
+| Request | Change request |
+| Status | Status |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableKey | Table key |
+| TableName | Table name |
+| ViewName | View name |
++
+### ABAP DB table data log
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+- **Microsoft Sentinel function for querying this log**: SAPTableDataLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd2f36611d3a6510000e835363f.html)
+
+- **Log purpose**: Provides logging for those tables that are critical or susceptible to audits.
+
+ Available by using RFC with a custom service. This log is generated with data across all clients.
+
+#### ABAPTableDataLog_CL log schema
+
+| Field | Description |
+| - | - |
+| DBLogID | DB log ID |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| LogKey | Log key |
+| NewValue | Field new value |
+| OldValue | Field old value |
+| OperationTypeSQL | Operation type, `Insert`, `Update`, `Delete` |
+| Program | Program name |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TableField | Table field |
+| TableName | Table name |
+| TransactionCode | Transaction code |
+| UserName | User |
+| VersionNumber | Version number |
++
+### ABAP Gateway log
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+- **Microsoft Sentinel function for querying this log**: SAPOS_GW
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.5.7/en-US/48b2a710ca1c3079e10000000a42189b.html)
+
+- **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service. This log is generated with data across all clients.
+
+#### ABAPOS_GW_CL log schema
+
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
++
+### ABAP ICM log
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+- **Microsoft Sentinel function for querying this log**: SAPOS_ICM
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/683d6a1797a34730a6e005d1e8de6f22/7.52.4/en-US/a10ec40d01e740b58d0a5231736c434e.html)
+
+- **Log purpose**: Records inbound and outbound requests and compiles statistics of the HTTP requests.
+
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
+
+#### ABAPOS_ICM_CL log schema
+
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity, including: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
++
+### ABAP Job log
+
+- **Microsoft Sentinel function for querying this log**: SAPJobLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/b07e7195f03f438b8e7ed273099d74f3/7.31.19/en-US/4b2bc0974c594ba2e10000000a42189c.html)
+
+- **Log purpose**: Combines all background processing job logs (SM37).
+
+ Available by using RFC with a custom service based on standard services of XBP interfaces. This log is generated with data across all clients.
+
+#### ABAPJobLog_CL log schema
++
+| Field | Description |
+| - | -- |
+| ABAPProgram | ABAP program |
+| BgdEventParameters | Background event parameters |
+| BgdProcessingEvent | Background processing event |
+| ClientID | ABAP client ID (MANDT) |
+| DynproNumber | Dynpro number |
+| GUIStatus | GUI status |
+| Host | Host |
+| Instance | ABAP instance (HOST_SYSID_SYSNR), in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| JobClassification | Job classification |
+| JobCount | Job count |
+| JobGroup | Job group |
+| JobName | Job name |
+| JobPriority | Job priority |
+| MessageClass | Message class |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| MessageType | Message type |
+| ReleaseUser | Job release user |
+| SchedulingDateTime | Scheduling date time |
+| StartDateTime | Start date time |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TargetServer | Target server |
+| User | User |
+| UserReleaseInstance | ABAP instance - user release |
+| WorkProcessID | Work process ID |
+| WorkProcessNumber | Work process Number |
++
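+For example, the following query is a minimal sketch that summarizes recently released background jobs per user, based only on the fields documented above.
+
+```kusto
+// Summarize background jobs per job name and releasing user over the last day
+SAPJobLog
+| where TimeGenerated > ago(1d)
+| summarize Jobs = count() by JobName, ReleaseUser, SystemID
+```
+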
+### ABAP Security Audit log
+
+- **Microsoft Sentinel function for querying this log**: SAPAuditLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/280f016edb8049e998237fcbd80558e7/7.5.7/en-US/4d41bec4aa601c86e10000000a42189b.html)
+
+- **Log purpose**: Records the following data:
+
+ - Security-related changes to the SAP system environment, such as changes to main user records
+ - Information that provides a higher level of data, such as successful and unsuccessful sign-in attempts
+ - Information that enables the reconstruction of a series of events, such as successful or unsuccessful transaction starts
+
+ Available by using RFC XAL/SAL interfaces. SAL is available starting from version Basis 7.50. This log is generated with data across all clients.
+
+#### ABAPAuditLog_CL log schema
+
+| Field | Description |
+| -- | -- |
+| ABAPProgramName | Program name, SAL only |
+| AlertSeverity | Alert severity |
+| AlertSeverityText | Alert severity text, SAL only |
+| AlertValue | Alert value |
+| AuditClassID | Audit class ID, SAL only |
+| ClientID | ABAP client ID (MANDT) |
+| Computer | User machine, SAL only |
+| Email | User email |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageClass | Message class |
+| MessageContainerID | Message container ID, XAL Only |
+| MessageID | Message ID, such as `AU1`, `AU2`, and so on |
+| MessageText | Message text |
+| MonitoringObjectName | MTE Monitor object name, XAL only |
+| MonitorShortName | MTE Monitor short name, XAL only |
+| SAPProcesType | System Log: SAP process type, SAL only. For example: `B*` - Background Processing, `D*` - Dialog Processing, `U*` - Update Tasks |
+| SAPWPName | System Log: Work process number, SAL only |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TerminalIPv6 | User machine IP, SAL only |
+| TransactionCode | Transaction code, SAL only |
+| User | User |
+| Variable1 | Message variable 1 |
+| Variable2 | Message variable 2 |
+| Variable3 | Message variable 3 |
+| Variable4 | Message variable 4 |
++
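+For example, the following query is a minimal sketch that summarizes recent audit events by message ID and user; it assumes the standard `TimeGenerated` column is available on the function output.
+
+```kusto
+// Count audit log events per message ID, user, and system over the last day
+SAPAuditLog
+| where TimeGenerated > ago(1d)
+| summarize EventCount = count() by MessageID, User, SystemID
+```
+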
+### ABAP Spool log
+
+- **Microsoft Sentinel function for querying this log**: SAPSpoolLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae791c40f72045e10000000a421937.html)
+
+- **Log purpose**: Serves as the main log for SAP Printing with the history of spool requests. (SP01).
+
+ Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
+
+#### ABAPSpoolLog_CL log schema
+
+| Field | Description |
+| -- | |
+| ArchiveStatus | Archive status |
+| ArchiveType | Archive type |
+| ArchivingDevice | Archiving device |
+| AutoRereoute | Auto reroute |
+| ClientID | ABAP client ID (MANDT) |
+| CountryKey | Country key |
+| DeleteSpoolRequestAuto | Delete spool request auto |
+| DelFlag | Deletion flag |
+| Department | Department |
+| DocumentType | Document type |
+| ExternalMode | External mode |
+| FormatType | Format type |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| NumofCopies | Number of copies |
+| OutputDevice | Output device |
+| PrinterLongName | Printer long name |
+| PrintImmediately | Print immediately |
+| PrintOSCoverPage | Print OSCover page |
+| PrintSAPCoverPage | Print SAPCover page |
+| Priority | Priority |
+| RecipientofSpoolRequest | Recipient of spool request |
+| SpoolErrorStatus | Spool error status |
+| SpoolRequestCompleted | Spool request completed |
+| SpoolRequestisALogForAnotherRequest | Spool request is a log for another request |
+| SpoolRequestName | Spool request name |
+| SpoolRequestNumber | Spool request number |
+| SpoolRequestSuffix1 | Spool request suffix1 |
+| SpoolRequestSuffix2 | Spool request suffix2 |
+| SpoolRequestTitle | Spool request title |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TelecommunicationsPartner | Telecommunications partner |
+| TelecommunicationsPartnerE | Telecommunications partner E |
+| TemSeGeneralcounter | Temse counter |
+| TemseNumAddProtectionRule | Temse number add protection rule |
+| TemseNumChangeProtectionRule | Temse number change protection rule |
+| TemseNumDeleteProtectionRule | Temse number delete protection rule |
+| TemSeObjectName | Temse object name |
+| TemSeObjectPart | TemSe object part |
+| TemseReadProtectionRule | Temse read protection rule |
+| User | User |
+| ValueAuthCheck | Value auth check |
++
+### APAB Spool Output log
+
+- **Microsoft Sentinel function for querying this log**: SAPSpoolOutputLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae779e40f72045e10000000a421937.html)
+
+- **Log purpose**: Serves as the main log for SAP Printing with the history of spool output requests. (SP02).
+
+ Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
+
+#### ABAPSpoolOutputLog_CL log schema
+
+| Field | Description |
+| - | -- |
+| AppServer | Application server |
+| ClientID | ABAP client ID (MANDT) |
+| Comment | Comment |
+| CopyCount | Copy count |
+| CopyCounter | Copy counter |
+| Department | Department |
+| ErrorSpoolRequestNumber | Error request number |
+| FormatType | Format type |
+| Host | Host |
+| HostName | Host name |
+| HostSpoolerID | Host spooler ID |
+| Instance | ABAP instance |
+| LastPage | Last page |
+| NumofCopies | Number of copies |
+| OutputDevice | Output device |
+| OutputRequestNumber | Output request number |
+| OutputRequestStatus | Output request status |
+| PhysicalFormatType | Physical format type |
+| PrinterLongName | Printer long name |
+| PrintRequestSize | Print request size |
+| Priority | Priority |
+| ReasonforOutputRequest | Reason for output request |
+| RecipientofSpoolRequest | Recipient of spool request |
+| SpoolNumberofOutputReqProcessed | Number of output requests - processed |
+| SpoolNumberofOutputReqWithErrors | Number of output requests - with errors |
+| SpoolNumberofOutputReqWithProblems | Number of output requests - with problems |
+| SpoolRequestNumber | Spool request number |
+| StartPage | Start page |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TelecommunicationsPartner | Telecommunications partner |
+| TemSeGeneralcounter | Temse counter |
+| Title | Title |
+| User | User |
+++
+### ABAP Syslog
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+- **Microsoft Sentinel function for querying this log**: SAPOS_Syslog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcbaf36611d3a6510000e835363f.html)
+
+- **Log purpose**: Records all SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP system errors, warnings, user locks because of failed sign-in attempts from known users, and process messages.
+
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
+
+#### ABAPOS_Syslog_CL log schema
++
+| Field | Description |
+| - | - |
+| ClientID | ABAP client ID (MANDT) |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageNumber | Message number |
+| MessageText | Message text |
+| Severity | Message severity, one of the following values: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TransacationCode | Transaction code |
+| Type | SAP process type |
+| User | User |
+++
+### ABAP Workflow log
+
+- **Microsoft Sentinel function for querying this log**: SAPWorkflowLog
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcccf36611d3a6510000e835363f.html)
+
+- **Log purpose**: The SAP Business Workflow (WebFlow Engine) enables you to define business processes that aren't yet mapped in the SAP system.
+
+ For example, unmapped business processes may be simple release or approval procedures, or more complex business processes such as creating base material and then coordinating the associated departments.
+
+ Available by using RFC with a custom service based on standard tables and standard services. This log is generated per client.
+
+#### ABAPWorkflowLog_CL log schema
++
+| Field | Description |
+| - | -- |
+| ActualAgent | Actual agent |
+| Address | Address |
+| ApplicationArea | Application area |
+| CallbackFunction | Callback function |
+| ClientID | ABAP client ID (MANDT) |
+| CreationDateTime | Creation date time |
+| Creator | Creator |
+| CreatorAddress | Creator address |
+| ErrorType | Error type |
+| ExceptionforMethod | Exception for method |
+| Host | Host |
+| Instance | ABAP instance (HOST_SYSID_SYSNR), in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Language | Language |
+| LogCounter | Log counter |
+| MessageNumber | Message number |
+| MessageType | Message type |
+| MethodUser | Method user |
+| Priority | Priority |
+| SimpleContainer | Simple container, packed as a list of Key-Value entities for the work item |
+| Status | Status |
+| SuperWI | Super WI |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TaskID | Task ID |
+| TasksClassification | Task classifications |
+| TaskText | Task text |
+| TopTaskID | Top task ID |
+| UserCreated | User created |
+| WIText | Work item text |
+| WIType | Work item type |
+| WorkflowAction | Workflow action |
+| WorkItemID | Work item ID |
++
+### ABAP WorkProcess log
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+
+- **Microsoft Sentinel function for querying this log**: SAPOS_WP
+
+- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/d0739d980ecf42ae9f3b4c19e21a4b6e/7.3.15/en-US/46fb763b6d4c5515e10000000a1553f6.html)
+
+- **Log purpose**: Combines all work process logs. (default: `dev_*`).
+
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
+
+#### ABAPOS_WP_CL log schema
++
+| Field | Description |
+| | - |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageText | Message text |
+| Severity | Message severity: `Debug`, `Info`, `Warning`, `Error` |
+| SystemID | System ID |
+| SystemNumber | System number |
+| WPNumber | Work process number |
+++
+### HANA DB Audit Trail
+
+To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Management Agent](../connect-syslog.md) to gather Syslog data from the machine running HANA DB.
++
+- **Microsoft Sentinel function for querying this log**: SAPSyslog
+
+- **Related SAP documentation**: [General](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/48fd6586304c4f859bf92d64d0cd8b08.html) | [Audit Trail](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.03/en-US/0a57444d217649bf94a19c0b68b470cc.html)
+
+- **Log purpose**: Records user actions, or attempted actions in the SAP HANA database. For example, enables you to log and monitor read access to sensitive data.
+
+ Available by the Sentinel Linux Agent for Syslog. This log is generated with data across all clients.
+
+#### Syslog log schema
+
+| Field | Description |
+| - | |
+| Computer | Host name |
+| HostIP | Host IP |
+| HostName | Host name |
+| ProcessID | Process ID |
+| ProcessName | Process name: `HDB*` |
+| SeverityLevel | Alert |
+| SourceSystem | Source system OS, `Linux` |
+| SyslogMessage | Message, an unparsed audit trail message |
++
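+For example, the following query is a minimal sketch that filters Syslog records coming from HANA database processes, based on the `HDB*` process name pattern documented above.
+
+```kusto
+// Surface raw HANA audit trail messages from the Syslog data
+SAPSyslog
+| where ProcessName startswith "HDB"
+| project TimeGenerated, Computer, HostIP, SyslogMessage
+```
+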
+### JAVA files
+
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
++
+- **Microsoft Sentinel function for querying this log**: SAPJAVAFilesLogs
+
+- **Related SAP documentation**: [General](https://help.sap.com/viewer/2f8b1599655d4544a3d9c6d1a9b6546b/7.5.9/en-US/485059dfe31672d4e10000000a42189c.html) | [Java Security Audit Log](https://help.sap.com/viewer/1531c8a1792f45ab95a4c49ba16dc50b/7.5.9/en-US/4b6013583840584ae10000000a42189c.html)
+
+- **Log purpose**: Combines all Java files-based logs, including the Security Audit Log, and System (cluster and server process), Performance, and Gateway logs. Also includes Developer Traces and Default Trace logs.
+
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
+
+#### JavaFilesLogsCL log schema
++
+| Field | Description |
+| - | -- |
+| Application | Java application |
+| ClientID | Client ID |
+| CSNComponent | CSN component, such as `BC-XI-IBD` |
+| DCComponent | DC component, such as `com.sap.xi.util.misc` |
+| DSRCounter | DSR counter |
+| DSRRootContentID | DSR context GUID |
+| DSRTransaction | DSR transaction GUID |
+| Host | Host |
+| Instance | Java instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Location | Java class |
+| LogName | Java logName, such as: `Available`, `defaulttrace`, `dev*`, `security`, and so on |
+| MessageText | Message text |
+| MNo | Message number |
+| Pid | Process ID |
+| Program | Program name |
+| Session | Session |
+| Severity | Message severity, including: `Debug`,`Info`,`Warning`,`Error` |
+| Solution | Solution |
+| SystemID | System ID |
+| SystemNumber | System number |
+| ThreadName | Thread name |
+| Thrown | Exception thrown |
+| TimeZone | Timezone |
+| User | User |
+++
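+For example, the following query is a minimal sketch that surfaces error-severity Java log entries, using one of the severity values documented above.
+
+```kusto
+// Show error-level Java log entries with their source log and location
+SAPJAVAFilesLogs
+| where Severity == "Error"
+| project TimeGenerated, SystemID, LogName, Location, MessageText
+```
+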
+### SAP Heartbeat Log
+
+- **Microsoft Sentinel function for querying this log**: SAPConnectorHealth
+
+- **Log purpose**: Provides heartbeat and other health information on the connectivity between the agents and the different SAP systems.
+
+ Automatically created for any agents of the SAP Connector for Microsoft Sentinel.
+
+#### SAP_HeartBeat_CL log schema
+
+| Field | Description |
+| - | -- |
+| TimeGenerated | Time of log posting event |
+| agent_id_s | Agent ID in agent's configuration (automatically generated) |
+| agent_ver_s | Agent version |
+| host_s | The agent's host name |
+| system_id_s | Netweaver ABAP System ID /<br>Netweaver SAPControl Host (preview) /<br>Java SAPControl host (preview) |
+| push_timestamp_d | Timestamp of the extraction, according to the agent's time zone |
+| agent_timezone_s | Agent's time zone |
+
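+For example, the following query is a minimal sketch that charts heartbeat volume per agent so you can spot gaps in connectivity; it assumes the standard `TimeGenerated` column on the table.
+
+```kusto
+// Count heartbeats per agent in 15-minute bins over the last day
+SAP_HeartBeat_CL
+| where TimeGenerated > ago(1d)
+| summarize Heartbeats = count() by agent_id_s, bin(TimeGenerated, 15m)
+```
+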
+## Tables retrieved directly from SAP systems
+
+This section lists the data tables that are retrieved directly from the SAP system and ingested into Microsoft Sentinel exactly as they are.
+
+To have the data from these tables ingested into Microsoft Sentinel, configure the relevant settings in the **systemconfig.ini** file. For more information, see [Configuring User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
+
+The data retrieved from these tables provides a clear view of the authorization structure, group membership, and user profiles. It also allows you to track the process of authorization grants and revokes, and identify and govern the risks associated with those processes.
+
+The tables listed below are required to enable functions that identify privileged users and map users to roles, groups, and authorizations.
+
+For best results, refer to these tables using the name in the **Sentinel function name** column below:
+
+| Table name | Table description | Sentinel function name |
+| --| - | - |
+| USR01 | User master record (runtime data) | SAP_USR01 |
+| USR02 | Logon data (kernel-side use) | SAP_USR02 |
+| UST04 | User masters<br>Maps users to profiles | SAP_UST04 |
+| AGR_USERS | Assignment of roles to users | SAP_AGR_USERS |
+| AGR_1251 | Authorization data for the activity group | SAP_AGR_1251 |
+| USGRP_USER | Assignment of users to user groups | SAP_USGRP_USER |
+| USR21 | User name/Address key assignment | SAP_USR21 |
+| ADR6 | Email addresses (business address services) | SAP_ADR6 |
+| USRSTAMP | Time stamp for all changes to the user | SAP_USRSTAMP |
+| ADCP | Person/Address assignment (business address services) | SAP_ADCP |
+| USR05 | User master parameter ID | SAP_USR05 |
+| AGR_PROF | Profile name for role | SAP_AGR_PROF |
+| AGR_FLAGS | Role attributes | SAP_AGR_FLAGS |
+| DEVACCESS | Table for development user | SAP_DEVACCESS |
+| AGR_DEFINE | Role definition | SAP_AGR_DEFINE |
+| AGR_AGRS | Roles in composite roles | SAP_AGR_AGRS |
+| PAHI | History of the system, database, and SAP parameters | SAP_PAHI |
+++
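+For example, the following query is a minimal sketch of a quick sanity check that user-to-role assignment data is being ingested, using one of the Sentinel function names from the table above.
+
+```kusto
+// Confirm that role assignment data has been ingested
+SAP_AGR_USERS
+| count
+```
+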
+## Next steps
+
+For more information, see:
+
+- [Deploy the Microsoft Sentinel solution for SAP](deployment-overview.md)
+- [Microsoft Sentinel SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Microsoft Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
+- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
+
+ Title: Microsoft Sentinel SAP solution - security content reference | Microsoft Docs
+description: Learn about the built-in security content provided by the Microsoft Sentinel SAP solution.
+Last updated : 04/27/2022
+# Microsoft Sentinel SAP solution: security content reference
++
+This article details the security content available for SAP continuous threat monitoring with Microsoft Sentinel.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Available security content includes a built-in workbook and built-in analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
+
+## Built-in workbooks
+
+Use the following built-in workbooks to visualize and monitor data ingested via the SAP data connector. After you deploy the SAP solution, you can find the SAP workbooks in the **My workbooks** tab.
+
+| Workbook name | Description | Logs |
+| | | |
+| <a name="sapsystem-applications-and-products-workbook"></a>**SAP - Audit Log Browser** | Displays data such as: <br><br>General system health, including user sign-ins over time, events ingested by the system, message classes and IDs, and ABAP programs run <br><br>Severities of events occurring in your system <br><br>Authentication and authorization events occurring in your system |Uses data from the following log: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) |
+| **SAP - Suspicious Privileges Operations** | Displays data such as: <br><br>Sensitive and critical assignments <br><br>Actions and changes made to sensitive, privileged users <br><br>Changes made to roles |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPChangeDocsLog_CL](sap-solution-log-reference.md#abap-change-documents-log) |
+| **SAP - Initial Access & Attempts to Bypass SAP Security Mechanisms** | Displays data such as: <br><br>Executions of sensitive programs, code, and function modules <br><br>Configuration changes, including log deactivations <br><br>Changes made in debug mode |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log)<br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
+| **SAP - Persistency & Data Exfiltration** | Displays data such as: <br><br>Internet Communication Framework (ICF) services, including activations and deactivations and data about new services and service handlers <br><br> Insecure operations, including both function modules and programs <br><br>Direct access to sensitive tables | Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[ABAPSpoolLog_CL](sap-solution-log-reference.md#abap-spool-log)<br><br>[ABAPSpoolOutputLog_CL](sap-solution-log-reference.md#apab-spool-output-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
++
+For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy SAP continuous threat monitoring](deployment-overview.md).
+
+## Built-in analytics rules
+
+The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel SAP solution, deployed from the Microsoft Sentinel Solutions marketplace.
+
+### Built-in SAP analytics rules for initial access
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **SAP - High - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+| **SAP - High - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
+| **SAP - High- Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in a SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged) function. | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval<br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
+| **SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons. | Attempt to log in from the same IP to several systems/clients within the scheduled time interval using RFC<br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+| **SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from the same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+| **SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within a scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
+| **SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
++
+### Built-in SAP analytics rules for data exfiltration
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **SAP - Medium - FTP for non authorized servers** |Identifies an FTP connection for a non-authorized server. | Create a new FTP connection, such as by using the FTP_CONNECT Function Module. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Initial Access, Command and Control |
+| **SAP - Medium - Insecure FTP servers configuration** |Identifies insecure FTP server configurations, such as when an FTP allowlist is empty or contains placeholders. | Leave the `SAPFTP_SERVERS` table unmaintained, or maintain values that contain placeholders, using the `SAPFTP_SERVERS_V` maintenance view. (SM30) <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Command and Control |
+| **SAP - Medium - Multiple Files Download** |Identifies multiple file downloads for a user within a specific time-range. | Download multiple files using the SAPGui for Excel, lists, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Medium - Multiple Spool Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Medium - Multiple Spool Output Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Medium - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Medium - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control |
+| **SAP - Low - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
+| **SAP - Low - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
++
+### Built-in SAP analytics rules for persistency
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **SAP - High - Activation or Deactivation of ICF Service** | Identifies activation or deactivation of ICF Services. | Activate a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - High - Function Module tested** | Identifies the testing of a function module. | Test a function module using `SE37` / `SE80`. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Defense Evasion, Lateral Movement |
+| **SAP - High - HANA DB - User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog* |Privilege Escalation |
+| **SAP - High - New ICF Service Handlers** | Identifies creation of ICF Handlers. | Assign a new handler to a service using SICF.<br><br>**Data sources**: SAPcon - Audit Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - High - New ICF Services** | Identifies creation of ICF Services. | Create a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - Medium - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
+| **SAP - Medium - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control |
+| **SAP - Low - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+++
+### Built-in SAP analytics rules for attempts to bypass SAP security mechanisms
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **SAP - High - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
+| **SAP - High - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
+| **SAP - High - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log. | Disable the Security Audit Log using `SM19`/`RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
+| **SAP - High - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution |
+| **SAP - High - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
+| **SAP - High - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the `EUFUNC` table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
+| **SAP - High - HANA DB - Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
+| **SAP - High - HANA DB - Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
+| **SAP - High - RFC Execution of a Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module by RFC. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
+| **SAP - High - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
+| **SAP - Medium - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| **SAP - Medium - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
+| **SAP - Medium - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution |
+| **SAP - Low - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
++
+### Built-in SAP analytics rules for suspicious privileges operations
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **SAP - High - Change in Sensitive privileged user** | Identifies changes of sensitive privileged users. <br> <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Change user details / authorizations using `SU01`. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+| **SAP - High - HANA DB - Assign Admin Authorizations** | Identifies admin privilege or role assignment. | Assign a user with any admin role or privileges. <br><br>**Data sources**: Linux Agent - Syslog | Privilege Escalation |
+| **SAP - High - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access |
+| **SAP - High - Sensitive privileged user makes a change in other user** | Identifies changes made by sensitive privileged users to other users. | Change user details / authorizations using SU01. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+| **SAP - High - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
+| **SAP - High - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
+| **SAP - High - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
+| **SAP - Medium - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - Medium - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
+| **SAP - Medium - Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - Medium - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - Medium - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon - Audit Log | Impact, Privilege Escalation, Persistence |
++
+## Available watchlists
+
+The following table lists the [watchlists](deploy-sap-security-content.md) available for the Microsoft Sentinel SAP solution, and the fields in each watchlist.
+
+These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. The [SAP watchlists](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists) are available in the Microsoft Sentinel GitHub repository.
+
+| Watchlist name | Description and fields |
+| | |
+| <a name="objects"></a>**SAP - Critical Authorization Objects** | Critical authorization objects, where assignments should be governed. <br><br>- **AuthorizationObject**: An SAP authorization object, such as `S_DEVELOP`, `S_TCODE`, or `Table TOBJ` <br>- **AuthorizationField**: An SAP authorization field, such as `OBJTYP` or `TCD` <br>- **AuthorizationValue**: An SAP authorization field value, such as `DEBUG` <br>- **ActivityField**: SAP activity field. In most cases, this value is `ACTVT`. For authorization objects without an **Activity**, or with only an **Activity** field, fill this value with `NOT_IN_USE`. <br>- **Activity**: SAP activity, according to the authorization object, such as: `01`: Create; `02`: Change; `03`: Display, and so on. <br>- **Description**: A meaningful Critical Authorization Object description. |
+| **SAP - Excluded Networks** | For internal maintenance of excluded networks, such as to ignore web dispatchers, terminal servers, and so on. <br><br>- **Network**: A network IP address or range, such as `111.68.128.0/17`. <br>- **Description**: A meaningful network description. |
+| **SAP - Excluded Users** |System users who are signed in to the system and must be ignored, for example, to suppress alerts for multiple sign-ins by the same user. <br><br>- **User**: SAP user <br>- **Description**: A meaningful user description. |
+| <a name="networks"></a>**SAP - Networks** | Internal and maintenance networks for identification of unauthorized logins. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description.|
+| <a name="users"></a>**SAP - Privileged Users** | Privileged users that are under extra restrictions. <br><br>- **User**: the ABAP user, such as `DDIC` or `SAP` <br>- **Description**: A meaningful user description. |
+| <a name= "programs"></a>**SAP - Sensitive ABAP Programs** | Sensitive ABAP programs (reports), where execution should be governed. <br><br>- **ABAPProgram**: ABAP program or report, such as `RSPFLDOC` <br>- **Description**: A meaningful program description.|
+| <a name="module"></a>**SAP - Sensitive Function Module** | Sensitive function modules, where execution should be governed. <br><br>- **FunctionModule**: An ABAP function module, such as `RSAU_CLEAR_AUDIT_LOG` <br>- **Description**: A meaningful module description. |
+| <a name="profiles"></a>**SAP - Sensitive Profiles** | Sensitive profiles, where assignments should be governed. <br><br>- **Profile**: SAP authorization profile, such as `SAP_ALL` or `SAP_NEW` <br>- **Description**: A meaningful profile description.|
+| <a name="tables"></a>**SAP - Sensitive Tables** | Sensitive tables, where access should be governed. <br><br>- **Table**: ABAP Dictionary Table, such as `USR02` or `PA008` <br>- **Description**: A meaningful table description. |
+| <a name="roles"></a>**SAP - Sensitive Roles** | Sensitive roles, where assignment should be governed. <br><br>- **Role**: SAP authorization role, such as `SAP_BC_BASIS_ADMIN` <br>- **Description**: A meaningful role description. |
+| <a name="transactions"></a>**SAP - Sensitive Transactions** | Sensitive transactions where execution should be governed. <br><br>- **TransactionCode**: SAP transaction code, such as `RZ11` <br>- **Description**: A meaningful code description. |
+| <a name="systems"></a>**SAP - Systems** | Describes the landscape of SAP systems according to role and usage.<br><br>- **SystemID**: the SAP system ID (SYSID) <br>- **SystemRole**: the SAP system role, one of the following values: `Sandbox`, `Development`, `Quality Assurance`, `Training`, `Production` <br>- **SystemUsage**: The SAP system usage, one of the following values: `ERP`, `BW`, `Solman`, `Gateway`, `Enterprise Portal` |
+| <a name="users"></a>**SAP - Excluded Users** | System users that are logged in and need to be ignored, such as for the Multiple logons by user alert. <br><br>- **User**: SAP User <br>- **Description**: A meaningful user description |
+| <a name="networks"></a>**SAP - Excluded Networks** | Maintain internal, excluded networks for ignoring web dispatchers, terminal servers, and so on. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description |
+| <a name="modules"></a>**SAP - Obsolete Function Modules** | Obsolete function modules, whose execution should be governed. <br><br>- **FunctionModule**: ABAP Function Module, such as TH_SAPREL <br>- **Description**: A meaningful function module description |
+| <a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**: ABAP program, such as `TH_RSPFLDOC` <br>- **Description**: A meaningful ABAP program description |
+| <a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**: Transaction code, such as `SE11`. <br>- **Description**: A meaningful transaction code description |
+| <a name="servers"></a>**SAP - FTP Servers** | FTP servers, for identification of unauthorized connections. <br><br>- **Client**: SAP client, such as `100`. <br>- **FTP_Server_Name**: FTP server name, such as `http://contoso.com/` <br>- **FTP_Server_Port**: FTP server port, such as `22`. <br>- **Description**: A meaningful FTP server description |
+++
+## Next steps
+
+For more information, see:
+
+- [Deploying SAP continuous threat monitoring](deployment-overview.md)
+- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Configuration file reference](configuration-file-reference.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Update Sap Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md
+
+ Title: Update Microsoft Sentinel's SAP data connector agent | Microsoft Docs
+description: This article shows you how to update an already existing SAP data connector to its latest version.
+++ Last updated : 03/02/2022++
+# Update Microsoft Sentinel's SAP data connector agent
++
+This article shows you how to update an already existing SAP data connector to its latest version.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+If you have a Docker container already running with an earlier version of the SAP data connector, run the SAP data connector update script to get the latest features available.
+
+## Update SAP data connector agent
+
+Make sure that you have the most recent versions of the relevant deployment scripts from the Microsoft Sentinel GitHub repository.
+
+Run:
+
+```azurecli
+wget -O sapcon-instance-update.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-instance-update.sh && bash ./sapcon-instance-update.sh
+```
+
+The SAP data connector Docker container on your machine is updated.
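To confirm that the updated container is back up and connected, you can optionally check it from the same machine. The following commands are a minimal sketch that assumes your container name includes the default `sapcon` prefix; adjust the filter if you used a different name.

```bash
# List SAP data connector containers and their status
docker ps --filter "name=sapcon" --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"

# Tail the logs of the first matching container to verify it reconnects to the SAP system
docker logs --tail 50 "$(docker ps --filter 'name=sapcon' --format '{{.Names}}' | head -n 1)"
```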
+
+Be sure to check for any other available updates, such as:
+
+- Relevant SAP change requests, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR).
+- Microsoft Sentinel SAP security content, in the **Microsoft Sentinel Continuous Threat Monitoring for SAP** solution.
+- Relevant watchlists, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists).
+
+## Next steps
+
+Learn more about the Microsoft Sentinel SAP solutions:
+
+- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | |||||
-|**Continuous Threat Monitoring for SAP**|[Data connector](sap-deploy-solution.md), [workbooks, analytics rules, watchlists](sap-solution-security-content.md) | Application |Community |
+|**Continuous Threat Monitoring for SAP**|[Data connector](sap/deployment-overview.md), [workbooks, analytics rules, watchlists](sap/sap-solution-security-content.md) | Application |Community |
+ ## Semperis
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
The SAP data connector streams a multitude of 14 application logs from the entir
To ingest SAP logs into Azure Sentinel, you must have the Azure Sentinel SAP data connector installed on your SAP environment. After the SAP data connector is deployed, deploy the rich SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
-For more information, see [Tutorial: Deploy the Azure Sentinel solution for SAP (public preview)](sap-deploy-solution.md).
+For more information, see [Deploying SAP continuous threat monitoring](sap/deployment-overview.md).
### Threat intelligence integrations (Public preview)
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
Running the following query in the Resource Graph Explorer returns a list of Ser
```kusto resources | where type =~ 'Microsoft.ServiceBus/namespaces'
-| extend minimumTlsVersion = parse\_json(properties).minimumTlsVersion
+| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
| project subscriptionId, resourceGroup, name, minimumTlsVersion ```
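If you prefer the command line to the Resource Graph Explorer, the same query can be run with the Azure CLI. This is a sketch that assumes the `resource-graph` extension is installed (`az extension add --name resource-graph`); depending on your CLI version, you may need to adjust the output formatting.

```azurecli
# Run the same Resource Graph query from the Azure CLI
az graph query -q "resources | where type =~ 'Microsoft.ServiceBus/namespaces' | extend minimumTlsVersion = parse_json(properties).minimumTlsVersion | project subscriptionId, resourceGroup, name, minimumTlsVersion" --query data --output table
```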
spring-cloud Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-tls-termination.md
Title: "Expose applications to the internet using Application Gateway with TLS termination" description: How to expose applications to internet using Application Gateway with TLS termination--++ Last updated 11/09/2021
spring-cloud How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-configure-palo-alto.md
Title: How to configure Palo Alto for Azure Spring Cloud description: How to configure Palo Alto for Azure Spring Cloud--++ Last updated 09/17/2021
spring-cloud How To Enable Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-availability-zone.md
You can enable availability zone in Azure Spring Cloud using the [Azure CLI](/cl
To create a service in Azure Spring Cloud with availability zone enabled using the Azure CLI, include the `--zone-redundant` parameter when you create your service in Azure Spring Cloud. ```azurecli
-az spring-cloud create -name <MyService> \
- -group <MyResourceGroup> \
- -location <MyLocation> \
+az spring-cloud create \
+ --resource-group <your-resource-group-name> \
+ --name <your-Azure-Spring-Cloud-instance-name> \
+ --location <location> \
--zone-redundant true ```
storage Immutable Time Based Retention Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-time-based-retention-policy-overview.md
Previously updated : 12/01/2021 Last updated : 05/02/2022
To configure version-level retention policies, you must first enable version-lev
Version-level immutability on the storage account must be enabled when you create the account. When you enable version-level immutability for a new storage account, all containers subsequently created in that account automatically support version-level immutability. It's not possible to disable support for version-level immutability on a storage account after you've enabled it, nor is it possible to create a container without version-level immutability support when it's enabled for the account.
-If you have not enabled support for version-level immutability on the storage account, then you can enable support for version-level immutability on an individual container at the time that you create the container. Existing containers can also support version-level immutability, but must undergo a migration process first. This process may take some time and is not reversible. You can migrate only one container at a time per storage account. For more information about migrating a container to support version-level immutability, see [Migrate an existing container to support version-level immutability](immutable-policy-configure-version-scope.md#migrate-an-existing-container-to-support-version-level-immutability).
+If you have not enabled support for version-level immutability on the storage account, then you can enable support for version-level immutability on an individual container at the time that you create the container. Existing containers can also support version-level immutability, but must undergo a migration process first. This process may take some time and is not reversible. You can migrate ten containers at a time per storage account. For more information about migrating a container to support version-level immutability, see [Migrate an existing container to support version-level immutability](immutable-policy-configure-version-scope.md#migrate-an-existing-container-to-support-version-level-immutability).
Version-level time-based retention policies require that [blob versioning](versioning-overview.md) is enabled for the storage account. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md). Keep in mind that enabling versioning may have a billing impact. For more information, see the **Pricing and billing** section in [Blob versioning](versioning-overview.md#pricing-and-billing).
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
Previously updated : 04/26/2022 Last updated : 05/02/2022
To create a replication policy in the Azure portal, follow these steps:
1. Under **Data management**, select **Object replication**. 1. Select **Set up replication rules**. 1. Select the destination subscription and storage account.
-1. In the **Container pairs** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy using this method. If you want to configure more than 10 container pairs (up to 1,000), see [Configure object replication using a JSON file](#configure-object-replication-using-a-json-file).
+1. In the **Container pairs** section, select a source container from the source account, and a destination container from the destination account. You can create up to 10 container pairs per replication policy from the Azure portal. To configure more than 10 container pairs (up to 1000), see [Configure object replication using a JSON file](#configure-object-replication-using-a-json-file).
The following image shows a set of replication rules.
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 04/26/2022 Last updated : 05/02/2022
Enabling change feed and blob versioning may incur additional costs. For more de
Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs are not supported.
+Customer-managed failover is not supported for either the source or the destination account in an object replication policy.
+ ## How object replication works Object replication asynchronously copies block blobs in a container according to rules that you configure. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
The source and destination accounts may be in the same region or in different re
### Replication rules
-Replication rules specify how Azure Storage will replicate blobs from a source container to a destination container. You can specify up to 1,000 replication rules for each replication policy. Each replication rule defines a single source and destination container, and each source and destination container can be used in only one rule, meaning that a maximum of 1,000 source containers and 1,000 destination containers may participate in a single replication policy.
+Replication rules specify how Azure Storage will replicate blobs from a source container to a destination container. You can specify up to 1000 replication rules for each replication policy. Each replication rule defines a single source and destination container, and each source and destination container can be used in only one rule, meaning that a maximum of 1000 source containers and 1000 destination containers may participate in a single replication policy.
When you create a replication rule, by default only new block blobs that are subsequently added to the source container are copied. You can specify that both new and existing block blobs are copied, or you can define a custom copy scope that copies block blobs created from a specified time onward.
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
| [Standard tiers (Hot, Cool, and Transaction optimized)](storage-files-planning.md#storage-tiers)| ⛔ | | [POSIX-permissions](https://en.wikipedia.org/wiki/File-system_permissions#Notation_of_traditional_Unix_permissions)| ✔️ | | Root squash| ✔️ |
-| Acess same data from Windows and Linux client| Γ¢ö |
+| Access same data from Windows and Linux client| Γ¢ö |
| [Identity-based authentication](storage-files-active-directory-overview.md) | Γ¢ö | | [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | Γ¢ö | | [Azure File Sync](../file-sync/file-sync-introduction.md)| Γ¢ö |
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Set-AzStorageAccount `
-ActiveDirectoryAzureStorageSid "<your-storage-account-sid>" ```
-#### (Optional) Enable AES256 encryption
+#### Enable AES-256 encryption (recommended)
-To enable AES 256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.
+To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.
The domain object that represents your storage account must meet the following requirements: - The storage account name cannot exceed 15 characters.
The domain object that represents your storage account must meet the following r
If your domain object doesn't meet those requirements, delete it and create a new domain object that does.
-Replace `<domain-object-identity>` and `<domain-name>` with your values, then use the following command to configure AES256 support:
+Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support:
```powershell Set-ADComputer -Identity <domain-object-identity> -Server <domain-name> -KerberosEncryptionType "AES256" ```
-After you've ran that command, replace `<domain-object-identity>` in the following script with your value, then run the script to refresh your domain object password:
+After you've run that cmdlet, replace `<domain-object-identity>` in the following script with your value, then run the script to refresh your domain object password:
```powershell $KeyName = "kerb1" # Could be either the first or second kerberos key, this script assumes we're refreshing the first
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
Robocopy /MT:32 /NP /NFL /NDL /B /MIR /IT /COPY:DATSO /DCOPY:DAT /UNILOG:<FilePa
* To learn more about the details of the individual RoboCopy flags, check out the table in the upcoming [RoboCopy section](#robocopy). * To learn more about how to appropriately size the thread count `/MT:n`, optimize RoboCopy speed, and make RoboCopy a good neighbor in your data center, take a look at the [RoboCopy troubleshooting section](#troubleshoot).
+> [!TIP]
+> As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your Data Box with full fidelity. [Follow this data copy service tutorial](../../databox/data-box-deploy-copy-data-via-copy-service.md) and make sure to set the correct Azure file share target.
## Phase 7: Catch-up RoboCopy from your NAS
storage Storage Files Migration Nas Hybrid Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid-databox.md
When your Data Box arrives, it will have pre-provisioned SMB shares available fo
Follow the steps in the Azure Data Box documentation: 1. [Connect to Data Box](../../databox/data-box-deploy-copy-data.md).
-1. Copy data to Data Box.
+1. Copy data to Data Box. </br>You can use Robocopy (follow the instructions below) or the new [Data Box data copy service](../../databox/data-box-deploy-copy-data-via-copy-service.md).
1. [Prepare your Data Box for upload to Azure](../../databox/data-box-deploy-picked-up.md).
-The linked Data Box documentation specifies a Robocopy command. That command isn't suitable for preserving the full file and folder fidelity. Use this command instead:
+> [!TIP]
+> As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your Data Box with full fidelity. [Follow this data copy service tutorial](../../databox/data-box-deploy-copy-data-via-copy-service.md) and make sure to set the correct Azure file share target.
+The Data Box documentation specifies a Robocopy command. That command isn't suitable for preserving the full file and folder fidelity. Use this command instead:
## Phase 6: Deploy the Azure File Sync cloud resource
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
The following table classifies Microsoft tools and their current suitability for
|![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| RoboCopy | Supported. Azure file shares can be mounted as network drives. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Azure File Sync | Natively integrated into Azure file shares. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Storage Migration Service | Indirectly supported. Azure file shares can be mounted as network drives on SMS target servers. | Full fidelity.* |
-|![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Data Box | Supported. | DataBox fully supports metadata. |
+|![Yes, recommended](medi) to load files onto the device)| Supported. </br>(Data Box Disks does not support large file shares) | Data Box and Data Box Heavy fully support metadata. </br>Data Box Disks does not preserve file metadata. |
|![Not fully recommended](medi) | |![Not fully recommended](media/storage-files-migration-overview/triangle-yellow-exclamation.png)| Azure Storage Explorer </br>latest version | Supported but not recommended. | Loses most file fidelity, like ACLs. Supports timestamps. | |![Not recommended](media/storage-files-migration-overview/circle-red-x.png)| Azure Data Factory | Supported. | Doesn't copy metadata. |
storage Storage Blobs Container Calculate Billing Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md
Title: Azure PowerShell script sample - Calculate the total billing size of a blob container | Microsoft Docs description: Calculate the total size of a container in Azure Blob storage for billing purposes. -+ ms.devlang: powershell Last updated 12/29/2020-+ # Calculate the total billing size of a blob container
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
Title: Azure CLI Script Sample - Calculate blob container size | Microsoft Docs description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container. -+ ms.devlang: azurecli Last updated 03/01/2022-+
storage Storage Blobs Container Calculate Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md
Title: Calculate size of a blob container with PowerShell
description: Calculate the size of a container in Azure Blob Storage by totaling the size of each of its blobs. -+ ms.devlang: powershell Last updated 12/04/2019-+ # Calculate the size of a blob container with PowerShell
-This script calculates the size of a container in Azure Blob Storage by totaling the size of the blobs in the container.
+This script calculates the size of a container in Azure Blob Storage. It first displays the total number of bytes used by the blobs within the container, then displays their individual names and lengths.
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
For a script that calculates container size for billing purposes, see [Calculate
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional storage PowerShell script samples can be found in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md).
+Find more PowerShell script samples in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md).
storage Storage Blobs Container Delete By Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md
Title: Azure CLI Script Sample - Delete containers by prefix | Microsoft Docs description: Delete Azure Storage blob containers based on a container name prefix, then clean up the deployment. See help links for commands used in the script sample. -+ ms.devlang: azurecli Last updated 03/01/2022-+
storage Storage Blobs Container Delete By Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md
Title: Azure PowerShell Script Sample - Delete containers by prefix | Microsoft Docs description: Read an example that shows how to delete Azure Blob storage based on a prefix in the container name, using Azure PowerShell. -+ ms.devlang: powershell Last updated 06/13/2017-+
storage Storage Common Rotate Account Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md
Title: Azure CLI Script Sample - Rotate storage account access keys | Microsoft Docs description: Create an Azure Storage account, then retrieve and rotate its account access keys. -+ ms.devlang: azurecli Last updated 03/02/2022-+
storage Storage Common Rotate Account Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md
Title: Rotate storage account access keys with PowerShell
description: Create an Azure Storage account, then retrieve and rotate one of its account access keys. -+ ms.devlang: powershell Last updated 12/04/2019-+
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
## Create virtual machine
-1. Type **virtual machines** in the search.
+1. Enter *virtual machines* in the search.
1. Under **Services**, select **Virtual machines**. 1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.*.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Enter *myResourceGroup* for the name.
![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
-1. Under **Instance details**, type *myVM* for the **Virtual machine name**, and choose *Ubuntu 18.04 LTS - Gen2* for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription.
+1. Under **Instance details**, enter *myVM* for the **Virtual machine name**, and choose *Ubuntu 18.04 LTS - Gen2* for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription.
:::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image, and size.":::
Sign in to the [Azure portal](https://portal.azure.com).
1. Under **Administrator account**, select **SSH public key**.
-1. In **Username** type *azureuser*.
+1. In **Username** enter *azureuser*.
-1. For **SSH public key source**, leave the default of **Generate new key pair**, and then type *myKey* for the **Key pair name**.
+1. For **SSH public key source**, leave the default of **Generate new key pair**, and then enter *myKey* for the **Key pair name**.
![Screenshot of the Administrator account section where you select an authentication type and provide the administrator credentials](./media/quick-create-portal/administrator-account.png)
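If you would rather script this setup than use the portal, the following Azure CLI sketch creates a roughly equivalent VM. The `UbuntuLTS` image alias and the region placeholder are assumptions; newer CLI versions may require the full image URN, and the portal's Gen2 image may differ.

```azurecli
# Create the resource group and a VM that roughly matches the portal choices above
az group create --name myResourceGroup --location <your-region>

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys
```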
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
Scheduled Events provides events in the following use cases:
- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json) (for example, VM reboot, live migration or memory preserving updates for host) - Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon
+- Virtual machine was running on a host that suffered a hardware failure
- User-initiated maintenance (for example, a user restarts or redeploys a VM) - [Spot VM](../spot-vms.md) and [Spot scale set](../../virtual-machine-scale-sets/use-spot.md) instance evictions.
When you query Metadata Service, you must provide the header `Metadata:true` to
### Query for events You can query for scheduled events by making the following call:
-#### Bash
+#### Bash sample
``` curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```
Each event is scheduled a minimum amount of time in the future based on the even
> [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.-
+
+>[!NOTE]
+> If the host node experiences a hardware failure, Azure bypasses the minimum notice period and immediately begins the recovery process for affected virtual machines. This reduces recovery time if the affected VMs are unable to respond. During the recovery process, an event is created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.
+
### Polling frequency You can poll the endpoint for updates as frequently or infrequently as you like. However, the longer the time between requests, the more time you potentially lose to react to an upcoming event. Most events have 5 to 15 minutes of advance notice, although in some cases advance notice might be as little as 30 seconds. To ensure that you have as much time as possible to take mitigating actions, we recommend that you poll the service once per second.
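A minimal polling loop might look like the following sketch. It assumes `jq` is installed for readability; remove the pipe to `jq` if it isn't available.

```
# Poll the Scheduled Events endpoint once per second and print any pending events
while true; do
  curl -s -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | jq '.Events'
  sleep 1
done
```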
curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-
> [!NOTE] > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
-## Python sample
+## Python Sample
The following sample queries Metadata Service for scheduled events and approves each outstanding event:
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md
Sign in to the Azure portal at https://portal.azure.com.
## Create virtual machine
-1. Type **virtual machines** in the search.
+1. Enter *virtual machines* in the search.
1. Under **Services**, select **Virtual machines**. 1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Enter *myResourceGroup* for the name.
![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
-1. Under **Instance details**, type *myVM* for the **Virtual machine name** and choose *Windows Server 2019 Datacenter - Gen2* for the **Image**. Leave the other defaults.
+1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2019 Datacenter - Gen2* for the **Image**. Leave the other defaults.
:::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size.":::
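You can also create a roughly equivalent VM from the Azure CLI. This sketch assumes the `Win2019Datacenter` image alias and an existing resource group named `myResourceGroup`; replace the password placeholder with a strong value.

```azurecli
# Create a Windows Server 2019 VM that roughly matches the portal choices above
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2019Datacenter \
  --admin-username azureuser \
  --admin-password <your-strong-password>
```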
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
Scheduled Events provides events in the following use cases:
- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json) (for example, VM reboot, live migration or memory preserving updates for host) - Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon
+- Virtual machine was running on a host that suffered a hardware failure
- User-initiated maintenance (for example, a user restarts or redeploys a VM) - [Spot VM](../spot-vms.md) and [Spot scale set](../../virtual-machine-scale-sets/use-spot.md) instance evictions.
-## The basics
+## The Basics
Metadata Service exposes information about running VMs by using a REST endpoint that's accessible from within the VM. The information is available via a nonroutable IP so that it's not exposed outside the VM.
When you query Metadata Service, you must provide the header `Metadata:true` to
### Query for events You can query for scheduled events by making the following call:
-#### Bash
+#### Bash sample
``` curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```
+#### PowerShell sample
+```
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | ConvertTo-Json -Depth 64
+```
A response contains an array of scheduled events. An empty array means that currently no events are scheduled. In the case where there are scheduled events, the response contains an array of events.
Each event is scheduled a minimum amount of time in the future based on the even
> [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
+>[!NOTE]
+> If the host node experiences a hardware failure, Azure bypasses the minimum notice period and immediately begins the recovery process for affected virtual machines. This reduces recovery time if the affected VMs are unable to respond. During the recovery process, an event is created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.
### Polling frequency
The following JSON sample is expected in the `POST` request body. The request sh
``` curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```
+#### PowerShell sample
+```
+Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method POST -body '{"StartRequests": [{"EventId": "5DD55B64-45AD-49D3-BBC9-F57D4EA97BD7"}]}' -Uri http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 | ConvertTo-Json -Depth 64
+```
> [!NOTE] > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field.
-## Python sample
+## Python Sample
The following sample queries Metadata Service for scheduled events and approves each outstanding event:
virtual-machines Partner Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/partner-workloads.md
For more help with mainframe emulation and services, refer to the [Azure Mainfra
- [Asysco](https://asysco.com/) system conversion technology covering source code, data, batch, scheduling, TP monitors, interfaces, security, management, and more. - [Asysco AMT Services](https://www.asysco.com/migration-services/) end-to-end services for migration projects, including inventory and analysis, design training, dress rehearsals, go-live, and post-migration support.-- [Blu Age](https://www.bluage.com/) tools for digitizing legacy business applications and databases. - [Heirloom Computing](https://www.heirloomcomputing.com/tag/convert-cobol-to-java/) services to convert mainframe COBOL, CICS, and VSAM to Java.-- [LzLabs Software Defined Mainframe](https://www.lzlabs.com/) managed software container for migrating mainframe applications to Linux computers or private, public, and hybrid cloud environments. ## Modernization services
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
description: Define the SAP system properties for the SAP deployment automation
Previously updated : 11/17/2021 Last updated : 05/03/2022
-# Configure SAP system parameters
+# Configure SAP system parameters
-Configuration for the [SAP deployment automation framework on Azure](automation-deployment-framework.md)] happens through parameters files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment.
+Configuration for the [SAP deployment automation framework on Azure](automation-deployment-framework.md) happens through parameter files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment.
-The configuration of the SAP system is done via a Terraform tfvars variable file.
+The automation supports both creating resources (greenfield deployments) and using existing resources (brownfield deployments).
-## Terraform Parameters
+For the greenfield scenario, the automation defines default names for resources; however, some resource names may be defined in the tfvars file.
+For the brownfield scenario, the Azure resource identifiers for the resources must be specified.
-The table below contains the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts.
+## Deployment topologies
-> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | - | - |
-> | `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required * |
-> | `deployer_tfstate_key` | The name of the state file for the Deployer | Required * |
-> | `landscaper_tfstate_key` | The name of the state file for the workload zone | Required * |
+The automation framework can be used to deploy the following SAP architectures:
-\* = required for manual deployments
-## Generic Parameters
+- Standalone
+- Distributed
+- Distributed (Highly Available)
+
+### Standalone
+
+In the Standalone architecture, all the SAP roles are installed on a single server.
+
+To configure this topology, define the database tier values and set `enable_app_tier_deployment` to false.
+
+### Distributed
-The table below contains the parameters that define the resource group and the resource naming.
+The distributed architecture has a separate database server and application tier. The application tier can further be separated by having SAP Central Services on a virtual machine and one or more application servers.
+To configure this topology, define the database tier values, and set `scs_server_count` to 1 and `application_server_count` to 1 or more.
+
+### High Availability
+
+The Distributed (Highly Available) deployment is similar to the Distributed architecture, but the database and/or SAP Central Services are made highly available using two virtual machines each, with Pacemaker clusters.
+
+To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count = 1`, `scs_high_availability = true`, and `application_server_count >= 1`.
+
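A sketch of the corresponding tfvars values follows; only the high availability flags come from the description above, the remaining values are placeholders.

```terraform
# Hypothetical highly available system
database_platform          = "HANA"
database_high_availability = true
scs_server_count           = 1
scs_high_availability      = true
application_server_count   = 2     # placeholder count
```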
+## Environment parameters
+
+The table below contains the parameters that define the environment settings and the resource naming.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | - |
+> | Variable | Description | Type |
+> | -- | -- | - |
> | `environment` | An identifier of up to five characters for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment. | Mandatory |
-> | `location` | The Azure region in which to deploy. | Required |
-> | `resource_group_name` | Name of the resource group to be created | Optional |
-> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
-> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional |
-> | `use_prefix` | Controls if the resource naming includes the prefix, DEV-WEEU-SAP01-X00_xxxx | Optional |
+> | `location` | The Azure region in which to deploy. | Required |
+> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional |
+> | `use_prefix` | Controls if the resource naming includes the prefix, DEV-WEEU-SAP01-X00_xxxx | Optional |
> | `name_override_file` | Name override file | Optional |
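For example, the environment settings above could be set in the tfvars file as follows; the identifier, region, and prefix shown are placeholder values.

```terraform
environment   = "DEV"         # placeholder workload zone identifier
location      = "westeurope"  # placeholder Azure region
use_prefix    = true
custom_prefix = "DEV-WEEU"    # placeholder custom naming prefix
```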
-## Network Parameters
-If the subnets are not deployed using the workload zone deployment, they can be added in the system's tfvars file.
+## Resource group parameters
-The automation framework supports both creating the virtual network and the subnets for new environment deployments (Green field) or using an existing virtual network and existing subnets for existing environment deployments (Brown field) or a combination of for new environment deployments and for existing environment deployments.
+The table below contains the parameters that define the resource group.
-Ensure that the virtual network address space is large enough to host all the resources
-The table below contains the networking parameters.
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `resource_group_name` | Name of the resource group to be created | Optional |
+> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
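As an illustration, a greenfield deployment might name a new resource group while a brownfield deployment points at an existing one; normally only one of the two values would be set, and both values below are placeholders.

```terraform
# Greenfield: let the framework create a resource group with this name (placeholder)
resource_group_name = "DEV-WEEU-SAP01-X00"

# Brownfield: reuse an existing resource group (placeholder resource ID)
# resource_group_arm_id = "/subscriptions/<subscriptionID>/resourceGroups/existing-sap-rg"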
-> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | -- | | |
-> | `network_logical_name` | The logical name of the network | Required | |
-> | `network_address_space` | The address range for the virtual network | Mandatory | For new environment deployments |
-> | `admin_subnet_name` | The name of the 'admin' subnet | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet | Mandatory | For new environment deployments |
-> | `admin_subnet_arm_id` | The Azure resource identifier for the 'admin' subnet | Mandatory | For existing environment deployments |
-> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name | Optional | |
-> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For existing environment deployments |
-> | `db_subnet_name` | The name of the 'db' subnet | Optional | |
-> | `db_subnet_address_prefix` | The address range for the 'db' subnet | Mandatory | For new environment deployments |
-> | `db_subnet_arm_id` | The Azure resource identifier for the 'db' subnet | Mandatory | For existing environment deployments |
-> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name | Optional | |
-> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the 'db' Network Security Group | Mandatory | For existing environment deployments |
-> | `app_subnet_name` | The name of the 'app' subnet | Optional | |
-> | `app_subnet_address_prefix` | The address range for the 'app' subnet | Mandatory | For new environment deployments |
-> | `app_subnet_arm_id` | The Azure resource identifier for the 'app' subnet | Mandatory | For existing environment deployments |
-> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name | Optional | |
-> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the 'app' Network Security Group | Mandatory | For existing environment deployments |
-> | `web_subnet_name` | The name of the 'web' subnet | Optional | |
-> | `web_subnet_address_prefix` | The address range for the 'web' subnet | Mandatory | For new environment deployments |
-> | `web_subnet_arm_id` | The Azure resource identifier for the 'web' subnet | Mandatory | For existing environment deployments |
-> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name | Optional | |
-> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the 'web' Network Security Group | Mandatory | For existing environment deployments |
-
-\* = Required for for existing environment deployments deployments
-
-### Database Tier Parameters
-
-The database tier defines the infrastructure for the database tier, supported database back ends are:
+### Database tier parameters
+
+The database tier parameters define the infrastructure for the database tier. The supported database backends are:
- `HANA` - `DB2`
The database tier defines the infrastructure for the database tier, supported da
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | --| -- | |
+> | `database_sid` | Defines the database SID | Required | |
> | `database_platform` | Defines the database backend | Required | | > | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) | > | `database_server_count` | Defines the number of database servers | Optional | Default value is 1 |
-> | `database_vm_names` | Defines the database server virtual machine names if the default naming is not acceptable | Optional | |
+> | `database_vm_zones` | Defines the Availability Zones | Optional | |
> | `database_size` | Defines the database sizing information | Required | See [Custom Sizing](automation-configure-extra-disks.md) |
-> | `database_sid` | Defines the database SID | Required | |
> | `db_disk_sizes_filename` | Defines the custom database sizing | Optional | See [Custom Sizing](automation-configure-extra-disks.md) |
-> | `database_sid` | Defines the database SID | Required | |
-> | `database_vm_image` | Defines the Virtual machine image to use, see below | Optional | |
> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional | | > | `database_vm_db_nic_ips` | Defines the static IP addresses for the database servers (database subnet) | Optional | | > | `database_vm_admin_nic_ips` | Defines the static IP addresses for the database servers (admin subnet) | Optional | |
-> | `database_vm_authentication_type` | Defines the authentication type for the database virtual machines (key/password) | Optional | |
-> | `database_vm_zones` | Defines the Availability Zones | Optional | |
-> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used together with ANF pinning|
+> | `database_vm_image` | Defines the Virtual machine image to use, see below | Optional | |
+> | `database_vm_authentication_type` | Defines the authentication type for the database virtual machines (key/password) | Optional | |
> | `database_no_avset` | Controls if the database virtual machines are deployed without availability sets | Optional | default is false | > | `database_no_ppg` | Controls if the database servers will not be placed in a proximity placement group | Optional | default is false |
-> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | default is true |
+> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used together with ANF pinning|
+> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | default is true |
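A hedged tfvars sketch combining a few of the database tier parameters above; the SID, zone, and count values are placeholder choices rather than recommendations.

```terraform
database_platform          = "HANA"
database_sid               = "HDB"   # placeholder SID
database_server_count      = 1
database_vm_zones          = ["1"]   # placeholder Availability Zone
database_vm_use_DHCP       = true
database_high_availability = false
```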
-The Virtual Machine and the operating system image is defined using the following structure:
+The virtual machine and the operating system image are defined using the following structure:
-```python
+```python
{ os_type="linux" source_image_id="" publisher="SUSE"
- offer="sles-sap-15-sp2"
+ offer="sles-sap-15-sp3"
sku="gen2" version="8.2.2021040902" } ```
-### Common Application Tier Parameters
+### Common application tier parameters
The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers and web dispatch servers
The application tier defines the infrastructure for the application tier, which
> | Variable | Description | Type | Notes | > | - | | --| | > | `enable_app_tier_deployment` | Defines if the application tier is deployed | Optional | |
-> | `sid` | Defines the SAP application SID | Required | |
+> | `sid` | Defines the SAP application SID | Required | |
+> | `app_tier_vm_sizing` | Lookup value defining the VM SKU and the disk layout for the application tier servers | Optional |
> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom Sizing](automation-configure-extra-disks.md) | > | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machine(s) | Optional | | > | `app_tier_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) | Optional | | > | `app_tier_dual_nics` | Defines if the application tier server will have two network interfaces | Optional | |
-> | `app_tier_vm_sizing` | Lookup value defining the VM SKU and the disk layout for tha application tier servers | Optional |
-
-### Application Server Parameters
--
-> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | - | --| |
-> | `application_server_count` | Defines the number of application servers | Required | |
-> | `application_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `application_server_image` | Defines the Virtual machine image to use | Required | |
-> | `application_server_zones` | Defines the availability zones to which the application servers are deployed | Optional | |
-> | `application_server_app_nic_ips[]` | List of IP addresses for the application server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `application_server_no_ppg` | Controls application server proximity placement group | Optional | |
-> | `application_server_no_avset` | Controls application server availability set placement | Optional | |
-> | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | |
-### SAP Central Services Parameters
+### SAP Central services parameters
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | -- | -| |
+> | `scs_server_count` | Defines the number of scs servers | Required | |
> | `scs_high_availability` | Defines if the Central Services is highly available | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) | > | `scs_instance_number` | The instance number of SCS | Optional | | > | `ers_instance_number` | The instance number of ERS | Optional | |
-> | `scs_server_count` | Defines the number of scs servers | Required | |
-> | `scs_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `scs_server_image` | Defines the Virtual machine image to use | Required | |
-> | `scs_server_zones` | Defines the availability zones to which the scs servers are deployed | Optional | |
-> | `scs_server_app_nic_ips[]` | List of IP addresses for the scs server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
+> | `scs_server_sku` | Defines the Virtual machine SKU to use | Optional | |
+> | `scs_server_image` | Defines the Virtual machine image to use | Required | |
+> | `scs_server_zones` | Defines the availability zones to which the scs servers are deployed | Optional | |
+> | `scs_server_app_nic_ips` | List of IP addresses for the scs server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
> | `scs_server_app_admin_nic_ips` | List of IP addresses for the scs server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
+> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
> | `scs_server_no_ppg` | Controls scs server proximity placement group | Optional | | > | `scs_server_no_avset` | Controls scs server availability set placement | Optional | | > | `scs_server_tags` | Defines a list of tags to be applied to the scs servers | Optional | |
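For illustration, the Central Services parameters above might be set as follows; the instance numbers and zone are placeholder values, and whether instance numbers are expressed as strings is an assumption of this sketch.

```terraform
scs_server_count      = 1
scs_instance_number   = "00"   # placeholder instance number
ers_instance_number   = "02"   # placeholder instance number
scs_server_zones      = ["1"]  # placeholder zone
scs_high_availability = false
```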
+### Application server parameters
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | - | --| |
+> | `application_server_count` | Defines the number of application servers | Required | |
+> | `application_server_sku` | Defines the Virtual machine SKU to use | Optional | |
+> | `application_server_image` | Defines the Virtual machine image to use | Required | |
+> | `application_server_zones` | Defines the availability zones to which the application servers are deployed | Optional | |
+> | `application_server_app_nic_ips[]` | List of IP addresses for the application server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
+> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
+> | `application_server_no_ppg` | Controls application server proximity placement group | Optional | |
+> | `application_server_no_avset` | Controls application server availability set placement | Optional | |
+> | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | |
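A hedged example of the application server parameters; the SKU, zones, and tag values are placeholders, and the map form of the tags is an assumption of this sketch.

```terraform
application_server_count = 2
application_server_sku   = "Standard_D4s_v3"        # placeholder VM SKU
application_server_zones = ["1", "2"]               # placeholder zones
application_server_tags  = { workload = "sap-app" } # placeholder tags
```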
-### Web Dispatcher Parameters
+### Web dispatcher parameters
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | | | | > | `webdispatcher_server_count` | Defines the number of web dispatcher servers | Required | |
-> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `webdispatcher_server_image` | Defines the Virtual machine image to use | Optional | |
+> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use | Optional | |
+> | `webdispatcher_server_image` | Defines the Virtual machine image to use | Optional | |
> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed | Optional | | > | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used | > | `webdispatcher_server_app_admin_nic_ips`| List of IP addresses for the web dispatcher server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
The application tier defines the infrastructure for the application tier, which
> | `webdispatcher_server_no_avset` | Defines web dispatcher availability set placement | Optional | | > | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers | Optional | |
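A short, hedged example of the web dispatcher parameters; the SKU and zone values are placeholders.

```terraform
webdispatcher_server_count = 1
webdispatcher_server_sku   = "Standard_D2s_v3"  # placeholder VM SKU
webdispatcher_server_zones = ["1"]              # placeholder zone
```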
-### Anchor Virtual Machine Parameters
+## Network parameters
-The SAP deployment automation framework supports having an Anchor Virtual Machine. The anchor Virtual machine will be the first virtual machine to be deployed and is used to anchor the proximity placement group.
+If the subnets are not deployed using the workload zone deployment, they can be added in the system's tfvars file.
-The table below contains the parameters related to the anchor virtual machine.
+The automation framework can either deploy the virtual network and the subnets for new environment deployments (greenfield) or use an existing virtual network and existing subnets for existing environment deployments (brownfield).
+ - For the greenfield scenario, the virtual network address space and the subnet address prefixes must be specified.
+ - For the brownfield scenario, the Azure resource identifiers for the virtual network and the subnets must be specified.
+
+Ensure that the virtual network address space is large enough to host all the resources.
+
+The table below contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | | -- |
-> | `deploy_anchor_vm` | Defines if the anchor Virtual Machine is used | Optional |
-> | `anchor_vm_sku` | Defines the VM SKU to use. For example, Standard_D4s_v3. | Optional |
-> | `anchor_vm_image` | Defines the VM image to use. See the following code sample. | Optional |
-> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet. | Optional |
-> | `anchor_vm_accelerated_networking` | Defines if the Anchor VM is configured to use accelerated networking | Optional |
-> | `anchor_vm_authentication_type` | Defines the authentication type for the anchor VM key and password | Optional |
-
-The Virtual Machine and the operating system image is defined using the following structure:
-```python
-{
+> | Variable | Description | Type | Notes |
+> | -- | -- | | |
+> | `network_logical_name` | The logical name of the network. | Required | |
+> | `network_address_space` | The address range for the virtual network. | Mandatory | For new environment deployments |
+> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | |
+> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For new environment deployments |
+> | `admin_subnet_arm_id` | The Azure resource identifier for the 'admin' subnet. | Mandatory | For existing environment deployments |
+> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name. | Optional | |
+> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For existing environment deployments |
+> | `db_subnet_name` | The name of the 'db' subnet. | Optional | |
+> | `db_subnet_address_prefix` | The address range for the 'db' subnet. | Mandatory | For new environment deployments |
+> | `db_subnet_arm_id` | The Azure resource identifier for the 'db' subnet. | Mandatory | For existing environment deployments |
+> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name. | Optional | |
+> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the 'db' Network Security Group. | Mandatory | For existing environment deployments |
+> | `app_subnet_name` | The name of the 'app' subnet. | Optional | |
+> | `app_subnet_address_prefix` | The address range for the 'app' subnet. | Mandatory | For new environment deployments |
+> | `app_subnet_arm_id` | The Azure resource identifier for the 'app' subnet. | Mandatory | For existing environment deployments |
+> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name. | Optional | |
+> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the 'app' Network Security Group. | Mandatory | For existing environment deployments |
+> | `web_subnet_name` | The name of the 'web' subnet. | Optional | |
+> | `web_subnet_address_prefix` | The address range for the 'web' subnet. | Mandatory | For new environment deployments |
+> | `web_subnet_arm_id` | The Azure resource identifier for the 'web' subnet. | Mandatory | For existing environment deployments |
+> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name. | Optional | |
+> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the 'web' Network Security Group. | Mandatory | For existing environment deployments |
+
+\* = Required for existing environment deployments
+
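The following sketch illustrates the greenfield case, with a brownfield alternative commented out; all address ranges and resource IDs are placeholder values.

```terraform
# Greenfield: the framework creates the network and subnets from these ranges (placeholder CIDRs)
network_logical_name        = "SAP01"
network_address_space       = "10.110.0.0/16"
admin_subnet_address_prefix = "10.110.0.0/19"
app_subnet_address_prefix   = "10.110.32.0/19"
db_subnet_address_prefix    = "10.110.96.0/19"
web_subnet_address_prefix   = "10.110.128.0/19"

# Brownfield alternative: point at existing subnets instead (placeholder resource ID)
# db_subnet_arm_id = "/subscriptions/<subscriptionID>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<db-subnet>"
```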
+### Anchor virtual machine parameters
+
+The SAP deployment automation framework supports having an Anchor virtual machine. The anchor virtual machine will be the first virtual machine to be deployed and is used to anchor the proximity placement group.
+
+The table below contains the parameters related to the anchor virtual machine.
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | | -- |
+> | `deploy_anchor_vm` | Defines if the anchor Virtual Machine is used | Optional |
+> | `anchor_vm_sku` | Defines the VM SKU to use. For example, Standard_D4s_v3. | Optional |
+> | `anchor_vm_image` | Defines the VM image to use. See the following code sample. | Optional |
+> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet. | Optional |
+> | `anchor_vm_accelerated_networking` | Defines if the Anchor VM is configured to use accelerated networking | Optional |
+> | `anchor_vm_authentication_type` | Defines the authentication type for the anchor VM key and password | Optional |
+
+The virtual machine and the operating system image are defined using the following structure:
+```python
+{
os_type="" source_image_id="" publisher="Canonical"
version="latest"
} ```
-### Authentication Parameters
+### Authentication parameters
By default the SAP System deployment uses the credentials from the SAP Workload zone. If the SAP system needs unique credentials, you can provide them using these parameters. - > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -| -- |
+> | Variable | Description | Type |
+> | - | -| -- |
> | `automation_username` | Administrator account name | Optional | > | `automation_password` | Administrator password | Optional | > | `automation_path_to_public_key` | Path to existing public key | Optional | > | `automation_path_to_private_key` | Path to existing private key | Optional |
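If system-specific credentials are needed, they might be supplied as in the sketch below; the account name and key paths are placeholder values.

```terraform
automation_username            = "azureadm"                  # placeholder administrator account
automation_path_to_public_key  = "~/.ssh/sap_deployment.pub" # placeholder key paths
automation_path_to_private_key = "~/.ssh/sap_deployment"
```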
-## Other Parameters
+## Other parameters
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | - | -- |
+> | Variable | Description | Type |
+> | - | - | -- |
> | `resource_offset` | Provides an offset for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional | > | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional | > | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional |
By default the SAP System deployment uses the credentials from the SAP Workload
> | `use_zonal_markers` | Specifies if zonal Virtual Machines will include a zonal identifier. 'xooscs_z1_00l###' vs 'xooscs00l###'| Default value is true. |
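A brief, hedged illustration of these parameters in a tfvars file; the values shown are placeholder choices.

```terraform
resource_offset                              = 1     # start numbering at disk1 (placeholder choice)
use_loadbalancers_for_standalone_deployments = false
use_zonal_markers                            = true
```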
-## NFS Support
+## NFS support
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | -- |
-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files. |
+> | `NFS_provider` | Defines which NFS backend to use; the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp Files. |
> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |
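For example, choosing Azure Files NFS with a custom 'sapmnt' volume size could look like the following; the size shown is a placeholder.

```terraform
NFS_provider       = "AFS"  # or "ANF" for Azure NetApp Files
sapmnt_volume_size = 128    # size in GB (placeholder)
```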
-### Azure Files NFS Support
+### Azure Files NFS support
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | -- |
-> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account for Azure Files | Optional |
+> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account for Azure Files | Optional |
+## Terraform parameters
+
+The table below contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts.
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | - | - |
+> | `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required * |
+> | `deployer_tfstate_key` | The name of the state file for the Deployer | Required * |
+> | `landscaper_tfstate_key` | The name of the state file for the workload zone | Required * |
+
+\* = required for manual deployments
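For a manual deployment, these values might be provided as in the sketch below; the storage account resource ID and state file names are placeholders.

```terraform
tfstate_resource_id    = "/subscriptions/<subscriptionID>/resourceGroups/<library-rg>/providers/Microsoft.Storage/storageAccounts/<tfstate-account>"  # placeholder
deployer_tfstate_key   = "DEPLOYER.terraform.tfstate"   # placeholder state file name
landscaper_tfstate_key = "LANDSCAPE.terraform.tfstate"  # placeholder state file name
```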
## High availability configuration
-The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags.
+High availability for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags.
-High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device)
+High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device)
```azurecli-interactive
az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"
```

Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`, and `<subscriptionID>` with the workload zone subscription ID.
-
+ > [!IMPORTANT] > The name of the Fencing Agent Service Principal must be unique in the tenant. The script assumes that a role 'Linux Fence Agent Role' has already been created >
virtual-machines Automation Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deployment-framework.md
You will use the control plane of the SAP deployment automation framework to dep
> > This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/).
+The automation framework can be used to deploy the following SAP architectures:
+
+- Standalone
+- Distributed
+- Distributed (Highly Available)
+
+In the Standalone architecture, all the SAP roles are installed on a single server. In the Distributed architecture, you can separate the database server and the application tier. The application tier can be further separated by having SAP Central Services on a virtual machine and one or more application servers.
+
+The Distributed (Highly Available) deployment is similar to the Distributed architecture, but the database and/or SAP Central Services are deployed as highly available, each using two virtual machines with Pacemaker clusters.
+ The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment a single control plane is used to manage multiple SAP deployments. :::image type="content" source="./media/automation-deployment-framework/control-plane-sap-infrastructure.png" alt-text="Diagram showing the SAP deployment automation framework's dependency between the control plane and application plane.":::
virtual-machines Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
Configure and prepare your OS by doing the following steps:
sudo mount -a </code></pre>
-10. **[A]** Verify that all HANA volumes are mounted with NFS protocol version **NFSv4**.
+10. **[A]** Verify that all HANA volumes are mounted with NFS protocol version **NFSv4.1**.
<pre><code> sudo nfsstat -m
virtual-network-manager How To Create Mesh Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network.md
Title: 'Create a mesh network topology with Azure Virtual Network Manager (Preview)' description: Learn how to create a mesh network topology with Azure Virtual Network Manager.--++ Previously updated : 11/02/2021 Last updated : 05/02/2022
This section will help you create a network group containing the virtual network
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Add** to create a new network group.
+1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Create a network group button.":::
-1. On the *Basics* tab, enter a **Name** and a **Description** for the network group.
+1. On the *Create a network group* page, enter a **Name** and a **Description** for the network group. Then select **Add** to create the network group.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/basics.png" alt-text="Screenshot of basics tab for add a network group.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. To add virtual network manually, select the **Static group members** tab. For more information, see [static members](concept-network-groups.md#static-membership).
+1. You'll see the new network group added to the *Network Groups* page.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/static-group.png" alt-text="Screenshot of static group members tab.":::
+1. From the list of network groups, select **myNetworkGroup** to manage the network group memberships.
-1. To add virtual networks dynamically, select the **Conditional statements** tab. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+ :::image type="content" source="media/how-to-create-mesh-network/manage-group-membership.png" alt-text="Screenshot of manage group memberships page.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/conditional-statements.png" alt-text="Screenshot of conditional statements tab.":::
+1. To add a virtual network manually, select the **Add** button under *Static membership*, and select the virtual networks to add. Then select **Add** to save the static membership. For more information, see [static members](concept-network-groups.md#static-membership).
-1. Once you're satisfied with the virtual networks selected for the network group, select **Review + create**. Then select **Create** once validation has passed.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
+
+1. To add virtual networks dynamically, select the **Define** button under *Define dynamic membership*, and then enter the conditional statements for membership. Select **Save** to save the dynamic membership conditions. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+
+ :::image type="content" source="media/how-to-create-mesh-network/define-dynamic-members.png" alt-text="Screenshot of Define dynamic membership page.":::
## Create a mesh connectivity configuration This section will guide you through how to create a mesh configuration with the network group you created in the previous section.
-1. Select **Configuration** under *Settings*, then select **+ Add a configuration**.
+1. Select **Configuration** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/configuration-list.png" alt-text="Screenshot of the configurations list.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of the configurations list.":::
-1. Select **Connectivity** from the drop-down menu.
+1. Select **Connectivity configuration** from the drop-down menu.
:::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter, or select the following information:
+1. On the *Add a connectivity configuration* page, enter the following information:
- :::image type="content" source="./media/how-to-create-mesh-network/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="media/how-to-create-mesh-network/add-config-name.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value | | - | -- | | Name | Enter a *name* for this configuration. | | Description | *Optional* Enter a description about what this configuration will do. |
- | Topology | Select the **Mesh** topology. |
- | Global Mesh | Select this option to enable cross region connectivity between virtual networks in the same network group. |
-1. Then select **+ Add network groups**.
+1. Select **Next: Topology >** and select **Mesh** as the topology. Then select **+ Add** under *Network groups*.
+
+ :::image type="content" source="media/how-to-create-mesh-network/add-connectivity-config.png" alt-text="Screenshot of Add a connectivity configuration page and options.":::
-1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Add** to save.
+1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then click **Select** to save.
-1. Select **Add** again to create the mesh connectivity configuration.
+1. Select **Review + create** and then **Create** to create the mesh connectivity configuration.
## Deploy the mesh configuration To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
-1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
+1. Select **Deployments** under *Settings*, then select **Deploy configuration**.
-1. On the *Deploy a configuration* select the following settings:
+1. On the *Deploy a configuration* page, select the following settings:
- :::image type="content" source="./media/how-to-create-hub-and-spoke/deploy.png" alt-text="Screenshot of deploy a configuration page.":::
+ :::image type="content" source="media/how-to-create-mesh-network/deploy-config.png" alt-text="Screenshot of deploy a configuration page.":::
| Setting | Value | | - | -- |
- | Configuration type | Select **Connectivity**. |
- | Configurations | Select the name of the configuration you created in the previous section. |
- | Target regions | Select all the regions that apply to virtual networks you select for the configuration. |
+ | Configurations | Select **Include connectivity configurations in your goal state**. |
+ | Connectivity Configurations | Select the name of the configuration you created in the previous section. |
+ | Target regions | Select all the regions where the configuration will be applied to virtual networks. |
-1. Select **Deploy** and then select **OK** to commit the configuration to the selected regions.
+1. Select **Next** and then select **Deploy** to commit the configuration to the selected regions.
-1. The deployment of the configuration can take up to 15-20 minutes, select the **Refresh** button to check on the status of the deployment.
+1. The deployment of the configuration can take several minutes, select the **Refresh** button to check on the status of the deployment.
## Confirm deployment
virtual-network Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/resource-health.md
This article provides guidance on how to use Azure Resource Health to monitor an
[Azure Resource Health](/azure/service-health/overview) provides information about the health of your NAT gateway resource. You can use resource health and Azure monitor notifications to keep you informed on the availability and health status of your NAT gateway resource. Resource health can help you quickly assess whether an issue is due to a problem in your Azure infrastructure or because of an Azure platform event. The resource health of your NAT gateway is evaluated by measuring the data-path availability of your NAT gateway endpoint.
-> [!IMPORTANT]
-> When you first create your NAT gateway resource and attach it to a subnet and public IP address/prefix, there is no available data as of yet to determine the health status of your NAT gateway resource. In the first few minutes after your NAT gateway is created, you may see the health status of your NAT gateway change from Unavailable to Degraded and then to Available as health data is generated. This is an expected behavior. If you have only a public IP address or only a subnet attached to your NAT gateway resource upon deployment, the health status will immmediately show as Unknown.
- You can view your NAT gateway's health status on the **Resource Health** page, found under **Support + troubleshooting** for your NAT gateway resource. The health of your NAT gateway resource is displayed as one of the following statuses:
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
No. VNets are Layer-3 overlays. Azure does not support any Layer-2 semantics.
### Can I specify custom routing policies on my VNets and subnets? Yes. You can create a route table and associate it to a subnet. For more information about routing in Azure, see [Routing overview](virtual-networks-udr-overview.md#custom-routes).
+### What is the behavior when I apply both an NSG and a UDR at the subnet?
+For inbound traffic, NSG inbound rules are processed. For outbound traffic, NSG outbound rules are processed, followed by UDR rules.
+
+### What is the behavior when I apply an NSG at both the NIC and the subnet for a VM?
+When NSGs are applied at both the NIC and the subnet for a VM, the subnet-level NSG is processed first, followed by the NIC-level NSG, for inbound traffic. For outbound traffic, the NIC-level NSG is processed first, followed by the subnet-level NSG.
+ ### Do VNets support multicast or broadcast? No. Multicast and broadcast are not supported.
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
> > This preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To obtain access to the preview, please deploy two hubs in the same Azure region along side any gateways (Site-to-site VPN Gateways, Point-to-site Gateways and ExpressRouteGateways) and then reach out to previewinterhub@microsoft.com with the Virtual WAN ID, Subscription ID and Azure Region you wish to configure Routing Intent in. Expect a response within 48 business hours (Monday-Friday) with confirmation of feature enablement. Please note that any gateways created after feature enablement will need to be upgraded by the Virtual WAN team.
+> Inspecting inter-hub traffic via Azure Firewall or NVA between Virtual Hubs deployed in **different** Azure regions is available in select Azure Regions. Please reach out to previewinterhub@microsoft.com for more details.
+>
+> To obtain access to the preview, please deploy any Virtual WAN hubs and gateways (Site-to-site VPN Gateways, Point-to-site Gateways and ExpressRouteGateways) and then reach out to previewinterhub@microsoft.com with the Virtual WAN ID, Subscription ID and Azure Region you wish to configure Routing Intent in. Expect a response within 48 business hours (Monday-Friday) with confirmation of feature enablement. Please note that any gateways created after feature enablement will need to be upgraded by the Virtual WAN team.
## Background
Routing Intent and Routing policies allow you to specify how the Virtual WAN hub
While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies considers them as one entity within the Routing Intent Concepts. >[!NOTE]
-> In the gated public preview of Virtual WAN Hub routing policies, inter-hub traffic is only inspected by Azure Firewall or a Network Virtual Appliance deployed in the Virtual WAN Hub if the Virtual WAN Hubs are in the same region.
+> Inter-region traffic can be inspected by Azure Firewall or NVA for Virtual Hubs deployed in select Azure regions. For available regions, please contact previewinterhub@microsoft.com.
* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or Network Virtual Appliance specified as part of the Routing Policy.
While Private Traffic includes both branch and Virtual Network address prefixes
* You will **not** be able to enable routing policies on your deployments with existing Custom Route tables configured or if there are static routes configured in your Default Route Table. * Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
-* In the gated public preview of Virtual WAN Hub routing policies, inter-hub traffic is only inspected by Azure Firewall or Network Virtual Appliances deployed in the Virtual WAN Hub if the Virtual WAN Hubs are in the same region.
+* In the gated public preview of Virtual WAN Hub routing policies, inter-regional traffic is only inspected by Azure Firewall or Network Virtual Appliances deployed in the Virtual WAN Hub for traffic between select Azure regions. For more information, reach out to previewinterhub@microsoft.com.
* Routing Intent and Routing Policies currently must be configured via the custom portal link provided in Step 3 of **Prerequisites**. Routing Intents and Policies are not supported via Terraform, PowerShell, and CLI. ## Prerequisites
-1. Create a Virtual WAN. Make sure you create at least two Virtual Hubs in the **same** region. For instance, you may create a Virtual WAN with two Virtual Hubs in East US. Note that if you only wish to inspect branch-to-branch traffic, you may deploy a single Virtual WAN Hub as opposed to multiple Hubs in the same region.
+1. Create a Virtual WAN. Make sure you create at least two Virtual Hubs if you wish to inspect inter-hub traffic. For instance, you may create a Virtual WAN with two Virtual Hubs in East US. Note that if you only wish to inspect branch-to-branch traffic, you may deploy a single Virtual WAN Hub as opposed to multiple hubs.
2. Convert your Virtual WAN Hub into a Secured Virtual WAN Hub by deploying an Azure Firewall into the Virtual Hubs in the chosen region. For more information on converting your Virtual WAN Hub to a Secured Virtual WAN Hub, please see [How to secure your Virtual WAN Hub](howto-firewall.md). 3. Deploy any Site-to-site VPN, Point-to-site VPN and ExpressRoute Gateways you will use for testing. Reach out to **previewinterhub@microsoft.com** with the **Virtual WAN Resource ID** and the **Azure Virtual hub Region** you wish to configure Routing Policies in. To locate the Virtual WAN ID, open Azure portal, navigate to your Virtual WAN resource and select Settings > Properties > Resource ID. For example: ```
The following section describes common issues encountered when you configure Rou
### Troubleshooting data path
-* Currently, using Azure Firewall to inspect inter-hub traffic is only available for Virtual WAN hubs that are deployed in the **same** Azure Region.
* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region. Inter-hub inspection for Virtual WAN hubs that are in different Azure regions is available on a limited basis. For a list of available regions, please email previewinterhub@microsoft.com.
* Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity). * You can verify that the Routing Policies have been applied properly by checking the Effective Routes of the DefaultRouteTable. If Private Routing Policies are configured, you should see routes in the DefaultRouteTable for private traffic prefixes with next hop Azure Firewall. If Internet Traffic Routing Policies are configured, you should see a default (0.0.0.0/0) route in the DefaultRouteTable with next hop Azure Firewall. * If there are any Site-to-site VPN gateways or Point-to-site VPN gateways created **after** the feature has been confirmed to be enabled on your deployment, you will have to reach out again to previewinterhub@microsoft.com to get the feature enabled.
web-application-firewall Waf Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-engine.md
description: This article provides an overview of the Azure WAF engine.
Previously updated : 04/29/2022 Last updated : 05/03/2022
There are many new features that are only supported in the Azure WAF engine. The
* Increased request body size limit to 2 MB * Increased file upload limit to 4 GB * [WAF v2 metrics](application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics)
-* [Per rule exclusions](application-gateway-waf-configuration.md) and support for exclusion attributes by name
+* [Per rule exclusions](application-gateway-waf-configuration.md#per-rule-exclusions) and support for [exclusion attributes by name](application-gateway-waf-configuration.md#request-attributes-by-keys-and-values).
New WAF features will only be released with later versions of CRS on the new WAF engine.