Updates from: 01/10/2023 02:30:28
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Ad B2c Global Identity Funnel Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-b2c-global-identity-funnel-based-design.md
Title: Azure Active Directory B2C global identity framework funnel-based design considerations
+ Title: Build a global identity solution with funnel-based approach
description: Learn the funnel-based design considerations for Azure AD B2C to provide customer identity management for global customers.
active-directory-b2c Azure Ad B2c Global Identity Region Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-b2c-global-identity-region-based-design.md
Title: Azure Active Directory B2C global identity framework region-based design considerations
+ Title: Build a global identity solution with region-based approach
description: Learn the region-based design considerations for Azure AD B2C to provide customer identity management for global customers.
active-directory-b2c Configure Authentication Sample Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-ios-app.md
Previously updated : 07/29/2021 Last updated : 01/06/2023
This sample acquires an access token with the relevant scopes that the mobile ap
## Step 4: Get the iOS mobile app sample
-1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/archive/refs/heads/vNext.zip), or clone the sample web app from the [GitHub repo](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal).
+1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/archive/refs/heads/master.zip), or clone the sample mobile app from the [GitHub repo](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal).
```bash
- git clone https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/tree/vNext.git
+ git clone https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal
```

1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In a terminal window, go to the project root folder. This folder contains the *Podfile* file. Run the following command:
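   For reference, a minimal sketch of that CocoaPods step, run from the folder that contains the *Podfile* (the sample's README remains the authoritative source):

   ```bash
   # Install the MSAL dependency declared in the sample's Podfile
   pod install
   ```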
active-directory-b2c Partner Akamai Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md
Once the Application is deployed in a private environment and a connector is cap
| Header Name | Attribute |
|--|--|
- | ps-sso-first | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name |
- | ps-sso-last | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname |
+ | ps-sso-first | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
+ | ps-sso-last | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname` |
| ps-sso-EmailAddress | emailaddress |
| ps-sso-uid | objectId |
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Title: Tutorial to configure Azure Active Directory B2C with Arkose Labs
+ Title: Tutorial to configure Azure Active Directory B2C with the Arkose Labs platform
-description: Tutorial to configure Azure Active Directory B2C with Arkose Labs to identify risky and fraudulent users
+description: Learn to configure Azure Active Directory B2C with the Arkose Labs platform to identify risky and fraudulent users
-+ - Previously updated : 09/13/2022 Last updated : 1/4/2023
-# Tutorial: Configure Arkose Labs with Azure Active Directory B2C
+# Tutorial: Configure Azure Active Directory B2C with the Arkose Labs platform
-In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [Arkose Labs](https://www.arkoselabs.com/). Arkose Labs help organizations against bot attacks, account takeover attacks, and fraudulent account openings.
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with the [Arkose Labs](https://www.arkoselabs.com/) Arkose Protect Platform. Arkose Labs products help protect organizations against bot attacks, account takeover, and fraudulent account openings.
## Prerequisites To get started, you'll need: -- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- [An Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.--- An [Arkose Labs](https://www.arkoselabs.com/book-a-demo/) account.
+- An Azure subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- [An Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription
+- An Arkose Labs account
+ - Go to arkoselabs.com to [request a demo](https://www.arkoselabs.com/book-a-demo/)
## Scenario description
-Arkose Labs integration includes the following components:
--- **Arkose Labs** - A fraud and abuse service for protecting against bots and other automated abuse.--- **Azure AD B2C sign-up user flow** - The sign-up experience that will be using the Arkose Labs service. Will use the custom HTML and JavaScript, and API connectors to integrate with the Arkose Labs service.--- **Azure functions** - API endpoint hosted by you that works with the API connectors feature. This API is responsible for doing the server-side validation of the Arkose Labs session token.
+Arkose Labs products integration includes the following components:
-The following diagram describes how Arkose Labs integrates with Azure AD B2C.
+- **Arkose Protect Platform** - A service to protect against bots and other automated abuse
+- **Azure AD B2C sign-up user flow** - The sign-up experience that uses the Arkose Labs platform
+ - Custom HTML, JavaScript, and API connectors integrate with the Arkose platform
+- **Azure Functions** - Your hosted API endpoint that works with the API connectors feature
+ - This API performs the server-side validation of the Arkose Labs session token
+ - Learn more in the [Azure Functions Overview](/azure/azure-functions/functions-overview)
-![Image shows Arkose Labs architecture diagram](media/partner-arkose-labs/arkose-labs-architecture-diagram.png)
+The following diagram illustrates how the Arkose Labs platform integrates with Azure AD B2C.
-| Step | Description |
-|||
-|1 | A user signs-up and creates an account. When the user selects submit, an Arkose Labs enforcement challenge appears. |
-|2 | After the user completes the challenge, Azure AD B2C sends the status to Arkose Labs to generate a token. |
-|3 | Arkose Labs generates a token and sends it back to Azure AD B2C. |
-|4 | Azure AD B2C calls an intermediate web API to pass the sign-up form. |
-|5 | The intermediate web API sends the sign-up form to Arkose Lab for token verification. |
-|6 | Arkose Lab processes and sends the verification results back to the intermediate web API.|
-|7 | The intermediate web API sends the success or failure result from the challenge to Azure AD B2C. |
-|8 | If the challenge is successfully completed, a sign-up form is submitted to Azure AD B2C, and Azure AD B2C completes the authentication.|
+ ![Diagram of the Arkose Labs platform and Azure AD B2C integration architecture.](media/partner-arkose-labs/arkose-labs-architecture-diagram.png)
-## Onboard with Arkose Labs
+1. A user signs up and creates an account. The user selects **Submit**, and an Arkose Labs enforcement challenge appears.
+2. The user completes the challenge. Azure AD B2C sends the status to Arkose Labs to generate a token.
+3. Arkose Labs sends the token to Azure AD B2C.
+4. Azure AD B2C calls an intermediate web API to pass the sign-up form.
+5. The sign-up form goes to Arkose Labs for token verification.
+6. Arkose Labs sends verification results to the intermediate web API.
+7. The API sends a success or failure result to Azure AD B2C.
+8. If the challenge is successful, a sign-up form goes to Azure AD B2C, which completes authentication.
-1. Contact [Arkose](https://www.arkoselabs.com/book-a-demo/) and create an account.
+## Request a demo from Arkose Labs
-2. Once the account is created, navigate to https://dashboard.arkoselabs.com/login
+1. Go to arkoselabs.com to [book a demo](https://www.arkoselabs.com/book-a-demo/).
+2. Create an account.
+3. Navigate to the [Arkose Portal](https://dashboard.arkoselabs.com/login) sign-in page.
+4. In the dashboard, navigate to site settings.
+5. Locate your public key and private key. You'll use this information later.
-3. Within the dashboard, navigate to site settings to find your public key and private key. This information will be needed later to configure Azure AD B2C. The values of public and private keys are referred to as `ARKOSE_PUBLIC_KEY` and `ARKOSE_PRIVATE_KEY` in the [sample code](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose).
+> [!NOTE]
+> The public and private key values are `ARKOSE_PUBLIC_KEY` and `ARKOSE_PRIVATE_KEY`.
+> See, [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose).
## Integrate with Azure AD B2C
-### Part 1 - Create a ArkoseSessionToken custom attribute
-
-To create a custom attribute, follow these steps:
-
-1. Go to **Azure portal** > **Azure AD B2C**
-
-2. Select **User attributes**
-
-3. Select **Add**
-
-4. Enter **ArkoseSessionToken** as the attribute Name
-
-5. Select **Create**
-
-Learn more about [custom attributes](./user-flow-custom-attributes.md?pivots=b2c-user-flow).
-
-### Part 2 - Create a user flow
-
-The user flow can be either for **sign-up** and **sign in** or just **sign-up**. The Arkose Labs user flow will only be shown during sign-up.
-
-1. See the [instructions](./tutorial-create-user-flows.md) to create a user flow. If using an existing user flow, it must be of the **Recommended** version type.
-
-2. In the user flow settings, go to **User attributes** and select the **ArkoseSessionToken** claim.
-
-![Image shows how to select custom attributes](media/partner-arkose-labs/select-custom-attribute.png)
-
-### Part 3 - Configure custom HTML, JavaScript, and page layouts
-
-Go to the provided [HTML script](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose/blob/main/Assets/selfAsserted.html). The file contains an HTML template with JavaScript `<script>` tags that will do three things:
-
-1. Load the Arkose Labs script, which renders the Arkose Labs widget and does client-side Arkose Labs validation.
+### Create an ArkoseSessionToken custom attribute
-2. Hide the `extension_ArkoseSessionToken` input element and label, corresponding to the `ArkoseSessionToken` custom attribute, from the UI shown to the user.
+To create a custom attribute:
-3. When a user completes the Arkose Labs challenge, Arkose Labs verifies the user's response and generates a token. The callback `arkoseCallback` in the custom JavaScript sets the value of `extension_ArkoseSessionToken` to the generated token value. This value will be submitted to the API endpoint as described in the next section.
+1. Go to the [Azure portal](https://ms.portal.azure.com/#home), then to **Azure AD B2C**.
+2. Select **User attributes**.
+3. Select **Add**.
+4. Enter **ArkoseSessionToken** as the attribute Name.
+5. Select **Create**.
- See [this article](https://arkoselabs.atlassian.net/wiki/spaces/DG/pages/214176229/Standard+Setup) to learn about Arkose Labs client-side validation.
+Learn more: [Define custom attributes in Azure Active Directory B2C](./user-flow-custom-attributes.md?pivots=b2c-user-flow)
-Follow the steps mentioned to use the custom HTML and JavaScript for your user flow.
+### Create a user flow
-1. Modify [selfAsserted.html](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose/blob/main/Assets/selfAsserted.html) file so that `<ARKOSE_PUBLIC_KEY>` matches the value you generated for the client-side validation, and used to load the Arkose Labs script for your account.
+The user flow can be for sign-up and sign-in, or for sign-up only. The Arkose Labs challenge appears only during sign-up.
-2. Host the HTML page on a Cross-origin Resource Sharing (CORS) enabled web endpoint. [Create an Azure blob storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) and [configure CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).
+1. [Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md). If you use an existing user flow, it must be the **Recommended** version.
+2. In the user flow settings, go to **User attributes**.
+3. Select the **ArkoseSessionToken** claim.
- >[!NOTE]
- >If you have your own custom HTML, copy and paste the `<script>` elements onto your HTML page.
+ ![Screenshot of the Arkose Session Token under User attributes.](media/partner-arkose-labs/select-custom-attribute.png)
-3. Follow these steps to configure the page layouts
+### Configure custom HTML, JavaScript, and page layout
- a. From the Azure portal, go to **Azure AD B2C**
+1. Go to [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose/blob/main/Assets/selfAsserted.html).
+2. Find the HTML template with JavaScript `<script>` tags. These do three things:
- b. Navigate to **User flows** and select your user flow
+* Load the Arkose Labs script, which renders their widget and does client-side Arkose Labs validation.
+* Hide the `extension_ArkoseSessionToken` input element and label, corresponding to the `ArkoseSessionToken` custom attribute.
+* When a user completes the Arkose Labs challenge, the user response is verified and a token is generated. The callback `arkoseCallback` in the custom JavaScript sets the value of `extension_ArkoseSessionToken` to the generated token value. This value is submitted to the API endpoint (see the sketch after these steps).
- c. Select **Page layouts**
+ > [!NOTE]
+ > Go to developer.arkoselabs.com for [Client-Side Instructions](https://developer.arkoselabs.com/docs/standard-setup). Follow the steps to use the custom HTML and JavaScript for your user flow.
- d. Select **Local account sign up page layout**
+3. In Azure-Samples, modify the [selfAsserted.html](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose/blob/main/Assets/selfAsserted.html) file so `<ARKOSE_PUBLIC_KEY>` matches the value you generated for the client-side validation.
+4. Host the HTML page on a Cross-Origin Resource Sharing (CORS) enabled web endpoint (see the CLI sketch after these steps).
+5. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+6. Configure CORS for the account. See [CORS support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).
- e. Toggle **Use custom page content** to **YES**
+ >[!NOTE]
+ >If you have custom HTML, copy and paste the `<script>` elements onto your HTML page.
- f. Paste the URI where your custom HTML lives in **Use custom page content**
+7. In the Azure portal, go to **Azure AD B2C**.
+8. Navigate to **User flows**.
+9. Select your user flow.
+10. Select **Page layouts**.
+11. Select **Local account sign up page layout**.
+12. For **Use custom page content**, select **YES**.
+13. In **Use custom page content**, paste your custom HTML URI.
+14. (Optional) If you use social identity providers, repeat steps 12 and 13 for the **Social account sign-up page** layout.
- g. If you're using social Identity providers, repeat **step e** and **f** for **Social account sign-up page** layout.
+ ![Screenshot of Layout name options and Social account sign-up page options, under Page layouts.](media/partner-arkose-labs/page-layouts.png)
- ![image showing page layouts](media/partner-arkose-labs/page-layouts.png)
+15. From your user flow, go to **Properties**.
+16. Select **Enable JavaScript**.
-4. From your user flow, go to **Properties** and select **Enable JavaScript** enforcing page layout (preview). See this [article](./javascript-and-page-layout.md?pivots=b2c-user-flow) to learn more.
+Learn more: [Enable JavaScript and page layout versions in Azure Active Directory B2C](./javascript-and-page-layout.md?pivots=b2c-user-flow)
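The callback described in the steps above can be sketched as follows. This is a minimal illustration, not the sample's full script; the exact callback signature and element ID come from the Arkose Labs client-side setup and the sample's selfAsserted.html.

```javascript
// Minimal sketch: copy the Arkose Labs token into the hidden custom-attribute input
// so Azure AD B2C submits it with the sign-up form. Signature and element ID are assumptions.
function arkoseCallback(token) {
  document.getElementById('extension_ArkoseSessionToken').value = token;
}
```

For the CORS-enabled storage endpoint (steps 4 through 6), one way to allow the Azure AD B2C page to load your custom HTML is the Azure CLI sketch below; the account name and origin are placeholders for your own values.

```bash
# Allow the B2C sign-up page to fetch the custom HTML from blob storage
az storage cors add \
  --account-name <your-storage-account> \
  --services b \
  --methods GET OPTIONS \
  --origins "https://<your-tenant>.b2clogin.com" \
  --allowed-headers "*"
```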
-### Part 4 - Create and deploy your API
+### Create and deploy your API
-Install the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
+This section assumes you use Visual Studio Code to deploy Azure Functions. You can use the Azure portal, terminal, or command prompt to deploy.
->[!Note]
->Steps mentioned in this section assumes you are using Visual Studio Code to deploy the Azure Function. You can also use Azure portal, terminal or command prompt, or any other code editor to deploy.
+Go to Visual Studio Marketplace to install the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
#### Run the API locally
-1. Navigate to the Azure extension in Visual Studio code on the left navigation bar. Select **Local Project** folder representing your local Azure Function.
-
-2. Press **F5** or use the **Debug** > **Start Debugging** menu to launch the debugger and attach to the Azure Functions host. This command automatically uses the single debug configuration that Azure Function created.
-
-3. The Azure Function extension will automatically generate a few files for local development, install dependencies, and install the Function Core tools if not already present. These tools help with the debugging experience.
-
-4. Output from the Function Core tool appears in the Visual Studio Code **Terminal** panel. Once the host has started, **Alt+click** the local URL shown in the output to open the browser and run the function. In the Azure Functions explorer, right-click on the function to see the URL of the locally hosted function.
-
-To redeploy the local instance during testing, repeat steps 1 to 4.
+1. In Visual Studio Code, in the left navigation, go to the Azure extension.
+2. Select the **Local Project** folder for your local Azure Function.
+3. Press **F5** or select **Debug** > **Start Debugging**. This command uses the debug configuration that Azure Functions created.
+4. The Azure Functions extension generates files for local development, installs dependencies, and installs the Azure Functions Core Tools, if needed.
+5. In the Visual Studio Code **Terminal** panel, output from the Function Core tool appears.
+6. When the host starts, **Alt**+click the local URL in the output.
+7. The browser opens and runs the function.
+8. In the Azure Functions explorer, right-click the function to see the locally hosted function URL.
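If you prefer a terminal to the Visual Studio Code debugger, the Azure Functions Core Tools (installed in step 4) can start the host directly; a minimal sketch:

```bash
# Run from the root folder of the local Azure Functions project
func start
```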
#### Add environment variables
-This sample protects the web API endpoint using [HTTP Basic authentication](https://tools.ietf.org/html/rfc7617).
-
-Username and password are stored as environment variables and not as part of the repository. See [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) file for more information.
+The sample in this section protects the web API endpoint by using HTTP basic authentication. Learn more on the Internet Engineering Task Force page [RFC 7617: The 'Basic' HTTP Authentication Scheme](https://tools.ietf.org/html/rfc7617).
-1. Create a local.settings.json file in your root folder
+Username and password are stored as environment variables, not part of the repository. Learn more on [Code and test Azure Functions locally, Local settings file](../azure-functions/functions-develop-local.md#local-settings-file).
-2. Copy and paste the below code onto the file:
+1. In your root folder, create a local.settings.json file.
+2. Copy and paste the following code into the file:
``` {
Username and password are stored as environment variables and not as part of th
} } ```
-The **BASIC_AUTH_USERNAME** and **BASIC_AUTH_PASSWORD** values are going to be the credentials used to authenticate the API call to your Azure Function. Choose your desired values.
+3. The **BASIC_AUTH_USERNAME** and **BASIC_AUTH_PASSWORD** are the credentials to authenticate the API call to your Azure Function. Choose the values you want.
-The `<ARKOSE_PRIVATE_KEY>` is the server-side secret you generated in the Arkose Labs service. It's used to call the [Arkose Labs server-side validation API](https://arkoselabs.atlassian.net/wiki/spaces/DG/pages/266214758/Server-Side+Instructions) to validate the value of the `ArkoseSessionToken` generated by the front end.
+* `<ARKOSE_PRIVATE_KEY>` is the server-side secret you generated in the Arkose Labs platform.
+ * It calls the Arkose Labs server-side validation API to validate the value of the `ArkoseSessionToken` generated by the front end.
+ * See, [Server-Side Instructions](https://developer.arkoselabs.com/docs/server-side-instructions-v4).
+* `<B2C_EXTENSIONS_APP_ID>` is the application ID used by Azure AD B2C to store custom attributes in the directory.
-The `<B2C_EXTENSIONS_APP_ID>` is the application ID of the app used by Azure AD B2C to store custom attributes in the directory. You can find this application ID by navigating to App registrations, searching for b2c-extensions-app, and copying the Application (client) ID from the **Overview** pane. Remove the `-` characters.
+4. Navigate to App registrations.
+5. Search for b2c-extensions-app.
+6. From the **Overview** pane, copy the Application (client) ID.
+7. Remove the `-` characters.
-![Image shows search by app id](media/partner-arkose-labs/search-app-id.png)
+ ![Screenshot of the display name, application ID, and creation date under App registrations.](media/partner-arkose-labs/search-app-id.png)
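Assembled from the variables described above, a local.settings.json sketch might look like the following. The values are placeholders, and `FUNCTIONS_WORKER_RUNTIME: node` is an assumption based on the Node.js sample; the sample repository's file is authoritative.

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "BASIC_AUTH_USERNAME": "<username-you-choose>",
    "BASIC_AUTH_PASSWORD": "<password-you-choose>",
    "ARKOSE_PRIVATE_KEY": "<private-key-from-the-arkose-dashboard>",
    "B2C_EXTENSIONS_APP_ID": "<b2c-extensions-app-client-id-without-dashes>"
  }
}
```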
#### Deploy the application to the web
-1. Follow the steps mentioned in [this](/azure/javascript/tutorial-vscode-serverless-node-04) guide to deploy your Azure Function to the cloud. Copy the endpoint web URL of your Azure Function.
+1. Deploy your Azure Function to the cloud. For more information, see the [Azure Functions documentation](/azure/azure-functions/).
+2. Copy the endpoint web URL of your Azure Function.
+3. After deployment, select the **Upload settings** option.
+4. Your environment variables are uploaded to the Application settings of the app service. Learn more on [Application settings in Azure](../azure-functions/functions-develop-vs-code.md?tabs=csharp#application-settings-in-azure).
-2. Once deployed, select the **Upload settings** option. It will upload your environment variables onto the [Application settings](../azure-functions/functions-develop-vs-code.md?tabs=csharp#application-settings-in-azure) of the App service. These application settings can also be configured or [managed via the Azure portal.](../azure-functions/functions-how-to-use-azure-function-app-settings.md)
-
-See [this article](../azure-functions/functions-develop-vs-code.md?tabs=csharp#republish-project-files) to learn more about Visual Studio Code development for Azure Functions.
+ >[!NOTE]
+ >You can [manage your function app](../azure-functions/functions-how-to-use-azure-function-app-settings.md). See also, [Deploy project files](../azure-functions/functions-develop-vs-code.md?tabs=csharp#republish-project-files) to learn about Visual Studio Code development for Azure Functions.
#### Configure and enable the API connector
-[Create an API connector](./add-api-connector.md) and enable it for your user flow.
-Your API connector configuration should look like:
-
-![Image shows how to configure api connector](media/partner-arkose-labs/configure-api-connector.png)
+1. Create an API connector. See, [Add an API connector to a sign-up user flow](./add-api-connector.md).
+2. Enable it for your user flow.
-- **Endpoint URL** - is the Function URL you copied earlier while you deployed Azure Function.
+ ![Screenshot of Display name, Endpoint URL, Username, and Password on Configure and an API connector.](media/partner-arkose-labs/configure-api-connector.png)
-- **Username and Password** - are the Username and Password you defined as environment variables earlier.
+- **Endpoint URL** - The Function URL you copied while you deployed Azure Function
+- **Username** - The username you defined
+- **Password** - The password you defined
-To enable the API connector, in the **API connector** settings for your user flow, select the API connector to be invoked at the **Before creating the user** step. This will invoke the API when a user selects **Create** in the sign-up flow. The API will do a server-side validation of the `ArkoseSessionToken` value, which was set by the callback of the Arkose widget `arkoseCallback`.
+3. In the **API connector** settings for your user flow, select the API connector to be invoked at **Before creating the user**.
+4. The API validates the `ArkoseSessionToken` value.
-![Image shows enabling api connector](media/partner-arkose-labs/enable-api-connector.png)
+ ![Screenshot of the entry for Before creating the user, under API connectors.](media/partner-arkose-labs/enable-api-connector.png)
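As a rough sketch of the server-side check the function performs, the Node.js snippet below posts the token to an Arkose Labs verification endpoint. The endpoint URL, field names, and response shape are assumptions; follow the Arkose Labs server-side instructions and the sample repository for the exact call.

```javascript
// Hedged sketch of server-side verification of the ArkoseSessionToken value (Node.js).
// Pass the verification URL from the Arkose Labs server-side instructions for your account.
const https = require('https');

function verifyArkoseToken(verifyUrl, sessionToken) {
  const body = new URLSearchParams({
    private_key: process.env.ARKOSE_PRIVATE_KEY, // server-side secret from your environment variables
    session_token: sessionToken                  // value submitted by the sign-up form
  }).toString();

  return new Promise((resolve, reject) => {
    const req = https.request(verifyUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
    }, res => {
      let data = '';
      res.on('data', chunk => (data += chunk));
      res.on('end', () => resolve(JSON.parse(data)));
    });
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}
```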
## Test the user flow
-1. Open the Azure AD B2C tenant and under Policies select **User flows**.
-
-2. Select your previously created User Flow.
-
-3. Select **Run user flow** and select the settings:
-
- a. Application: select the registered app (sample is JWT)
-
- b. Reply URL: select the redirect URL
-
- c. Select **Run user flow**.
-
-4. Go through the sign-up flow and create an account
-
-5. Sign out
-
-6. Go through the sign-in flow
-
-7. An Arkose Labs puzzle will appear after you select **continue**.
-
-## Additional resources
--- [Sample codes](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) for Azure AD B2C sign-up user flow--- [Custom policies in Azure AD B2C](./custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+1. Open the Azure AD B2C tenant.
+2. Under **Policies**, select **User flows**.
+3. Select your created user flow.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (the example is JWT).
+6. For **Reply URL**, select the redirect URL.
+7. Select **Run user flow**.
+8. Perform the sign-up flow.
+9. Create an account.
+10. Sign out.
+11. Perform the sign-in flow.
+12. Select **Continue**.
+13. An Arkose Labs puzzle appears.
+
+## Resources
+
+- [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose)
+ - Find the Azure AD B2C sign-up user flow
+- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
description: Learn how to integrate Azure AD B2C authentication with whoIam for user verification -+ Previously updated : 09/13/2022 Last updated : 12/16/2022
-# Tutorial for extending Azure AD B2C to protect on-premises applications using Strata
+# Tutorial to configure Azure Active Directory B2C with Strata
-In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C with Strata's [Maverics Identity Orchestrator](https://www.strata.io/maverics-identity-orchestrator/).
-Maverics Identity Orchestrator extends Azure AD B2C to protect on-premises applications. It connects to any identity system, transparently migrates users and credentials, synchronizes policies and configurations, and abstracts authentication and session management. Using Strata enterprises can quickly transition from legacy to Azure AD B2C without rewriting applications. The solution has the following benefits:
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with Strata [Maverics Identity Orchestrator](https://www.strata.io/maverics-identity-orchestrator/), which helps protect on-premises applications. It connects to identity systems, migrates users and credentials, synchronizes policies and configurations, and abstracts authentication and session management. Use Strata to transition from legacy identity systems to Azure AD B2C without rewriting applications.
-- **Customer Single Sign-On (SSO) to on-premises hybrid apps**: Azure AD B2C supports customer SSO with Maverics Identity Orchestrator. Users sign in with their accounts that are hosted in Azure AD B2C or social Identity provider (IdP). Maverics extends SSO to apps that have been historically secured by legacy identity systems like Symantec SiteMinder.
+The solution has the following benefits:
-- **Extend standards-based SSO to apps without rewriting them**: Use Azure AD B2C to manage user access and enable SSO with Maverics Identity Orchestrator SAML or OIDC Connectors.--- **Easy configuration**: Azure AD B2C provides a simple step-by-step user interface for connecting Maverics Identity Orchestrator SAML or OIDC connectors to Azure AD B2C.
+- **Customer single sign-on (SSO) to on-premises hybrid apps** - Azure AD B2C supports customer SSO with Maverics Identity Orchestrator
+ - Users sign in with accounts hosted in Azure AD B2C or a social identity provider (IdP)
+ - Maverics extends SSO to apps historically secured by legacy identity systems like Symantec SiteMinder
+- **Extend standards SSO to apps** - Use Azure AD B2C to manage user access and enable SSO with Maverics Identity Orchestrator Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) connectors
+- **Easy configuration** - Connect Maverics Identity Orchestrator SAML or OIDC connectors to Azure AD B2C
## Prerequisites To get started, you'll need: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.--- An instance of [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) to store secrets that are used by Maverics Identity Orchestrator. It's used to connect to Azure AD B2C or other attribute providers such as a Lightweight Directory Access Protocol (LDAP) directory or database.--- An instance of [Maverics Identity Orchestrator](https://www.strata.io/maverics-identity-orchestrator/) that is installed and running in an Azure virtual machine or the on-premises server of your choice. For information about how to get the software and access to the installation and configuration documentation, contact [Strata](https://www.strata.io/contact/)--- An on-premises application that you'll transition from a legacy identity system to Azure AD B2C.
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- An instance of [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) to store secrets used by Maverics Identity Orchestrator. Connect to Azure AD B2C or other attribute providers such as a Lightweight Directory Access Protocol (LDAP) directory or database.
+- An instance of [Maverics Identity Orchestrator](https://www.strata.io/maverics-identity-orchestrator/) running in an Azure virtual machine (VM), or an on-premises server. To get software and documentation, go to strata.io [Contact Strata Identity](https://www.strata.io/contact/).
+- An on-premises application to transition to Azure AD B2C
## Scenario description
-Strata's Maverics integration includes the following components:
+Maverics Identity Orchestrator integration includes the following components:
-- **Azure AD B2C**: The authorization server that's responsible for verifying the user's credentials. Authenticated users may access on-premises apps using a local account stored in the Azure AD B2C directory.--- **An external social or enterprise IdP**: Could be any OpenID Connect provider, Facebook, Google, or GitHub. For more information, see [Add an identity provider](./add-identity-provider.md). --- **Strata's Maverics Identity Orchestrator**: The service that orchestrates user sign-on and transparently passes identity to apps through HTTP headers.
+- **Azure AD B2C** - The authorization server that verifies user credentials
+ - Authenticated users access on-premises apps using a local account in the Azure AD B2C directory
+- **External social or enterprise identity provider (IdP)**: An OIDC provider, Facebook, Google, or GitHub
+ - See, [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md)
+- **Strata Maverics Identity Orchestrator**: The user sign-on service that passes identity to apps through HTTP headers
The following architecture diagram shows the implementation.
-![Image show the architecture of an Azure AD B2C integration with Strata Maverics to enable access to hybrid apps](./media/partner-strata/strata-architecture-diagram.png)
-
-| Steps | Description |
-|:-|:|
-| 1. | The user makes a request to access the on-premises hosted application. Maverics Identity Orchestrator proxies the request made by the user to the application.|
-| 2. | The Orchestrator checks the user's authentication state. If it doesn't receive a session token, or the supplied session token is invalid, then it sends the user to Azure AD B2C for authentication.|
-| 3. | Azure AD B2C sends the authentication request to the configured social IdP.|
-| 4. | The IdP challenges the user for credentials. Depending on the IdP, the user may require to do Multi-factor authentication (MFA).|
-| 5. | The IdP sends the authentication response back to Azure AD B2C. Optionally, the user may create a local account in the Azure AD B2C directory during this step.|
-| 6. | Azure AD B2C sends the user request to the endpoint specified during the Orchestrator app's registration in the Azure AD B2C tenant.|
-| 7. | The Orchestrator evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the app. During this step, the Orchestrator may call out to additional attribute providers to retrieve the information needed to set the header values correctly. The Orchestrator sets the header values and sends the request to the app.|
-| 8. | The user is now authenticated and has access to the app.|
-
-## Get Maverics Identity Orchestrator
-To get the software you'll use to integrate your legacy on-premises app with Azure AD B2C, contact [Strata](https://www.strata.io/contact/). After you get the software, follow the steps below to determine Orchestrator-specific prerequisites and perform the required installation and configuration steps.
-
-## Configure your Azure AD B2C tenant
+ ![Diagram of the Azure AD B2C integration architecture, with Maverics Identity Orchestrator, for access to hybrid apps.](./media/partner-strata/strata-architecture-diagram.png)
-1. **Register your application**
+1. The user requests access to the on-premises hosted application. Maverics Identity Orchestrator proxies the request to the application.
+2. Orchestrator checks the user authentication state. If there's no session token, or the token is invalid, the user goes to Azure AD B2C for authentication.
+3. Azure AD B2C sends the authentication request to the configured social IdP.
+4. The IdP challenges the user for credentials. Multi-factor authentication (MFA) might be required.
+5. The IdP sends the authentication response to Azure AD B2C. The user can create a local account in the Azure AD B2C directory.
+6. Azure AD B2C sends the user request to the endpoint specified during the Orchestrator app registration in the Azure AD B2C tenant.
+7. The Orchestrator evaluates access policies and attribute values for HTTP headers forwarded to the app. Orchestrator might call to other attribute providers to retrieve information to set the header values. The Orchestrator sends the request to the app.
+8. The user is authenticated and has access to the app.
- a. [Register the Orchestrator as an application](./tutorial-register-applications.md?tabs=app-reg-ga) in Azure AD B2C tenant.
- >[!Note]
- >You'll need the tenant name and identifier, client ID, client secret, configured claims, and redirect URI later when you configure your Orchestrator instance.
+## Maverics Identity Orchestrator
- b. Grant Microsoft MS Graph API permissions to your applications. Your application will need the following permissions: `offline_access`, `openid`.
+To get software and documentation, go to strata.io [Contact Strata Identity](https://www.strata.io/contact/). Determine the Orchestrator prerequisites, then install and configure the software.
- c. Add a redirect URI for your application. This URI will match the `oauthRedirectURL` parameter of your Orchestrator's Azure AD B2C connector configuration, for example, `https://example.com/oidc-endpoint`.
+## Configure your Azure AD B2C tenant
-2. **Create a user flow**: Create a [sign-up and sign-in user flow](./tutorial-create-user-flows.md).
+During the following instructions, document:
-3. **Add an IdP**: Choose to sign in your user with either a local account or a social or enterprise [IdP](./add-identity-provider.md).
+* Tenant name and identifier
+* Client ID
+* Client secret
+* Configured claims
+* Redirect URI
-4. **Define user attributes**: Define the attributes to be collected during sign-up.
+1. [Register a web application in Azure Active Directory B2C](./tutorial-register-applications.md?tabs=app-reg-ga) in Azure AD B2C tenant.
+2. Grant Microsoft Graph API permissions to your applications. Use permissions: `offline_access`, `openid`.
+3. Add a redirect URI that matches the `oauthRedirectURL` parameter of the Orchestrator Azure AD B2C connector configuration, for example, `https://example.com/oidc-endpoint`.
+4. [Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md).
+5. [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md). Sign in your user with a local account, or a social or enterprise IdP.
+6. Define the attributes to be collected during sign-up.
+7. Specify attributes to be returned to the application with your Orchestrator instance.
-5. **Specify application claims**: Specify the attributes to be returned to the application via your Orchestrator instance. The Orchestrator consumes attributes from claims returned by Azure AD B2C and can retrieve additional attributes from other connected identity systems such as LDAP directories and databases. Those attributes are set in HTTP headers and sent to the upstream on-premises application.
+> [!NOTE]
+> The Orchestrator consumes attributes from claims returned by Azure AD B2C and can retrieve attributes from connected identity systems such as LDAP directories and databases. Those attributes are in HTTP headers and sent to the upstream on-premises application.
## Configure Maverics Identity Orchestrator
-In the following sections, we'll walk you through the steps required to configure your Orchestrator instance. For additional support and documentation, contact [Strata](https://www.strata.io/contact/).
+Use the instructions in the following sections to configure an Orchestrator instance.
### Maverics Identity Orchestrator server requirements You can run your Orchestrator instance on any server, whether on-premises or in a public cloud infrastructure by provider such as Azure, AWS, or GCP. -- OS: REHL 7.7 or higher, CentOS 7+--- Disk: 10 GB (small)--- Memory: 16 GB--- Ports: 22 (SSH/SCP), 443, 80--- Root access for install/administrative tasks--- Maverics Identity Orchestrator runs as user `maverics` under `systemd`--- Network egress from the server hosting Maverics Identity Orchestrator with the ability to reach your Azure AD tenant.
- **Operating System**: RHEL 7.7 or higher, CentOS 7+
+- **Disk**: 10 GB (small)
+- **Memory**: 16 GB
+- **Ports**: 22 (SSH/SCP), 443, 80
+- **Root access**: For install/administrative tasks
+- **Maverics Identity Orchestrator**: Runs as user `maverics` under `systemd`
- **Network egress**: From the server hosting Maverics Identity Orchestrator, with the ability to reach your Azure AD tenant
### Install Maverics Identity Orchestrator
-1. Obtain the latest Maverics RPM package. Place the package on the system on which you'd like to install Maverics. If you're copying the file to a remote host, [SCP](https://www.ssh.com/ssh/scp/) is a useful tool.
-
-2. To install the Maverics package, run the following command replacing your filename in place of `maverics.rpm`.
+1. Obtain the latest Maverics RPM package.
+2. Place the package on the system where you'd like to install Maverics. If you're copying to a remote host, use SSH [scp](https://www.ssh.com/ssh/scp/).
+3. Run the following command. Use your filename to replace `maverics.rpm`.
`sudo rpm -Uvf maverics.rpm`
- By default, Maverics is installed in the `/usr/local/bin` directory.
+ By default, Maverics is in the `/usr/local/bin` directory.
-3. After installing Maverics, it will run as a service under `systemd`. To verify Maverics service is running, run the following command:
+4. Maverics runs as a service under `systemd`.
+5. To verify the Maverics service is running, run the following command:
`sudo service maverics status`
- If the Orchestrator installation was successful, you should see a message similar to this:
+6. The following message (or similar) appears.
``` Redirecting to /bin/systemctl status maverics.service
Redirecting to /bin/systemctl status maverics.service
└─330772 /usr/local/bin/maverics --config /etc/maverics/maverics.yaml
```
-4. If the Maverics service fails to start, execute the following command to investigate the problem:
+> [!NOTE]
+> If Maverics fails to start, execute the following command:
`journalctl --unit=maverics.service --reverse`
- The most recent log entry will appear at the beginning of the output.
-
-After installing Maverics, the default `maverics.yaml` file is created in the `/etc/maverics` directory.
+ The most recent log entry appears at the beginning of the output.
-Configure your Orchestrator to protect the application. Integrate with Azure AD B2C, store, and retrieve secrets from [Azure Key Vault](https://azure.microsoft.com/services/key-vault/?OCID=AID2100131_SEM_bf7bdd52c7b91367064882c1ce4d83a9:G:s&ef_id=bf7bdd52c7b91367064882c1ce4d83a9:G:s&msclkid=bf7bdd52c7b91367064882c1ce4d83a9). Define the location where the Orchestrator should read its configuration from.
+7. The default `maverics.yaml` file is created in the `/etc/maverics` directory.
+8. Configure your Orchestrator to protect the application.
+9. Integrate with Azure AD B2C.
+10. Store and retrieve secrets from [Azure Key Vault](https://azure.microsoft.com/services/key-vault/?OCID=AID2100131_SEM_bf7bdd52c7b91367064882c1ce4d83a9:G:s&ef_id=bf7bdd52c7b91367064882c1ce4d83a9:G:s&msclkid=bf7bdd52c7b91367064882c1ce4d83a9).
+11. Define the location from where the Orchestrator reads its configuration.
### Supply configuration using environment variables
-Provide config to your Orchestrator instances through environment variables.
+Configure your Orchestrator instances with environment variables.
`MAVERICS_CONFIG`
-This environment variable tells the Orchestrator instance which YAML configuration files to use and where to find them during startup or restarts. Set the environment variable in `/etc/maverics/maverics.env`.
+This environment variable informs the Orchestrator instance what YAML configuration files to use, and where to find them during startup or restart. Set the environment variable in `/etc/maverics/maverics.env`.
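A minimal sketch of the environment file, assuming the default configuration path created by the installer:

```bash
# /etc/maverics/maverics.env
MAVERICS_CONFIG=/etc/maverics/maverics.yaml
```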
-### Create the Orchestrator's TLS configuration
+### Create the Orchestrator TLS configuration
-The `tls` field in your `maverics.yaml` declares the transport layer security configurations your Orchestrator instance will use. Connectors can use TLS objects and the Orchestrator server.
+The `tls` field in `maverics.yaml` declares the transport layer security configurations your Orchestrator instance uses. Both Connectors and the Orchestrator server can use TLS objects.
-The `maverics` key is reserved for the Orchestrator server. All other keys are available and can be used to inject a TLS object into a given connector.
+The `maverics` key is reserved for the Orchestrator server. Use other keys to inject a TLS object into a connector.
```yaml tls:
tls:
### Configure the Azure AD B2C Connector
-Orchestrators use Connectors to integrate with authentication and attribute providers. In this case, this Orchestrators App Gateway uses the Azure AD B2C connector as both an authentication and attribute provider. Azure AD B2C uses the social IdP for authentication and then acts as an attribute provider to the Orchestrator, passing attributes in claims set in HTTP headers.
+Orchestrators use Connectors to integrate with authentication and attribute providers. The Orchestrator App Gateway uses the Azure AD B2C connector as an authentication and attribute provider. Azure AD B2C uses the social IdP for authentication and then provides attributes to the Orchestrator, passing them in claims set in HTTP headers.
-This Connector's configuration corresponds to the app registered in the Azure AD B2C tenant.
+The Connector configuration corresponds to the app registered in the Azure AD B2C tenant.
-1. Copy the client ID, secret, and redirect URI from your app config in your tenant.
-
-2. Give your Connector a name, shown here as `azureADB2C`, and set the connector `type` to be `azure`. Take note of the Connector name as this value is used in other configuration parameters below.
-
-3. For this integration, the `authType` should be set to `oidc`.
-
-4. Set the client ID you copied in step 1 as the value for the `oauthClientID` parameter.
-
-5. Set the client secret you copied in step 1 as the value for the `oauthClientSecret` parameter.
-
-6. Set the redirect URI you copied in step 1 as the value for the `oauthRedirectURL` parameter.
-
-7. The Azure AD B2C OIDC Connector uses the well-known OIDC endpoint to discover metadata, including URLs and signing keys. Set the value of `oidcWellKnownURL` to your tenant's endpoint.
+1. From your app configuration in your tenant, copy the Client ID, Client secret, and redirect URI.
+2. Enter a Connector name (example is `azureADB2C`).
+3. Set the connector `type` to be `azure`.
+4. Make a note of the Connector name. You'll use this value in other configuration parameters.
+5. Set the `authType` to `oidc`.
+6. For the `oauthClientID` parameter, set the Client ID you copied.
+7. For the `oauthClientSecret` parameter, set the Client secret you copied.
+8. For the `oauthRedirectURL` parameter, set the redirect URI you copied.
+9. The Azure AD B2C OIDC Connector uses the well-known OIDC endpoint to discover metadata, including URLs and signing keys. Set `oidcWellKnownURL` to your tenant's well-known endpoint.
```yaml connectors:
connectors:
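For orientation, a connector entry assembled from the parameters in the preceding steps might look like the following sketch. The list structure and the placeholder values are assumptions; use the article's full sample and the Strata documentation for the exact schema.

```yaml
connectors:
  - name: azureADB2C
    type: azure
    authType: oidc
    oauthClientID: <APPLICATION_CLIENT_ID>
    oauthClientSecret: <AzureADB2CClientSecret>
    oauthRedirectURL: https://example.com/oidc-endpoint
    oidcWellKnownURL: https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<policy>/v2.0/.well-known/openid-configuration
```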
### Define Azure AD B2C as your authentication provider
-An authentication provider determines how to do authentication for a user who has not presented a valid session as part of the app resource request. Configuration in your Azure AD B2C tenant determines how to challenge a user for credentials and apply additional authentication policies. For example, to require a second factor to complete the authentication process and decide which claims should be returned to the Orchestrator App Gateway after authentication succeeds.
+An authentication provider determines authentication for users who don't present a valid session during an app resource request. Configuration in your Azure AD B2C tenant determines how users are challenged for credentials and which other authentication policies apply. For example, the configuration can require a second factor to complete authentication, and it determines which claims are returned to the Orchestrator App Gateway after authentication succeeds.
-The value for the `authProvider` must match your Connector's `name` value.
+The value for the `authProvider` must match your Connector `name` value.
```yaml authProvider: azureADB2C ```
-### Protect your on-premises app with an Orchestrator App Gateway
-
-The Orchestrator's App Gateway configuration declares how Azure AD B2C should protect your application and how users should access the app.
-
-1. Create a name for your App gateway. You can use a friendly name or fully qualified hostname as an identifier for your app.
+### Protect on-premises apps with an Orchestrator App Gateway
-2. Set the `location`. The example here uses the app's root `/`, however, can be any URL path of your application.
-
-3. Define the protected application in `upstream` using the host:port convention: `https://example.com:8080`.
+The Orchestrator App Gateway configuration declares how Azure AD B2C protects your application and how users access the app.
+1. Enter an App gateway name.
+2. Set the `location`. The example uses the app root `/`.
+3. Define the protected application in `upstream`. Use the host:port convention: `https://example.com:8080`.
4. Set the values for error and unauthorized pages.
-5. Define the HTTP header names and attribute values that must be provided to the application to establish authentication and control access to the app. Header names are arbitrary and typically correspond to the configuration of the app. Attribute values are namespaced by the Connector that supplies them. In the example below, the values returned from Azure AD B2C are prefixed with the Connector name `azureADB2C` where the suffix is the name of the attribute that contains the required value, for example `given_name`.
-
-6. Set the policies to be evaluated and enforced. Three actions are defined: `allowUnauthenticated`, `allowAnyAuthenticated`, and `allowIfAny`. Each action is associated to a `resource` and the policy is evaluated for that `resource`.
+5. Define the HTTP header names and attribute values for the application to establish authentication and control. Header names typically correspond to app configuration. Attribute values are namespaced by the Connector. In the example, values returned from Azure AD B2C are prefixed with the Connector name `azureADB2C`. The suffix is the attribute name with the required value, for example `given_name`.
+6. Set the policies. Three actions are defined: `allowUnauthenticated`, `allowAnyAuthenticated`, and `allowIfAny`. Each action is associated with a `resource`. Policy is evaluated for that `resource`.
>[!NOTE]
->Both `headers` and `policies` use JavaScript or GoLang service extensions to implement arbitrary logic that significantly enhances the default capabilities.
>`headers` and `policies` can use JavaScript or GoLang service extensions to implement arbitrary logic.
```yaml appgateways:
appgateways:
azureADB2C.customAttribute: Rewards Member ```
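Assembled only from the fields described in the preceding steps, an App Gateway entry might be sketched as follows. The nesting and key spellings are assumptions about the Maverics schema; the article's complete sample configuration is authoritative.

```yaml
appgateways:
  - name: sonar
    location: /
    upstream: https://example.com:8080
    policies:
      - resource: /
        allowAnyAuthenticated: true        # other actions described above: allowUnauthenticated, allowIfAny
    headers:
      firstname: azureADB2C.given_name     # header name is arbitrary; value is <connector name>.<attribute>
```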
-### Use Azure Key Vault as your secrets provider
+### Azure Key Vault as secrets provider
-It's important to secure the secrets your Orchestrator uses to connect to Azure AD B2C and any other identity system. Maverics will default to loading secrets in plain text out of `maverics.yaml`, however, in this tutorial, you'll use Azure Key Vault as the secrets provider.
+Secure the secrets your Orchestrator uses to connect to Azure AD B2C and other identity systems. By default, Maverics loads secrets in plain text from `maverics.yaml`; in this tutorial, use Azure Key Vault as the secrets provider.
-Follow the instructions to [create a new Key Vault](../key-vault/secrets/quick-create-portal.md) that your Orchestrator instance will use as a secrets provider. Add your secrets to your vault and take note of the `SECRET NAME` given to each secret. For example, `AzureADB2CClientSecret`.
+Follow the instructions in, [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md). Add your secrets to the vault and make a note of the `SECRET NAME` for each secret. For example, `AzureADB2CClientSecret`.
To declare a value as a secret in a `maverics.yaml` config file, wrap the secret with angle brackets:
connectors:
oauthClientSecret: <AzureADB2CClientSecret> ```
-The value specified within the angle brackets must correspond to the `SECRET NAME` given to secret in your Azure Key Vault.
+The value in the angle brackets must correspond to the `SECRET NAME` given to a secret in your Azure Key Vault.
-To load secrets from Azure Key Vault, set the environment variable `MAVERICS_SECRET_PROVIDER` in the file `/etc/maverics/maverics.env`, with the credentials found in the azure-credentials.json file, using the following pattern:
+To load secrets from Azure Key Vault, set the environment variable `MAVERICS_SECRET_PROVIDER` in the file `/etc/maverics/maverics.env`, with the credentials found in the azure-credentials.json file. Use the following pattern:
`MAVERICS_SECRET_PROVIDER='azurekeyvault://<KEYVAULT NAME>.vault.azure.net?clientID=<APPID>&clientSecret=<PASSWORD>&tenantID=<TENANT>'`
-### Put everything together
+### Complete the configuration
-Here is how the Orchestrator's configuration will appear when you complete the configurations outlined above.
+The following example shows how the completed Orchestrator configuration appears.
```yaml version: 0.4.2
appgateways:
## Test the flow

1. Navigate to the on-premises application URL, `https://example.com/sonar/dashboard`.
-2. The Orchestrator should redirect to the page you configured in your user flow.
-
-3. Select the IdP from the list on the page.
-
-4. Once you're redirected to the IdP, supply your credentials as requested, including an MFA token if required by that IdP.
-
-5. After successfully authenticating, you should be redirected to Azure AD B2C, which forwards the app request to the Orchestrator redirect URI.
-
-6. The Orchestrator evaluates policies, calculates headers, and sends the user to the upstream application.
-
-7. You should see the requested application.
+2. The Orchestrator redirects to the user flow page.
+3. From the list, select the IdP.
+4. Enter credentials, including an MFA token, if required by the IdP.
+5. You're redirected to Azure AD B2C, which forwards the app request to the Orchestrator redirect URI.
+6. The Orchestrator evaluates policies, and calculates headers.
+7. The requested application appears.
## Next steps
-For additional information, review the following articles:
--- [Custom policies in Azure AD B2C](./custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
description: In this tutorial, learn how to integrate Azure AD B2C authentication with WhoIAM for user verification. -+ Previously updated : 09/13/2022 Last updated : 12/19/2022
-# Tutorial for configuring WhoIAM with Azure Active Directory B2C
+# Tutorial to configure Azure Active Directory B2C with WhoIAM
-In this sample tutorial, we provide guidance on how to configure [WhoIAM](https://www.whoiam.ai/brims/) Branded Identity Management System (BRIMS) in your environment and integrate it with Active Directory B2C (Azure AD B2C).
+In this tutorial, learn how to configure WhoIAM Branded Identity Management System (BRIMS) in your environment and integrate it with Azure Active Directory B2C (Azure AD B2C). The BRIMS apps and services are deployed in your environment. They provide user verification with voice, SMS, and email. BRIMS works with your identity and access management solution and is platform-agnostic.
+
+Learn more: [WhoIAM, Products and Services, Branded Identity Management System](https://www.whoiam.ai/brims/)
-BRIMS is a set of apps and services that's deployed in your environment. It provides voice, SMS, and email verification of your user base. BRIMS works in conjunction with your existing identity and access management solution and is platform agnostic.
## Prerequisites To get started, you'll need: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- [An Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.--- A WhoIAM [trial account](https://www.whoiam.ai/contact-us/).
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- [An Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- A WhoIAM trial account
+ - Go to [WhoIAM, Contact us](https://www.whoiam.ai/contact-us/) to get started
## Scenario description The WhoIAM integration includes the following components: -- An Azure AD B2C tenant. It's the authorization server that verifies the user's credentials based on custom policies defined in it. It's also known as the identity provider.--- An administration portal for managing clients and their configurations.--- An API service that exposes various features through endpoints. --- Azure Cosmos DB, which acts as the back end for both the BRIMS administration portal and the API service.
+- **Azure AD B2C tenant** - The authorization server that verifies user credentials based on custom policies; also known as the identity provider (IdP)
+- **Administration portal** - To manage clients and their configurations
+- **API service** - To expose various features through endpoints
+- **Azure Cosmos DB** - The back end for the BRIMS administration portal and API service
-The following architecture diagram shows the implementation.
+The following diagram shows the implementation architecture.
-![Diagram of the architecture of Azure AD B2C integration with WhoIAM.](media/partner-whoiam/whoiam-architecture-diagram.png)
+ ![Diagram of Azure AD B2C integration with WhoIAM.](media/partner-whoiam/whoiam-architecture-diagram.png)
-|Step | Description |
-|:--| :--|
-| 1. | The user arrives at a page to start the sign-up or sign-in request to an app that uses Azure AD B2C as its identity provider.
-| 2. | As part of authentication, the user requests to either verify ownership of their email or phone or use their voice as a biometric verification factor.
-| 3. | Azure AD B2C makes a call to the BRIMS API service and passes on the user's email address, phone number, and voice recording.
-| 4. | BRIMS uses predefined configurations such as fully customizable email and SMS templates to interact with the user in their respective language in a way that's consistent with the app's style.
-| 5. | After a user's identity verification is complete, BRIMS returns a token to Azure AD B2C to indicate the outcome of the verification. Azure AD B2C then either grants the user access to the app or fails their authentication attempt.
+1. The user starts a sign-up or sign-in request to an app that uses Azure AD B2C as the IdP
+2. The user requests ownership verification of their email, phone, or they use voice as biometric verification
+3. Azure AD B2C calls to the BRIMS API service and passes the user attributes
+4. BRIMS interacts with the user in their own language
+5. After verification, BRIMS returns a token to Azure AD B2C, which grants or denies the user access to the app.
## Sign up with WhoIAM 1. Contact [WhoIAM](https://www.whoiam.ai/contact-us/) and create a BRIMS account.
+2. Configure the following Azure services:
-2. Use the sign-up guidelines made available to you and configure the following Azure
-
- - [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Used for secure storage of passwords, such as mail service passwords.
-
- - [Azure App Service](https://azure.microsoft.com/services/app-service/): Used to host the BRIMS API and admin portal services.
-
- - [Azure Active Directory](https://azure.microsoft.com/services/active-directory/): Used to authenticate administrative users for the admin portal.
-
- - [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/): Used to store and retrieve settings.
-
- - [Application Insights](../azure-monitor/app/app-insights-overview.md) (optional): Used to log in to both the API and the admin portal.
+ * [Key Vault](https://azure.microsoft.com/services/key-vault/): Store passwords
+ * [App Service](https://azure.microsoft.com/services/app-service/): Host the BRIMS API and admin portal services
+ * [Azure Active Directory](https://azure.microsoft.com/services/active-directory/): Authenticate administrative users for the portal
+ * [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/): Store and retrieve settings
+ * [Application Insights overview](../azure-monitor/app/app-insights-overview.md) (optional): Logging for the API and the portal
3. Deploy the BRIMS API and the BRIMS administration portal in your Azure environment.-
-4. Azure AD B2C custom policy samples are available in your BRIMS sign-up documentation. Follow the documentation to configure your app and use the BRIMS platform for user identity verification.
-
-For more information about WhoIAM's BRIMS, see the [product documentation](https://www.whoiam.ai/brims/).
+4. Follow the documentation to configure your app. Use BRIMS for user identity verification. Azure AD B2C custom policy samples are in the BRIMS sign-up documentation.
+For more information about WhoIAM BRIMS, see the [product documentation](https://www.whoiam.ai/brims/).
## Test the user flow
-1. Open the Azure AD B2C tenant. Under **Policies**, select **Identity Experience Framework**.
-
-2. Select your previously created **SignUpSignIn**.
-
-3. Select **Run user flow** and then:
-
- a. For **Application**, select the registered app (the sample is JWT).
+1. Open the Azure AD B2C tenant.
+2. Under **Policies**, select **Identity Experience Framework**.
+3. Select the created **SignUpSignIn**.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (example is JWT).
+6. For **Reply URL**, select the **redirect URL**.
+7. Select **Run user flow**.
+8. Complete the sign-up flow.
+9. Create an account.
+10. After the user attribute is created, the BRIMS service is called.
- b. For **Reply URL**, select the **redirect URL**.
-
- c. Select **Run user flow**.
-
-4. Go through the sign-up flow and create an account.
-
-5. The BRIMS service will be called during the flow, after the user attribute is created. If the flow is incomplete, check that the user isn't saved in the directory.
+> [!TIP]
+> If the flow is incomplete, confirm the user isn't saved in the directory.
## Next steps
-For additional information, review the following articles:
--- [Custom policies in Azure AD B2C](./custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Title: Tutorial - Configure Azure Active Directory B2C with Zscaler
+ Title: Tutorial - Configure Zscaler Private Access with Azure Active Directory B2C
+ description: Learn how to integrate Azure AD B2C authentication with Zscaler. -+ Previously updated : 09/13/2022 Last updated : 12/20/2022 # Tutorial: Configure Zscaler Private Access with Azure Active Directory B2C
-In this tutorial, you'll learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Zscaler Private Access (ZPA)](https://www.zscaler.com/products/zscaler-private-access). ZPA delivers policy-based, secure access to private applications and assets without the cost, hassle, or security risks of a virtual private network (VPN). The Zscaler secure hybrid access offering enables a zero-attack surface for consumer-facing applications when it's combined with Azure AD B2C.
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with Zscaler Private Access (ZPA). ZPA is policy-based, secure access to private applications and assets without the overhead or security risks of a virtual private network (VPN). Zscaler secure hybrid access reduces attack surface for consumer-facing applications when combined with Azure AD B2C.
+
+Learn more: Go to [Zscaler](https://www.zscaler.com/products/zscaler-private-access) and select Products & Solutions, Products.
## Prerequisites Before you begin, you'll need: -- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- [An Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription. -- [A ZPA subscription](https://azuremarketplace.microsoft.com/marketplace/apps/aad.zscalerprivateaccess?tab=Overview).
+- An Azure subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- [An Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- A ZPA subscription
+ - Go to [Azure Marketplace, Zscaler Private Access](https://azuremarketplace.microsoft.com/marketplace/apps/aad.zscalerprivateaccess?tab=Overview)
## Scenario description ZPA integration includes the following components: -- **Azure AD B2C**: The identity provider (IdP) that's responsible for verifying the userΓÇÖs credentials. It's also responsible for signing up a new user. -- **ZPA**: The service that's responsible for securing the web application by enforcing [zero-trust access](https://www.microsoft.com/security/blog/2018/12/17/zero-trust-part-1-identity-and-access-management/#:~:text=Azure%20Active%20Directory%20%28Azure%20AD%29%20provides%20the%20strong%2C,to%20express%20their%20access%20requirements%20in%20simple%20terms.). -- **The web application**: Hosts the service that the user is trying to access.
+- **Azure AD B2C** - The identity provider (IdP) that verifies user credentials
+- **ZPA** - Secures web applications by enforcing Zero Trust access
+ - See, [Zero Trust defined](https://www.microsoft.com/security/blog/2018/12/17/zero-trust-part-1-identity-and-access-management/#:~:text=Azure%20Active%20Directory%20%28Azure%20AD%29%20provides%20the%20strong%2C,to%20express%20their%20access%20requirements%20in%20simple%20terms)
+- **Web application** - Hosts the service users access
The following diagram shows how ZPA integrates with Azure AD B2C.
-![Diagram of Zscaler architecture, showing how ZPA integrates with Azure AD B2C.](media/partner-zscaler/zscaler-architecture-diagram.png)
-
-The sequence is described in the following table:
+ ![Diagram of Zscaler architecture, the ZPA and Azure AD B2C integration.](media/partner-zscaler/zscaler-architecture-diagram.png)
-|Step | Description |
-| :--:| :--|
-| 1 | A user arrives at a ZPA user portal or a ZPA browser-access application.
-| 2 | ZPA requires user context information before it can decide whether to allow the user to access the web application. To authenticate the user, ZPA performs a SAML redirect to the Azure AD B2C login page.
-| 3 | The user arrives at the Azure AB B2C login page. New users sign up to create an account, and existing users log in with their existing credentials. Azure AD B2C validates the user's identity.
-| 4 | Upon successful authentication, Azure AD B2C redirects the user back to ZPA along with the SAML assertion. ZPA verifies the SAML assertion and sets the user context.
-| 5 | ZPA evaluates access policies for the user. If the user is allowed to access the web application, the connection is allowed to pass through.
+1. A user arrives at the ZPA portal, or a ZPA browser-access application, to request access
+2. ZPA collects user attributes. ZPA performs a SAML redirect to the Azure AD B2C sign-in page.
+3. New users sign up and create an account. Current users sign in with credentials. Azure AD B2C validates user identity.
+4. Azure AD B2C redirects the user to ZPA with the SAML assertion, which ZPA verifies. ZPA sets the user context.
+5. ZPA evaluates access policies. The request is allowed or it isn't.
## Onboard to ZPA
-This tutorial assumes that you already have a working ZPA setup. If you're getting started with ZPA, refer to the [step-by-step configuration guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa).
-
-## Integrate ZPA with Azure AD B2C
-
-### Step 1: Configure Azure AD B2C as an IdP on ZPA
-
-To configure Azure AD B2C as an [IdP on ZPA](https://help.zscaler.com/zpa/configuring-idp-single-sign), do the following:
-
-1. Log in to the [ZPA Admin Portal](https://admin.private.zscaler.com).
-
-1. Go to **Administration** > **IdP Configuration**.
-
-1. Select **Add IdP Configuration**.
+This tutorial assumes ZPA is installed and running.
- The **Add IdP Configuration** pane opens.
+To get started with ZPA, go to help.zscaler.com for [Step-by-Step Configuration Guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa).
- ![Screenshot of the "IdP Information" tab on the "Add IdP Configuration" pane.](media/partner-zscaler/add-idp-configuration.png)
-
-1. Select the **IdP Information** tab, and then do the following:
-
- a. In the **Name** box, enter **Azure AD B2C**.
- b. Under **Single Sign-On**, select **User**.
- c. In the **Domains** drop-down list, select the authentication domains that you want to associate with this IdP.
-
-1. Select **Next**.
-
-1. Select the **SP Metadata** tab, and then do the following:
-
- a. Under **Service Provider URL**, copy or note the value for later use.
- b. Under **Service Provider Entity ID**, copy or note the value for later use.
-
- ![Screenshot of the "SP Metadata" tab on the "Add IdP Configuration" pane.](media/partner-zscaler/sp-metadata.png)
-
-1. Select **Pause**.
+## Integrate ZPA with Azure AD B2C
-After you've configured Azure AD B2C, the rest of the IdP configuration resumes.
+### Configure Azure AD B2C as an IdP on ZPA
-### Step 2: Configure custom policies in Azure AD B2C
+Configure Azure AD B2C as an IdP on ZPA.
->[!Note]
->This step is required only if you havenΓÇÖt already configured custom policies. If you already have one or more custom policies, you can skip this step.
+For more information, see [Configuring an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign).
-To configure custom policies on your Azure AD B2C tenant, see [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+1. Sign in to the [ZPA Admin portal](https://admin.private.zscaler.com).
+2. Go to **Administration** > **IdP Configuration**.
+3. Select **Add IdP Configuration**.
+4. The **Add IdP Configuration** pane appears.
-### Step 3: Register ZPA as a SAML application in Azure AD B2C
+ ![Screenshot of the IdP Information tab on the Add IdP Configuration pane.](media/partner-zscaler/add-idp-configuration.png)
-To configure a SAML application in Azure AD B2C, see [Register a SAML application in Azure AD B2C](./saml-service-provider.md).
+5. Select the **IdP Information** tab.
+6. In the **Name** box, enter **Azure AD B2C**.
+7. Under **Single Sign-On**, select **User**.
+8. In the **Domains** drop-down list, select the authentication domains to associate with the IdP.
+9. Select **Next**.
+10. Select the **SP Metadata** tab.
+11. Under **Service Provider URL**, copy the value to use later.
+12. Under **Service Provider Entity ID**, copy the value to use later.
-In step ["Upload your policy"](./saml-service-provider.md#upload-your-policy), copy or note the IdP SAML metadata URL that's used by Azure AD B2C. You'll need it later.
+ ![Screenshot of the Service Provider Entity ID option on the SP Metadata tab.](media/partner-zscaler/sp-metadata.png)
-Follow the instructions through step ["Configure your application in Azure AD B2C"](./saml-service-provider.md#configure-your-application-in-azure-ad-b2c). In step 4.2, update the app manifest properties as follows:
+13. Select **Pause**.
-- For **identifierUris**: Use the Service Provider Entity ID that you copied or noted earlier in "Step 1.6.b". -- For **samlMetadataUrl**: Skip this property, because ZPA doesn't host a SAML metadata URL. -- For **replyUrlsWithType**: Use the Service Provider URL that you copied or noted earlier in "Step 1.6.a". -- For **logoutUrl**: Skip this property, because ZPA doesn't support a logout URL.
+### Configure custom policies in Azure AD B2C
-The rest of the steps aren't relevant to this tutorial.
+>[!IMPORTANT]
+>If you haven't already configured custom policies in Azure AD B2C, configure them now.
-### Step 4: Extract the IdP SAML metadata from Azure AD B2C
+For more information, see [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
-Next, you need to obtain a SAML metadata URL in the following format:
+### Register ZPA as a SAML application in Azure AD B2C
-`https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/Samlp/metadata`
+1. [Register a SAML application in Azure AD B2C](./saml-service-provider.md).
+2. During registration, in **Upload your policy**, copy the IdP SAML metadata URL used by Azure AD B2C to use later.
+3. Follow the instructions until **Configure your application in Azure AD B2C**.
+4. For step 4.2, update the app manifest properties:
-Note that `<tenant-name>` is the name of your Azure AD B2C tenant, and `<policy-name>` is the name of the custom SAML policy that you created in the preceding step.
+ * For **identifierUris**, enter the Service Provider Entity ID you copied
+ * For **samlMetadataUrl**, skip this entry
+ * For **replyUrlsWithType**, enter the Service Provider URL you copied
+ * For **logoutUrl**, skip this entry
-For example, the URL might be:
+The remaining steps aren't required.
-`https://safemarch.b2clogin.com/safemarch.onmicrosoft.com/B2C_1A_signup_signin_saml/Samlp/metadata`.
+### Extract the IdP SAML metadata from Azure AD B2C
-Open a web browser and go to the SAML metadata URL. Right-click anywhere on the page, select **Save as**, and then save the file to your computer for use in the next step.
+1. Obtain a SAML metadata URL in the following format:
-### Step 5: Complete the IdP configuration on ZPA
+ `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/Samlp/metadata`
-Complete the [IdP configuration in the ZPA Admin Portal](https://help.zscaler.com/zpa/configuring-idp-single-sign) that you partially configured earlier in "Step 1: Configure Azure AD B2C as an IdP on ZPA".
+> [!NOTE]
+> `<tenant-name>` is your Azure AD B2C tenant, and `<policy-name>` is the custom SAML policy that you created.
+> The URL might be:
+> `https://safemarch.b2clogin.com/safemarch.onmicrosoft.com/B2C_1A_signup_signin_saml/Samlp/metadata`.
-1. In the [ZPA Admin Portal](https://admin.private.zscaler.com), go to **Administration** > **IdP Configuration**.
+2. Open a web browser.
+3. Go to the SAML metadata URL.
+4. Right-click on the page.
+5. Select **Save as**.
+6. Save the file to your computer to use later.
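As an alternative to saving the metadata from a browser (steps 2 through 6 above), the same document can be downloaded from the command line. This is a minimal sketch that assumes the example tenant and policy names shown in the note; substitute your own values before running it.

```bash
# Download the Azure AD B2C IdP SAML metadata document for upload to ZPA.
# "safemarch" and "B2C_1A_signup_signin_saml" are the example values from the
# note above - replace them with your own tenant and policy names.
curl -L -o b2c-idp-saml-metadata.xml \
  "https://safemarch.b2clogin.com/safemarch.onmicrosoft.com/B2C_1A_signup_signin_saml/Samlp/metadata"
```

The resulting `b2c-idp-saml-metadata.xml` file is the one you upload in the next section.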
-1. Select the IdP that you configured in "Step 1", and then select **Resume**.
+### Complete IdP configuration on ZPA
-1. On the **Add IdP Configuration** pane, select the **Create IdP** tab, and then do the following:
+To complete the IdP configuration:
- a. Under **IdP Metadata File**, upload the metadata file that you saved earlier in "Step 4: Extract the IdP SAML metadata from Azure AD B2C".
- b. Verify that the **Status** for the IdP configuration is **Enabled**.
- c. Select **Save**.
+1. Go to the [ZPA Admin portal](https://admin.private.zscaler.com).
+2. Select **Administration** > **IdP Configuration**.
+3. Select the IdP you configured, and then select **Resume**.
+4. On the **Add IdP Configuration** pane, select the **Create IdP** tab.
+5. Under **IdP Metadata File**, upload the metadata file you saved.
+6. Under **Status**, verify the configuration is **Enabled**.
+7. Select **Save**.
- ![Screenshot of the "Create IdP" tab on the "Add IdP Configuration" pane.](media/partner-zscaler/create-idp.png)
+ ![Screenshot of Enabled status, under SAML attributes, on the Add IdP Configuration pane.](media/partner-zscaler/create-idp.png)
## Test the solution
-Go to a ZPA user portal or a browser-access application, and test the sign-up or sign-in process. The test should result in a successful SAML authentication.
+To confirm SAML authentication, go to a ZPA user portal or a browser-access application, and test the sign-up or sign-in process.
## Next steps
-For more information, review the following articles:
--- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
- [Register a SAML application in Azure AD B2C](./saml-service-provider.md)-- [Step-by-step configuration guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa)-- [Configure an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign)
+- [Step-by-Step Configuration Guide for ZPA](https://help.zscaler.com/zpa/step-step-configuration-guide-zpa)
+- [Configuring an IdP for single sign-on](https://help.zscaler.com/zpa/configuring-idp-single-sign)
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
Previously updated : 01/11/2022 Last updated : 01/05/2023
Each SAML identity provider has different steps to expose and set the service pr
The following example shows a URL address to the SAML metadata of an Azure AD B2C technical profile:

```
-https://your-tenant-name.b2clogin.com/your-tenant-name/your-policy/samlp/metadata?idptp=your-technical-profile
+https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
```

Replace the following values:
active-directory-b2c Secure Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md
https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/
## Configure the inbound policy in Azure API Management
-You're now ready to add the inbound policy in Azure API Management that validates API calls. By adding a [JSON web token (JWT) validation](../api-management/api-management-access-restriction-policies.md#ValidateJWT) policy that verifies the audience and issuer in an access token, you can ensure that only API calls with a valid token are accepted.
+You're now ready to add the inbound policy in Azure API Management that validates API calls. By adding a [JSON web token (JWT) validation](../api-management/validate-jwt-policy.md) policy that verifies the audience and issuer in an access token, you can ensure that only API calls with a valid token are accepted.
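If you want to confirm the exact issuer value your policy emits before you configure the validation policy, the B2C OpenID Connect metadata endpoint returns it. This is a hedged sketch, not part of the official steps: the tenant and policy names are placeholders, and it assumes `curl` and `jq` are available.

```bash
# Query the OpenID Connect metadata for a B2C user flow or custom policy and
# print the issuer value that access tokens will carry. Both variable values
# are placeholders - replace them with your own tenant and policy names.
TENANT="your-tenant-name"
POLICY="B2C_1A_signup_signin"
curl -s "https://${TENANT}.b2clogin.com/${TENANT}.onmicrosoft.com/${POLICY}/v2.0/.well-known/openid-configuration" | jq -r '.issuer'
```

The printed issuer should match the value you configure in the JWT validation policy, alongside your API's application ID as the audience.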
1. In the [Azure portal](https://portal.azure.com), go to your Azure API Management instance. 1. Select **APIs**.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md)
+## December 2022
+
+### New articles
+
+- [Build a global identity solution with funnel-based approach](azure-ad-b2c-global-identity-funnel-based-design.md)
+- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](azure-ad-b2c-global-identity-proof-of-concept-funnel.md)
+- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](azure-ad-b2c-global-identity-proof-of-concept-regional.md)
+- [Build a global identity solution with region-based approach](azure-ad-b2c-global-identity-region-based-design.md)
+- [Azure Active Directory B2C global identity framework](azure-ad-b2c-global-identity-solutions.md)
+
+### Updated articles
+
+- [Set up a resource owner password credentials flow in Azure Active Directory B2C](add-ropc-policy.md)
+- [Use API connectors to customize and extend sign-up user flows and custom policies with external identity data sources](api-connectors-overview.md)
+- [Azure Active Directory B2C: Region availability & data residency](data-residency.md)
+- [Tutorial: Configure Experian with Azure Active Directory B2C](partner-experian.md)
+- [Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C](partner-dynamics-365-fraud-protection.md)
+- [Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access](partner-datawiza.md)
+- [Configure TheAccessHub Admin Tool with Azure Active Directory B2C](partner-n8identity.md)
+- [Tutorial: Configure Cloudflare Web Application Firewall with Azure Active Directory B2C](partner-cloudflare.md)
+- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
+- [What is Azure Active Directory B2C?](overview.md)
+- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
+ ## November 2022 ### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Application types that can be used in Active Directory B2C](application-types.md) - [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md) - [Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C](quickstart-native-app-desktop.md)-- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
+- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 08/17/2022 Last updated : 01/04/2023
This article shows you how to harden a managed domain by using settings such as:
- Disable NTLM password hash synchronization - Disable the ability to change passwords with RC4 encryption - Enable Kerberos armoring
+- LDAP signing
+- LDAP channel binding
## Prerequisites
To complete this article, you need the following resources:
1. Choose your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side, select **Security settings**. 1. Click **Enable** or **Disable** for the following settings:
- - **TLS 1.2 only mode**
- - **NTLM authentication**
- - **Password synchronization from on-premises**
- - **NTLM password synchronization from on-premises**
- - **RC4 encryption**
- - **Kerberos armoring**
+ - **TLS 1.2 Only Mode**
+ - **NTLM v1 Authentication**
+ - **NTLM Password Synchronization**
+ - **Kerberos RC4 Encryption**
+ - **Kerberos Armoring**
+ - **LDAP Signing**
+ - **LDAP Channel Binding**
![Screenshot of Security settings to disable weak ciphers and NTLM password hash sync](media/secure-your-domain/security-settings.png)
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 06/16/2022 Last updated : 01/04/2023 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it. > [!IMPORTANT]
-> After you create a managed domain, you can't then move the managed domain to a different resource group, virtual network, subscription, etc. Take care to select the most appropriate subscription, resource group, region, and virtual network when you deploy the managed domain.
+> After you create a managed domain, you can't move it to a different subscription, resource group, or region. Take care to select the most appropriate subscription, resource group, and region when you deploy the managed domain.
## Sign in to the Azure portal
Some considerations for this dedicated virtual network subnet include the follow
* The subnet must have at least 3-5 available IP addresses in its address range to support the Azure AD DS resources. * Don't select the *Gateway* subnet for deploying Azure AD DS. It's not supported to deploy Azure AD DS into a *Gateway* subnet. * Don't deploy any other virtual machines to the subnet. Applications and VMs often use network security groups to secure connectivity. Running these workloads in a separate subnet lets you apply those network security groups without disrupting connectivity to your managed domain.
-* You can't move your managed domain to a different virtual network after you enable Azure AD DS.
For more information on how to plan and configure the virtual network, see [networking considerations for Azure Active Directory Domain Services][network-considerations].
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 06/16/2022 Last updated : 01/04/2023 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To complete this tutorial, you need the following resources and privileges:
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it. > [!IMPORTANT]
-> You can't move the managed domain to a different subscription, resource group, region, virtual network, or subnet after you create it. Take care to select the most appropriate subscription, resource group, region, virtual network, and subnet when you deploy the managed domain.
+> You can't move the managed domain to a different subscription, resource group, or region after you create it. Take care to select the most appropriate subscription, resource group, and region when you deploy the managed domain.
## Sign in to the Azure portal
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Previously updated : 12/06/2022 Last updated : 01/07/2023 +
Tenants are set to either Pre-migration or Migration in Progress by default, dep
:::image type="content" border="true" source="./media/concept-authentication-methods-manage/reason.png" alt-text="Screenshot of reasons for rollback.":::
-## Known issues
-
-* Currently, all users must be enabled for at least one MFA method that isn't passwordless and the user can register in interrupt mode. Possible methods include Microsoft Authenticator, SMS, voice calls, and software OATH/mobile app code. The method(s) can be enabled in any policy. If a user is not eligible for at least one of those methods, the user will see an error during registration and when visiting My Security Info. We're working to improve this experience to enable fully passwordless configurations.
+>[!NOTE]
+>After all authentication methods are fully migrated, the following elements of the legacy SSPR policy remain active:
+> - The **Number of methods required to reset** control: admins can continue to change how many authentication methods must be verified before a user can perform SSPR.
+> - The SSPR administrator policy: admins can continue to register and use any methods listed under the legacy SSPR administrator policy or methods they're enabled to use in the Authentication methods policy.
+>
+> In the future, both of these features will be integrated with the Authentication methods policy.
## Next steps
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
||:--:|::|::|::|:--:|--| | AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 | | Ciright | ![n] | ![n]| ![y]| ![n]| ![n] | https://www.cyberonecard.com/ |
+| Crayonic | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.crayonic.com/keyvault |
| Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact | | Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html | | Feitian | ![y] | ![y]| ![y]| ![y]| ![y] | https://shop.ftsafe.us/pages/microsoft |
The following providers offer FIDO2 security keys of different form factors that
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | | HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
+| Identiv | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.identiv.com/products/logical-access-control/utrust-fido2-security-keys/nfc |
| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | | Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | | KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
+| Movenda | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.movenda.com/en/authentication/fido2/overview |
| NeoWave | ![n] | ![y]| ![y]| ![n]| ![n] | https://neowave.fr/en/products/fido-range/ | | Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band | | Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
The following providers offer FIDO2 security keys of different form factors that
| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ | - <!--Image references--> [y]: ./media/fido2-compatibility/yes.png [n]: ./media/fido2-compatibility/no.png
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
The following table provides a list of the features that are available in the va
| Access Reviews | | | | | ● |
| Entitlements Management | | | | | ● |
| Privileged Identity Management (PIM), just-in-time access | | | | | ● |
+| Lifecycle Workflows (preview) | | | | | ● |
## Compare multi-factor authentication policies
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
Previously updated : 12/12/2022 Last updated : 01/07/2023 +
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 12/14/2022 Last updated : 01/06/2023
To create the registry key that overrides push notifications:
Value = TRUE 1. Restart the NPS Service.
-If you're using Remote Desktop Gateway, the user account must be configured for phone verification, or Microsoft Authenticator push notifications. If neither option is configured, the user won't be able to meet the Azure AD MFA challenge, and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE.
+If you're using Remote Desktop Gateway and the user is registered for OTP codes along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to push notifications with Microsoft Authenticator.
### Apple Watch supported for Microsoft Authenticator
They'll see a prompt to supply a verification code. They must select their accou
### Can I opt out of number matching?
-Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. Microsoft will enable number matching for all tenants by February 27, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
+Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. To protect the ecosystem and mitigate these threats, Microsoft will enable number matching for all tenants starting February 27, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
### Does number matching only apply if Microsoft Authenticator is set as the default authentication method?
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 06/23/2022 Last updated : 01/05/2023
Yes. If they have been scoped for the nudge using the policy.
It's the same as snoozing.
+**Why don't some users see a nudge when there is a conditional access policy for "Register security information"?**
+
+A nudge won't appear if a user is in scope for a conditional access policy that blocks access to the **Register security information** page.
## Next steps
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Previously updated : 02/22/2022 Last updated : 01/05/2023
The following more specific issues may occur with password writeback. If you hav
| Federated, pass-through authentication, or password-hash-synchronized users who attempt to reset their passwords, see an error after they submit their password. The error indicates that there was a service problem. <br> <br> In addition to this problem, during password reset operations, you might see an error in your event logs from the Azure AD Connect service indicating an "Object could not be found" error. | This error usually indicates that the sync engine is unable to find either the user object in the Azure AD connector space or the linked metaverse (MV) or Azure AD connector space object. <br> <br> To troubleshoot this problem, make sure that the user is indeed synchronized from on-premises to Azure AD via the current instance of Azure AD Connect and inspect the state of the objects in the connector spaces and MV. Confirm that the Active Directory Certificate Services (AD CS) object is connected to the MV object via the "Microsoft.InfromADUserAccountEnabled.xxx" rule.| | Federated, pass-through authentication, or password-hash-synchronized users who attempt to reset their passwords see an error after they submit their password. The error indicates that there was a service problem. <br> <br> In addition to this problem, during password reset operations, you might see an error in your event logs from the Azure AD Connect service that indicates that there's a "Multiple matches found" error. | This indicates that the sync engine detected that the MV object is connected to more than one AD CS object via "Microsoft.InfromADUserAccountEnabled.xxx". This means that the user has an enabled account in more than one forest. This scenario isn't supported for password writeback. | | Password operations fail with a configuration error. The application event log contains Azure AD Connect error 6329 with the text "0x8023061f (The operation failed because password synchronization is not enabled on this Management Agent)". | This error occurs if the Azure AD Connect configuration is changed to add a new Active Directory forest (or to remove and readd an existing forest) after the password writeback feature has already been enabled. Password operations for users in these recently added forests fail. To fix the problem, disable and then re-enable the password writeback feature after the forest configuration changes have been completed.
-| SSPR_0029: We are unable to reset your password due to an error in your on-premises configuration. Please contact your admin and ask them to investigate. | Problem: Password writeback has been enabled following all of the required steps, but when attempting to change a password you receive "SSPR_0029: Your organization hasnΓÇÖt properly set up the on-premises configuration for password reset." Checking the event logs on the Azure AD Connect system shows that the management agent credential was denied access.Possible Solution: Use RSOP on the Azure AD Connect system and your domain controllers to see if the policy "Network access: Restrict clients allowed to make remote calls to SAM" found under Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options is enabled. Edit the policy to include the MSOL_XXXXXXX management account as an allowed user. |
+| SSPR_0029: We are unable to reset your password due to an error in your on-premises configuration. Please contact your admin and ask them to investigate. | Problem: Password writeback has been enabled following all of the required steps, but when attempting to change a password you receive "SSPR_0029: Your organization hasn't properly set up the on-premises configuration for password reset." Checking the event logs on the Azure AD Connect system shows that the management agent credential was denied access. Possible Solution: Use RSOP on the Azure AD Connect system and your domain controllers to see if the policy "Network access: Restrict clients allowed to make remote calls to SAM" found under Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options is enabled. Edit the policy to include the MSOL_XXXXXXX management account as an allowed user. For more information, see [Troubleshoot error SSPR_0029: Your organization hasn't properly set up the on-premises configuration for password reset](/troubleshoot/azure/active-directory/password-writeback-error-code-sspr-0029).|
## Password writeback event log error codes
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Previously updated : 08/22/2022 Last updated : 01/06/2023
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Configure user risk levels needed for policy to be enforced**, select **High**. 1. Select **Done**. 1. Under **Access controls** > **Grant**.
- 1. Select **Grant access**, **Require password change**.
+ 1. Select **Grant access**, **Require multifactor authentication** and **Require password change**.
1. Select **Select**. 1. Under **Session**. 1. Select **Sign-in frequency**.
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Previously updated : 06/09/2022 Last updated : 01/05/2023
There are multiple sign-in requests for each authentication. Some will be shown
### Searching for specific sign-in attempts
-Use filters to narrow your search. For example, if a user signed in to Teams, use the Application filter and set it to Teams. Admins may need to check the sign-ins from both interactive and non-interactive tabs to locate the specific sign-in. To further narrow the search, admins may apply multiple filters.
+Sign-in logs contain information on both success and failure events. Use filters to narrow your search. For example, if a user signed in to Teams, use the Application filter and set it to Teams. Admins may need to check the sign-ins from both interactive and non-interactive tabs to locate the specific sign-in. To further narrow the search, admins may apply multiple filters.
## Continuous access evaluation workbooks
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Previously updated : 08/15/2022 Last updated : 01/09/2023
If you select **Determine location by IP address (IPv4 only)**, the system will
If you select **Determine location by GPS coordinates**, the user will need to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system will contact the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
-The first time the user is required to share their location from the Microsoft Authenticator app, the user will receive a notification in the app. The user will need to open the app and grant location permissions.
+The first time the user is required to share their location from the Microsoft Authenticator app, the user will receive a notification in the app. The user will need to open the app and grant location permissions.
-For the next 24 hours, if the user is still accessing the resource and granted the app permission to run in the background, the device's location is shared silently once per hour.
--- After 24 hours, the user must open the app and approve the notification. -- Users who have number matching or additional context enabled in the Microsoft Authenticator app won't receive notifications silently and must open the app to approve notifications.
+Every hour that the user accesses resources covered by the policy, they need to approve a push notification from the app.
Every time the user shares their GPS location, the app does jailbreak detection (Using the same logic as the Intune MAM SDK). If the device is jailbroken, the location isn't considered valid, and the user isn't granted access.
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 11/21/2022 Last updated : 01/05/2023 -+ # Conditional Access for workload identities
-Conditional Access policies have historically applied only to users when they access apps and services like SharePoint online or the Azure portal. We are now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
+Conditional Access policies have historically applied only to users when they access apps and services like SharePoint online or the Azure portal. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
A [workload identity](../develop/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
A [workload identity](../develop/workload-identities-overview.md) is an identity
These differences make workload identities harder to manage and put them at higher risk for compromise. > [!IMPORTANT]
-> Conditional Access policies can be scoped to service principals in Azure AD with Workload Identities Premium licenses.
+> Workload Identities Premium licenses are required to create or modify Conditional Access policies scoped to service principals.
+> In directories without appropriate licenses, Conditional Access policies created prior to the release of Workload Identities Premium will be available for deletion only.
> [!NOTE] > Policy can be applied to single tenant service principals that have been registered in your tenant. Third party SaaS and multi-tenanted apps are out of scope. Managed identities are not covered by policy.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Previously updated : 7/20/2022 Last updated : 12/28/2022 -
A *non-password-based* login is one where the user didn't type in a password to
- Voice - PIN
-Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) for more details on primary refresh tokens.
+For more information, see [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md).
## Next steps
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Previously updated : 10/21/2022 Last updated : 01/06/2023
In this article, we walk through a few common scenarios that can help you unders
In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you're new to Azure Active Directory (Azure AD), we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
-When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use _ExtensionID_ for the extension attribute instead of _ID_ in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
+When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use _ExtensionID_ for the extension attribute instead of _ID_ in the `ClaimsSchema` element. For more information about using extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
active-directory Active Directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-jwt-claims-customization.md
+
+ Title: Customize app JSON Web Token (JWT) claims (Preview)
+description: Learn how to customize the claims issued by Microsoft identity platform in the JSON web token (JWT) token for enterprise applications.
+++++++ Last updated : 12/19/2022++++
+# Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)
+
+The Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. The application then validates the token and uses it to sign the user in, instead of prompting for a username and password.
+
+These JSON Web tokens (JWT) used by OIDC & OAuth applications (preview) contain pieces of information about the user known as *claims*. A *claim* is information that an identity provider states about a user inside the token they issue for that user.
+
+In an [OIDC response](v2-protocols-oidc.md), *claims* data is typically contained in the ID Token issued by the identity provider in the form of a JWT.
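If you already have an ID token and want to inspect the claims it carries, the payload segment of a JWT is base64url-encoded JSON and can be decoded locally. This sketch is illustrative only and isn't part of the configuration steps; it assumes the raw token is in the `ID_TOKEN` environment variable and that `jq` is installed (drop the final `jq .` if it isn't).

```bash
# Decode the payload (second dot-separated segment) of a JWT to view its claims.
# Never paste production tokens into shared terminals or logs.
payload=$(printf '%s' "$ID_TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# base64url omits the trailing padding, so restore it before decoding.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 --decode | jq .
```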
+
+## View or edit claims
+
+Besides [optional claims](active-directory-optional-claims.md), you can view, create, or edit the attributes and claims issued in the OIDC token to the application. To edit claims, open the application in the Azure portal through the Enterprise Applications experience. Then select the **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section.
++
+Claims customization may be required for various reasons by an application. A good example is when an application has been written to require a different set of claim URIs or claim values. Using the **Attributes & Claims** section you can add or remove a claim for your application. You can also create a custom claim that is specific for an application based on the use case.
+
+You can also assign any constant (static) value to any claims, which you define in Azure AD. The following steps outline how to assign a constant value:
+
+1. In the [Azure portal](https://portal.azure.com/), on the **Attributes & Claims** section, Select **Edit** to edit the claims.
+1. Select the required claim that you want to modify.
+1. Enter the constant value without quotes in the **Source attribute** as per your organization, and then select **Save**.
++
+The constant value is displayed on the Attributes overview.
++
+## Special claims transformations
+
+You can use the following special claims transformations functions.
+
+| Function | Description |
+|-|-|
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
+| **ToLower()** | Converts the characters of the selected attribute into lowercase characters. |
+| **ToUpper()** | Converts the characters of the selected attribute into uppercase characters. |
+
+## Add application-specific claims
+
+To add application-specific claims:
+
+1. In **User Attributes & Claims**, select **Add new claim** to open the **Manage user claims** page.
+1. Enter the **name** of the claim. The value doesn't strictly need to follow a URI pattern. If you need a URI pattern, you can put that in the **Namespace** field.
+1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
+
+### Claim transformations
+
+To apply a transformation to a user attribute:
+
+1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page.
+1. Select the function from the transformation dropdown. Depending on the function selected, you'll have to provide parameters and a constant value to evaluate in the transformation. Refer to the following table for more information about the available functions.
+1. **Treat source as multivalued** is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are applied only to the first element in a multi-value claim; selecting this checkbox ensures they're applied to all values. The checkbox is only enabled for multi-valued attributes, for example `user.proxyaddresses`.
+1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
+
+ :::image type="content" source="./media/active-directory-jwt-claims-customization/sso-saml-multiple-claims-transformation.png" alt-text="Screenshot of claims transformation.":::
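As a plain illustration of the two-step example above (extract the mail prefix, then uppercase it), the equivalent string operations look like the following. This isn't how Azure AD evaluates the transformation; it only shows the expected output for a sample value.

```bash
# Emulate ExtractMailPrefix() followed by ToUppercase() on a sample address.
# "joe_smith@contoso.com" is an illustrative value, not data from a tenant.
printf '%s\n' "joe_smith@contoso.com" | cut -d '@' -f1 | tr '[:lower:]' '[:upper:]'
# Prints: JOE_SMITH
```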
+
+You can use the following functions to transform claims.
+
+| Function | Description |
+|-|-|
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this input combination results in 'joe_smith@fabrikam.com'. |
+| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. |
+| **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
+| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
+| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon", the matching value is "Finance_", then the claim's output is "BSimon". |
+| **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "BSimon_US", the matching value is "_US", then the claim's output is "BSimon". |
+| **Extract() - Between matching** | Returns the substring between the first and second matching values.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance\_", the second matching value is "\_US", then the claim's output is "BSimon". |
+| **ExtractAlpha() - Prefix** | Returns the prefix alphabetical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "BSimon". |
+| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is "123_Simon", then it returns "Simon". |
+| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". |
+| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "123". |
+| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
+| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user isn't empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
+| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Length - 11<br/>Output: ExtractThis |
+| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
+| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br/>- Parameter 1: a user attribute as regex input<br/>- An option to trust the source as multivalued<br/>- Regex pattern<br/>- Replacement pattern. The replacement pattern may contain static text format along with a reference that points to regex output groups and more input parameters.<br/><br/>More instructions about how to use the RegexReplace() transformation are described later in this article. |
+
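+As a rough illustration only (not the platform's implementation), the following Python sketch approximates two behaviors from the preceding table: the domain-stripping rule that Join() applies to NameID transformations, and chaining ExtractMailPrefix() with ToUppercase():
+
+```python
+# Illustrative only: approximate behavior, not the Azure AD implementation.
+def join_nameid(value: str, separator: str, parameter: str) -> str:
+    # For NameID transformations, Join() drops the domain part of the input
+    # before appending the separator and the selected parameter.
+    local_part = value.split("@", 1)[0]
+    return f"{local_part}{separator}{parameter}"
+
+def extract_mail_prefix(value: str) -> str:
+    return value.split("@", 1)[0]
+
+# Join() example from the table: 'joe_smith@contoso.com' + '@' + 'fabrikam.com'.
+print(join_nameid("joe_smith@contoso.com", "@", "fabrikam.com"))  # joe_smith@fabrikam.com
+
+# Chaining two transformations: extract the email prefix, then uppercase it.
+print(extract_mail_prefix("joe_smith@contoso.com").upper())  # JOE_SMITH
+```
+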
+If you need other transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
+
+## Regex-based claims transformation
+
+The following image shows an example of the first level of transformation:
++
+The following table provides information about the first level of transformations. The actions listed in the table correspond to the labels in the previous image. Select **Edit** to open the claims transformation blade.
+
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Transformation | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
+| 2 | Parameter 1 | The input for the regular expression transformation. For example, user.mail that has a user email address such as `admin@fabrikam.com`. |
+| 3 | Treat source as multivalued | Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and the user wants to use multiple values for the transformation, they need to select **Treat source as multivalued**. If selected, all values are used for the regex match, otherwise only the first value is used. |
+| 4 | Regex pattern | A regular expression that is evaluated against the value of user attribute selected as *Parameter 1*. For example a regular expression to extract the user alias from the user's email address would be represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
+| 5 | Add additional parameter | More than one user attribute can be used for the transformation. The values of the attributes would then be merged with regex transformation output. Up to five additional parameters are supported. |
+| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for the regex outcome. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+
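+The following Python sketch approximates the first-level RegexReplace() example from the preceding table; it isn't the Azure AD implementation. The pattern in the table uses the `(?'domain'...)` named-group syntax, which the sketch translates to Python's `(?P<domain>...)` form, and the `swmal@fabrikam.com` input value is a hypothetical example:
+
+```python
+import re
+
+# Illustrative only: a rough approximation of the first-level RegexReplace() example.
+regex_pattern = r"(?P<domain>^.*?)@fabrikam\.com$"
+replacement_pattern = "{country}.{domain}@xyz.com"
+
+user_mail = "swmal@fabrikam.com"   # hypothetical value of Parameter 1 (user.mail)
+user_country = "US"                # hypothetical additional parameter (user.country)
+
+match = re.match(regex_pattern, user_mail, flags=re.IGNORECASE)
+if match:
+    # Merge the named regex group and the additional parameter into the template.
+    claim_value = replacement_pattern.format(country=user_country, **match.groupdict())
+else:
+    # No match: the default claim value is used instead of the transformation output.
+    claim_value = user_mail
+
+print(claim_value)  # US.swmal@xyz.com
+```
+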
+The following image shows an example of the second level of transformation:
++
+The following table provides information about the second level of transformations. The actions listed in the table correspond to the labels in the previous image.
+
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Transformation | Regex-based claims transformations aren't limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation. |
+| 2 | Parameter 1 | If **RegexReplace()** is selected as a second level transformation, output of first level transformation is used as an input for the second level transformation. The second level regex expression should match the output of the first transformation or the transformation won't be applied. |
+| 3 | Regex pattern | **Regex pattern** is the regular expression for the second level transformation. |
+| 4 | Parameter input | User attribute inputs for the second level transformations. |
+| 5 | Parameter input | Administrators can delete the selected input parameter if they don't need it anymore. |
+| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for the regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+| 7 | Test transformation | The RegexReplace() transformation is evaluated only if the value of the selected user attribute for *Parameter 1* matches with the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When additional input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
+
+The following image shows an example of testing the transformations:
++
+The following table provides information about testing the transformations. The actions listed in the table correspond to the labels in the previous image.
+
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Test transformation | Select the close (X) button to hide the test section and show the **Test transformation** button on the blade again. |
+| 2 | Test regex input | Accepts the input that is used for the regular expression test evaluation. If the regex-based claims transformation is configured as a second-level transformation, provide a value that would be the expected output of the first transformation. |
+| 3 | Run test | After the test regex input is provided and the **Regex pattern**, **Replacement pattern** and **Input parameters** are configured, the expression can be evaluated by selecting **Run test**. |
+| 4 | Test transformation result | If evaluation succeeds, an output of test transformation will be rendered against the **Test transformation result** label. |
+| 5 | Remove transformation | The second level transformation can be removed by selecting **Remove transformation**. |
+| 6 | Specify output if no match | When the regex input value configured for *Parameter 1* doesn't match the **Regex pattern**, the transformation is skipped. In such cases, an alternate user attribute can be configured and added to the token for the claim by checking **Specify output if no match**. |
+| 7 | Parameter 3 | If an alternate user attribute needs to be returned when there's no match and **Specify output if no match** is checked, an alternate user attribute can be selected using the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
+| 8 | Summary | At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text. |
+| 9 | Add | After the configuration settings for the transformation are verified, it can be saved to a claims policy by selecting **Add**. Changes won't be saved unless **Save** is selected on the **Manage Claim** blade. |
+
+RegexReplace() transformation is also available for the group claims transformations.
+
+### Transformation validations
+
+When the following conditions occur after **Add** or **Run test** is selected, a message is displayed that provides more information about the issue:
+
+* Input parameters with duplicate user attributes aren't allowed.
+* Unused input parameters are found. Defined input parameters must be referenced in the replacement pattern text.
+* The provided test regex input doesn't match the provided regular expression.
+* The source for a group referenced in the replacement pattern isn't found.
+
+## Emit claims based on conditions
+
+You can specify the source of a claim based on user type and the group to which the user belongs.
+
+The user type can be:
+
+* **Any**: All users are allowed to access the application.
+* **Members**: Native member of the tenant.
+* **All guests**: User is brought over from an external organization with or without Azure AD.
+* **AAD guests**: Guest user belongs to another organization using Azure AD.
+* **External guests**: Guest user belongs to an external organization that doesn't have Azure AD.
+
+One scenario where the user type is helpful is when the source of a claim is different for a guest and an employee accessing an application. You can specify that if the user is an employee, the NameID is sourced from user.email. If the user is a guest, then the NameID is sourced from user.extensionattribute1.
+
+To add a claim condition:
+
+1. In **Manage claim**, expand the Claim conditions.
+1. Select the user type.
+1. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application.
+1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
+
+The order in which you add the conditions is important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value that matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like restrictions.
+
+For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs to another organization that also uses Azure AD. Given the following configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform evaluates the conditions.
+
+First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because this is true, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because this is also true, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta.
++
+As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
++
+As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim falls back to `user.extensionattribute1` instead.
+
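+The following Python sketch is only a simplified mental model of this evaluation order (attribute-sourced conditions first, then transformation-sourced conditions, last match wins); the condition list is hypothetical and the sketch isn't the Microsoft identity platform implementation:
+
+```python
+# Illustrative only: a simplified mental model of claim condition evaluation.
+# Each condition is (user type, source kind, source); the values are hypothetical.
+conditions = [
+    ("AAD guests", "Attribute",      "user.mail"),
+    ("All guests", "Transformation", "user.extensionattribute1"),
+    ("AAD guests", "Transformation", "user.othermail"),
+]
+
+def claim_source(user_types_for_user):
+    selected = None
+    # Attribute-sourced conditions are evaluated before transformation-sourced
+    # conditions; within each group they run top to bottom, and the last
+    # matching condition wins.
+    for kind in ("Attribute", "Transformation"):
+        for user_type, source_kind, source in conditions:
+            if source_kind == kind and user_type in user_types_for_user:
+                selected = source
+    return selected
+
+# Britta is a guest from another Azure AD tenant, so she matches both
+# "All guests" and "AAD guests"; the last matching transformation wins.
+print(claim_source({"All guests", "AAD guests"}))  # user.othermail
+```
+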
+## Advanced claims options
+
+Advanced claims options can be configured for OIDC applications to expose the same claim in SAML tokens, and vice versa, for applications that intend to use the same claim for both SAML 2.0 and OIDC response tokens.
+
+Advanced claim options can be configured by checking the box under **Advanced Claims Options** in the **Manage claims** blade.
+
+## Next steps
+
+* [Configure single sign-on on applications that aren't in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Title: Provide optional claims to Azure AD apps
description: How to add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens issued by Microsoft identity platform. - Previously updated : 04/04/2022 Last updated : 12/28/2022 - + # Provide optional claims to your app
This section covers the configuration options under optional claims for changing
] } ```
-3) Emit group names in the format of samAccountName for on-prem synced groups and display name for cloud groups in SAML and OIDC ID Tokens for the groups assigned to the application:
+3) Emit group names in the format of samAccountName for on-premises synced groups and display name for cloud groups in SAML and OIDC ID Tokens for the groups assigned to the application:
**Application manifest entry:**
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Title: Customize app SAML token claims
+ Title: Customize SAML token claims
description: Learn how to customize the claims issued by Microsoft identity platform in the SAML token for enterprise applications.
Previously updated : 06/28/2022 Last updated : 12/19/2022 - # Customize claims issued in the SAML token for enterprise applications
-Today, the Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, the Microsoft identity platform sends a token to the application (via an HTTP POST). And then, the application validates and uses the token to log the user in instead of prompting for a username and password. These SAML tokens contain pieces of information about the user known as *claims*.
+The Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure Active Directory (Azure AD) application gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, the Microsoft identity platform sends a token to the application. And then, the application validates and uses the token to log the user in instead of prompting for a username and password.
-A *claim* is information that an identity provider states about a user inside the token they issue for that user. In [SAML token](https://en.wikipedia.org/wiki/SAML_2.0), this data is typically contained in the SAML Attribute Statement. The user's unique ID is typically represented in the SAML Subject also called as Name Identifier.
+These SAML tokens contain pieces of information about the user known as *claims*. A *claim* is information that an identity provider states about a user inside the token they issue for that user. In a [SAML token](https://en.wikipedia.org/wiki/SAML_2.0), *claims* data is typically contained in the SAML Attribute Statement. The user's unique ID is typically represented in the SAML Subject also referred to as the name identifier (nameID).
-By default, the Microsoft identity platform issues a SAML token to your application that contains a `NameIdentifier` claim with a value of the user's username (also known as the user principal name) in Azure AD, which can uniquely identify the user. The SAML token also contains other claims that include the user's email address, first name, and last name.
+By default, the Microsoft identity platform issues a SAML token to an application that contains a `NameIdentifier` claim with a value of the user's username (also known as the user principal name) in Azure AD, which can uniquely identify the user. The SAML token also contains other claims that include the user's email address, first name, and last name.
-To view or edit the claims issued in the SAML token to the application, open the application in Azure portal. Then open the **User Attributes & Claims** section.
+## View or edit claims
-![Open the User Attributes & Claims section in the Azure portal](./media/active-directory-saml-claims-customization/sso-saml-user-attributes-claims.png)
+To view or edit the claims issued in the SAML token to the application, open the application in Azure portal. Then open the **Attributes & Claims** section.
+ There are two possible reasons why you might need to edit the claims issued in the SAML token: * The application requires the `NameIdentifier` or NameID claim to be something other than the username (or user principal name) stored in Azure AD. * The application has been written to require a different set of claim URIs or claim values.
-## Editing nameID
+## Edit nameID
To edit the NameID (name identifier value): 1. Open the **Name identifier value** page. 1. Select the attribute or transformation you want to apply to the attribute. Optionally, you can specify the format you want the NameID claim to have.
- ![Edit the NameID (name identifier) value](./media/active-directory-saml-claims-customization/saml-sso-manage-user-claims.png)
+ :::image type="content" source="./media/active-directory-saml-claims-customization/saml-sso-manage-user-claims.png" alt-text="Screenshot of editing the NameID (name identifier) value in the Azure portal.":::
### NameID format
-If the SAML request contains the element NameIDPolicy with a specific format, then the Microsoft identity platform will honor the format in the request.
+If the SAML request contains the element NameIDPolicy with a specific format, then the Microsoft identity platform honors the format in the request.
-If the SAML request doesn't contain an element for NameIDPolicy, then the Microsoft identity platform will issue the NameID with the format you specify. If no format is specified, the Microsoft identity platform will use the default source format associated with the claim source selected. If a transformation results in a null or illegal value, Azure AD will send a persistent pairwise identifier in the nameIdentifier.
+If the SAML request doesn't contain an element for NameIDPolicy, then the Microsoft identity platform issues the NameID with the format you specify. If no format is specified, the Microsoft identity platform uses the default source format associated with the claim source selected. If a transformation results in a null or illegal value, Azure AD sends a persistent pairwise identifier in the nameIdentifier.
-From the **Choose name identifier format** dropdown, you can select one of the following options.
+From the **Choose name identifier format** dropdown, select one of the options in the following table.
| NameID format | Description | ||-|
-| **Default** | Microsoft identity platform will use the default source format. |
-| **Persistent** | Microsoft identity platform will use Persistent as the NameID format. |
-| **Email address** | Microsoft identity platform will use EmailAddress as the NameID format. |
-| **Unspecified** | Microsoft identity platform will use Unspecified as the NameID format. |
-|**Windows domain qualified name**| Microsoft identity platform will use the WindowsDomainQualifiedName format.|
+| **Default** | Microsoft identity platform uses the default source format. |
+| **Persistent** | Microsoft identity platform uses Persistent as the NameID format. |
+| **Email address** | Microsoft identity platform uses EmailAddress as the NameID format. |
+| **Unspecified** | Microsoft identity platform uses Unspecified as the NameID format. |
+|**Windows domain qualified name**| Microsoft identity platform uses the WindowsDomainQualifiedName format.|
Transient NameID is also supported, but isn't available in the dropdown and can't be configured on Azure's side. To learn more about the NameIDPolicy attribute, see [Single sign-On SAML protocol](single-sign-on-saml-protocol.md).
Select the desired source for the `NameIdentifier` (or NameID) claim. You can se
| employeeid | Employee ID of the user | | Directory extensions | Directory extensions [synced from on-premises Active Directory using Azure AD Connect Sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md) | | Extension Attributes 1-15 | On-premises extension attributes used to extend the Azure AD schema |
-| pairwiseid | Persistent form of user identifier |
+| pairwiseid | Persistent form of user identifier |
-For more info, see [Table 3: Valid ID values per source](reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
+For more information about identifier values, see [Table 3: Valid ID values per source](reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
-You can also assign any constant (static) value to any claims, which you define in Azure AD. The steps below outline how to assign a constant value:
+Any constant (static) value can be assigned to any claim that is defined in Azure AD. The following steps outline how to assign a constant value:
-1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, on the **User Attributes & Claims** section, click on the **Edit** icon to edit the claims.
-1. Click on the required claim which you want to modify.
+1. In the [Azure portal](https://portal.azure.com/), in the **User Attributes & Claims** section, select **Edit** to edit the claims.
+1. Select the required claim that you want to modify.
1. Enter the constant value without quotes in the **Source attribute** as per your organization and click **Save**.
- ![Org Attributes & Claims section in the Azure portal](./media/active-directory-saml-claims-customization/organization-attribute.png)
+ :::image type="content" source="./media/active-directory-saml-claims-customization/organization-attribute.png" alt-text="Screenshot of the organization Attributes & Claims section in the Azure portal.":::
-1. The constant value will be displayed as below.
+1. The constant value will be displayed as shown in the following image.
- ![Edit Attributes & Claims section in the Azure portal](./media/active-directory-saml-claims-customization/edit-attributes-claims.png)
+ :::image type="content" source="./media/active-directory-saml-claims-customization/edit-attributes-claims.png" alt-text="Screenshot of editing in the Attributes & Claims section in the Azure portal.":::
-### Special claims - transformations
+## Special claims transformations
-You can also use the claims transformations functions.
+You can use the following special claims transformations functions.
| Function | Description | |-|-|
-| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
| **ToLower()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUpper()** | Converts the characters of the selected attribute into uppercase characters. |
-## Adding application-specific claims
+## Add application-specific claims
To add application-specific claims:
To add application-specific claims:
To apply a transformation to a user attribute: 1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page.
-2. Select the function from the transformation dropdown. Depending on the function selected, you'll have to provide parameters and a constant value to evaluate in the transformation. Refer to the table below for more information about the available functions.
-3. (preview) `Treat source as multivalued` is a checkbox indicating if the transform should be applied to all values or just the first. By default, transformations will only be applied to the first element in a multi value claim, by checking this box it ensures it's applied to all. This checkbox will only be enabled for multivalued attributes, for example `user.proxyaddresses`.
-4. To apply multiple transformations, click on **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
+1. Select the function from the transformation dropdown. Depending on the function selected, you'll have to provide parameters and a constant value to evaluate in the transformation. Refer to the following table for more information about the available functions.
+1. **Treat source as multivalued** is a checkbox that indicates whether the transform should be applied to all values or just the first. By default, transformations are applied only to the first element in a multi-value claim; selecting this checkbox ensures the transform is applied to all values. The checkbox is enabled only for multi-valued attributes, for example `user.proxyaddresses`.
+1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
- ![Multiple claims transformation](./media/active-directory-saml-claims-customization/sso-saml-multiple-claims-transformation.png)
+ :::image type="content" source="./media/active-directory-saml-claims-customization/sso-saml-multiple-claims-transformation.png" alt-text="Screenshot of claims transformation.":::
You can use the following functions to transform claims. | Function | Description | |-|-|
-| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It will remove the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this will result in joe_smith@fabrikam.com. |
+| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This function extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It removes the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this input combination results in 'joe_smith@fabrikam.com'. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
-| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
-| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
-| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
+| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
+| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 |
| **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon", the matching value is "Finance_", then the claim's output is "BSimon". | | **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "BSimon_US", the matching value is "_US", then the claim's output is "BSimon". | | **Extract() - Between matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is "Finance_BSimon_US", the first matching value is "Finance\_", the second matching value is "\_US", then the claim's output is "BSimon". |
You can use the following functions to transform claims.
| **ExtractAlpha() - Suffix** | Returns the suffix alphabetical part of the string.<br/>For example, if the input's value is "123_Simon", then it returns "Simon". | | **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". | | **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "123". |
-| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is empty. To do this, you would configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
-| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is not empty. To do this, you would configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
-| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Length - 11<br/>Output: ExtractThis |
-| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
-| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br />- Parameter 1: a user attribute as regex input<br />- An option to trust the source as multivalued<br />- Regex pattern<br />- Replacement pattern. The replacement pattern may contain static text format along with reference pointing to regex output groups and additional input parameters.<br /><br/>Additional instructions on how to use RegexReplace() Transformation described below. |
-
-If you need additional transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
-
-## How to use the RegexReplace() Transformation
-
-1. Select the edit button (pencil icon) to open the claims transformation blade.
-1. Select the "RegexReplace()" option from the "Transformation" options to use regex-based claims transformation method for claims transformation.
-1. "Parameter 1" is the source user input attribute which will be an input for the regular expression transformation. For example, user.mail which will have user email address such as admin@contoso.com.
-1. Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and the user wants to use multiple values for the transformation, they need to check the "Treat source as multivalued" checkbox. If an administrator checks the checkbox, all values will be used for regex match, otherwise only the first value will be used.
-1. The "Regex pattern" textbox accepts a regular expression which will be evaluated against the value of user attribute selected as "parameter 1". For example a regular expression to extract user alias from the user's email address would be represented as: "(?'domain'^.*?)(?i)(\@contoso\.com)$"
-1. By using the "Add additional parameter" button, an administrator can choose more user attributes, which can be used for the transformation. The values of the additional attributes would then be merged with regex transformation output. Currently, up to five additional parameters are supported.
- <br />To illustrate, let's use user.country attribute as an input parameter. The value of this attribute is "US". To merge this into the replacement pattern the administrator needs to refer to it as {country} inside the replacement pattern. Once the administrator selects the user attribute for the parameter, an info balloon for the parameter will explain how the parameter can be used inside the replacement pattern.
-1. The "Replacement pattern" textbox accepts the replacement pattern. Replacement pattern is the text template, which contains placeholders for regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside the curly braces such as {group-name}. Let's say the administration wants to use user alias with some other domain name e.g. xyz.com and merge country name with it. In this case the replacement pattern would be "{country}.{domain}@xyz.com", where {country} will be the value of input parameter and {domain} will be the group output from the regular expression evaluation. In such a case, the expected outcome will be "US.swmal@xyz.com"
-
-1. RegexReplace() transformation will be evaluated only if the value of the selected user attribute for "Parameter 1" matches with the regular expression provided in "Regex pattern" textbox. If they do not match, the default claim value will be added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When additional input parameters are used, the name of the parameter will be added to the test result instead of the actual value. You can see a sample output in point 18. To access the test section an administrator can select the "Test transformation" button.
-
-1. Regex-based claims transformations are not limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation.
-
-1. If RegexReplace() is selected as a second level transformation, output of first level transformation will be used as an input for the second level transformation. The second level regex expression should match the output of the first transformation else the transformation won't be applied.
-
-1. Same as point 5 above, "Regex pattern" is the regular expression for the second level transformation.
-
-1. These are the inputs user attributes for the second level transformations.
+| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
+| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user isn't empty. To perform this function, you configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
+| **Substring() - Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Length - 11<br/>Output: ExtractThis |
+| **Substring() - EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source of the transform that should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim - PleaseExtractThisNow<br/>StartIndex - 6<br/>Output: ExtractThisNow |
+| **RegexReplace()** (Preview) | RegexReplace() transformation accepts as input parameters:<br/>- Parameter 1: a user attribute as regex input<br/>- An option to trust the source as multivalued<br/>- Regex pattern<br/>- Replacement pattern. The replacement pattern may contain static text format along with a reference that points to regex output groups and more input parameters.<br/><br/>More instructions about how to use the RegexReplace() transformation are described later in this article. |
-1. Administrators can delete the selected input parameter if they don't need it anymore.
+If you need other transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
-1. Once administrator selects "Test transformation", the "Test transformation" section will be displayed, and "Test transformation" button goes away.
+## Regex-based claims transformation
-1. Select the close or (X) button to hide the test section and re-render the "Test transformation" button again on the blade.
+The following image shows an example of the first level of transformation:
-1. The "Test regex input" textbox accepts the dummy input, which will be used as an input for regular expression test evaluation. In case regex-based claims transformation is configured as a second level transformation, the administrator needs to provided a dummy value, which would be the expected output of the first transformation.
-1. Once the administrator provides the test regex input and configures the "Regex pattern", "Replacement pattern" and "Input parameters", they can evaluate the expression by clicking on the "Run test" button.
+The following table provides information about the first level of transformations. The actions listed in the table correspond to the labels in the previous image. Select **Edit** to open the claims transformation blade.
-1. If evaluation succeeds, an output of test transformation will be rendered against the "Test transformation result" label.
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Transformation | Select the **RegexReplace()** option from the **Transformation** options to use the regex-based claims transformation method for claims transformation. |
+| 2 | Parameter 1 | The input for the regular expression transformation. For example, user.mail that has a user email address such as `admin@fabrikam.com`. |
+| 3 | Treat source as multivalued | Some input user attributes can be multi-value user attributes. If the selected user attribute supports multiple values and the user wants to use multiple values for the transformation, they need to select **Treat source as multivalued**. If selected, all values are used for the regex match, otherwise only the first value is used. |
+| 4 | Regex pattern | A regular expression that is evaluated against the value of user attribute selected as *Parameter 1*. For example a regular expression to extract the user alias from the user's email address would be represented as `(?'domain'^.*?)(?i)(\@fabrikam\.com)$`. |
+| 5 | Add additional parameter | More than one user attribute can be used for the transformation. The values of the attributes would then be merged with regex transformation output. Up to five additional parameters are supported. |
+| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for the regex outcome. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
-1. The administrator can remove the second level transformation by using "Remove transformation" button.
+The following image shows an example of the second level of transformation:
-1. When a regex input value is configured against the "Parameter 1" which doesn't matches the "Regular expression", the transformation is skipped. In such cases, the administrator can configure the alternate user attribute, which will be added to the token for the claim by checking the checkbox for "Specify output if no match".
-1. If an administrator wants to return alternate user attribute in case of no match and checked the "Specify output if no match" checkbox, they can select alternate user attribute by using the dropdown. This dropdown is available against "Parameter 3 (output if no match)".
+The following table provides information about the second level of transformations. The actions listed in the table correspond to the labels in the previous image.
-1. At the bottom of the blade a full summary of the format is displayed which explains the meaning of transformation in simple text.
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Transformation | Regex-based claims transformations aren't limited to the first transformation and can be used as the second level transformation as well. Any other transformation method can be used as the first transformation. |
+| 2 | Parameter 1 | If **RegexReplace()** is selected as a second level transformation, output of first level transformation is used as an input for the second level transformation. The second level regex expression should match the output of the first transformation or the transformation won't be applied. |
+| 3 | Regex pattern | **Regex pattern** is the regular expression for the second level transformation. |
+| 4 | Parameter input | User attribute inputs for the second level transformations. |
+| 5 | Parameter input | Administrators can delete the selected input parameter if they don't need it anymore. |
+| 6 | Replacement pattern | The replacement pattern is the text template, which contains placeholders for the regex outcome group name, input parameter group name, and static text value. All group names must be wrapped inside curly braces, such as `{group-name}`. Let's say the administrator wants to use the user alias with another domain name, for example `xyz.com`, and merge the country name with it. In this case, the replacement pattern would be `{country}.{domain}@xyz.com`, where `{country}` is the value of the input parameter and `{domain}` is the group output from the regular expression evaluation. In such a case, the expected outcome is `US.swmal@xyz.com`. |
+| 7 | Test transformation | The RegexReplace() transformation is evaluated only if the value of the selected user attribute for *Parameter 1* matches with the regular expression provided in the **Regex pattern** textbox. If they don't match, the default claim value is added to the token. To validate regular expression against the input parameter value, a test experience is available within the transform blade. This test experience operates on dummy values only. When additional input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select **Test transformation**. |
-1. Once the administrator is satisfied with the configuration settings for the transformation, they can save it to claims policy by selecting the "Add" button. Changes won't be saved unless the administrator manually selects the "Save" toolbar button available on "Manage Claim" blade.
+The following image shows an example of testing the transformations:
-RegexReplace() transformation is also available for the group claims transformations.
-### RegexReplace() Transform Validations
-Input parameters with duplicate user attributes aren't allowed. If duplicate user attributes are selected, the following validation message will be rendered after the administrator selects "Add" or "Run test" button.
+The following table provides information about testing the transformations. The actions listed in the table correspond to the labels in the previous image.
+| Action | Field | Description |
+| :-- | :- | :- |
+| 1 | Test transformation | Select the close (X) button to hide the test section and show the **Test transformation** button on the blade again. |
+| 2 | Test regex input | Accepts the input that is used for the regular expression test evaluation. If the regex-based claims transformation is configured as a second-level transformation, provide a value that would be the expected output of the first transformation. |
+| 3 | Run test | After the test regex input is provided and the **Regex pattern**, **Replacement pattern** and **Input parameters** are configured, the expression can be evaluated by selecting **Run test**. |
+| 4 | Test transformation result | If evaluation succeeds, an output of test transformation will be rendered against the **Test transformation result** label. |
+| 5 | Remove transformation | The second level transformation can be removed by selecting **Remove transformation**. |
+| 6 | Specify output if no match | When the regex input value configured for *Parameter 1* doesn't match the **Regex pattern**, the transformation is skipped. In such cases, an alternate user attribute can be configured and added to the token for the claim by checking **Specify output if no match**. |
+| 7 | Parameter 3 | If an alternate user attribute needs to be returned when there's no match and **Specify output if no match** is checked, an alternate user attribute can be selected using the dropdown. This dropdown is available against **Parameter 3 (output if no match)**. |
+| 8 | Summary | At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text. |
+| 9 | Add | After the configuration settings for the transformation are verified, it can be saved to a claims policy by selecting **Add**. Changes won't be saved unless **Save** is selected on the **Manage Claim** blade. |
-When unused input parameters found, the following message will be rendered on click of "Add" and "Run test" button click. Defined input parameters should have respective usage into the Replacement pattern text.
+RegexReplace() transformation is also available for the group claims transformations.
+### RegexReplace() transformation validations
-With test experience, if provided test regex input doesn't match with the provided regular expression then following message will be displayed. This validation needs input value hence it won't be applied when user clicks on "Add" button.
+When the following conditions occur after **Add** or **Run test** is selected, a message is displayed that provides more information about the issue:
-
-With test experience, when source for the groups into the replacement pattern not found user will receive following message. This validation won't be applied when user clicks on "Add" button.
-
+* Input parameters with duplicate user attributes aren't allowed.
+* Unused input parameters are found. Every defined input parameter should be used in the replacement pattern text.
+* The provided test regex input doesn't match the provided regular expression.
+* The source for a group in the replacement pattern isn't found.
## Add the UPN claim to SAML tokens
-The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you can't add it in the **User Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
-
-Open the app in **App registrations** and select **Token configuration** and then **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and click **Add** to get the claim in the token.
+The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you can't add it in the **Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
+Open the application in **App registrations**, select **Token configuration**, and then select **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and then select **Add** to add the claim to the token.
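
If you prefer to script this step instead of using the portal, the following minimal sketch patches the application's `optionalClaims` through Microsoft Graph using Python and the `requests` library. The application object ID and access token are placeholders you must supply; the token is assumed to carry a Graph permission that allows updating application objects (for example, `Application.ReadWrite.All`).

```python
import requests

# Assumptions: APP_OBJECT_ID is the application's *object* ID (not the app/client ID),
# and ACCESS_TOKEN is a Microsoft Graph token permitted to update application objects.
APP_OBJECT_ID = "00000000-0000-0000-0000-000000000000"
ACCESS_TOKEN = "<graph-access-token>"

# Add the upn optional claim to SAML tokens issued for this application.
body = {
    "optionalClaims": {
        "saml2Token": [
            {"name": "upn"}
        ]
    }
}

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/applications/{APP_OBJECT_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=body,
)
resp.raise_for_status()  # 204 No Content on success
```
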
-## Emitting claims based on conditions
+## Emit claims based on conditions
-You can specify the source of a claim based on user type and the group to which the user belongs.
+You can specify the source of a claim based on user type and the group to which the user belongs.
The user type can be:
-- **Any**: All users are allowed to access the application.
-- **Members**: Native member of the tenant
-- **All guests**: User is brought over from an external organization with or without Azure AD.
-- **AAD guests**: Guest user belongs to another organization using Azure AD.
-- **External guests**: Guest user belongs to an external organization that doesn't have Azure AD.
+* **Any**: All users are allowed to access the application.
+* **Members**: Native member of the tenant.
+* **All guests**: User is brought over from an external organization with or without Azure AD.
+* **AAD guests**: Guest user belongs to another organization using Azure AD.
+* **External guests**: Guest user belongs to an external organization that doesn't have Azure AD.
-One scenario where this is helpful is when the source of a claim is different for a guest and an employee accessing an application. You may want to specify that if the user is an employee the NameID is sourced from user.email, but if the user is a guest then the NameID is sourced from user.extensionattribute1.
+One scenario where the user type is helpful is when the source of a claim is different for a guest and an employee accessing an application. You can specify that if the user is an employee, the NameID is sourced from `user.email`. If the user is a guest, then the NameID is sourced from `user.extensionattribute1`.
To add a claim condition:
1. In **Manage claim**, expand the Claim conditions.
-2. Select the user type.
-3. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application.
-4. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
+1. Select the user type.
+1. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application.
+1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
+
+The order in which you add the conditions is important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value that matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like restrictions.
-The order in which you add the conditions are important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value, which matches the expression will be emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like additional restrictions.
+For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs to another organization that also uses Azure AD. Given the following configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform evaluates the conditions.
-For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs to another organization that also uses Azure AD. Given the below configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform will evaluate the conditions as follows.
+First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because this is true, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because this is also true, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta.
-First, the Microsoft identity platform verifies if Britta's user type is **All guests**. Since, this is true then the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies if Britta's user type is **AAD guests**, since this is also true then the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with value `user.mail` for Britta.
+As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta.
-As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with value `user.othermail` for Britta.
+As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim falls back to `user.extensionattribute1` instead.
-As a final example, let's consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim will fall back to `user.extensionattribute1` instead.
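
To make the evaluation order concrete, here's a minimal Python sketch that models the logic described above for the first Britta example: attribute-sourced conditions are evaluated before transformation-sourced ones, and within each group the last matching condition wins. The user object, condition list, and attribute names are illustrative assumptions, not an Azure AD API.

```python
# Illustrative model of the evaluation order only; this is not how Azure AD is implemented.
user = {
    "userType": ["All guests", "AAD guests"],   # Britta matches both guest conditions
    "user.extensionattribute1": "britta.ext@contoso.com",
    "user.mail": "britta@fabrikam.com",
}

# Conditions in the order the admin added them: (source kind, user type to match, claim source attribute).
conditions = [
    ("Attribute", "All guests", "user.extensionattribute1"),
    ("Attribute", "AAD guests", "user.mail"),
]

claim_source = None
# Attribute-sourced conditions are evaluated first, then transformation-sourced ones;
# within each group, the last condition that matches wins.
for source_kind in ("Attribute", "Transformation"):
    for kind, user_type, attribute in conditions:
        if kind == source_kind and user_type in user["userType"]:
            claim_source = attribute

print(claim_source, "->", user[claim_source])   # user.mail -> britta@fabrikam.com
```
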
+## Advanced SAML claims options
-## Advanced SAML Claims Options
-The following table lists advanced options that can be configured for an application.
+Advanced claims options can be configured for SAML 2.0 applications that intend to use the same claim for both SAML 2.0 and OIDC response tokens, so that a claim exposed in SAML tokens is also exposed in OIDC tokens, and vice versa.
+
+Advanced claim options can be configured by checking the box under **Advanced SAML Claims Options** in the **Manage claims** blade.
+
+The following table lists other advanced options that can be configured for an application.
| Option | Description |
|--|-|
-| Append application ID to issuer | Automatically adds the application ID to the issuer claim. This option ensures a unique claim value for each instance when there are multiple instances of the same application. This setting is ignored if a custom signing key isn't configured for the application. |
+| Append application ID to issuer | Automatically adds the application ID to the issuer claim. This option ensures a unique claim value for each instance when there are multiple instances of the same application. This setting is ignored if a custom signing key isn't configured for the application. |
| Override audience claim | Allows for the overriding of the audience claim sent to the application. The value provided must be a valid absolute URI. This setting is ignored if a custom signing key isn't configured for the application. |
-| Include attribute name format | If selected, Azure Active Directory adds an additional attribute called `NameFormat` that describes the format of the name to restricted, core, and optional claims for the application. For more information, see, [Claims mapping policy type](reference-claims-mapping-policy-type.md#claim-sets) |
--
+| Include attribute name format | If selected, Azure Active Directory adds an attribute called `NameFormat` that describes the format of the name to restricted, core, and optional claims for the application. For more information, see [Claims mapping policy type](reference-claims-mapping-policy-type.md#claim-sets). |
## Next steps
-* [Application management in Azure AD](../manage-apps/what-is-application-management.md)
-* [Configure single sign-on on applications that aren't in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
-* [Troubleshoot SAML-based single sign-on](../manage-apps/debug-saml-sso-issues.md)
+* [Configure single sign-on for applications that aren't in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Previously updated : 07/29/2020 Last updated : 01/06/2023 -+ # Using directory extension attributes in claims
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
Title: Authentication vs. authorization description: Learn about the basics of authentication and authorization in the Microsoft identity platform. -+
Last updated 11/02/2022-+ -+ #Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
description: Learn about the basics of authorization in the Microsoft identity p
-
Previously updated : 06/16/2022 Last updated : 01/06/2023 --+ #Customer intent: As an application developer, I want to understand the basic concepts of authorization in the Microsoft identity platform.
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
description: Learn about what custom RBAC is and why it's important to implement
-
Previously updated : 08/19/2022 Last updated : 01/06/2023 -+ #Customer intent: As a developer, I want to learn about custom RBAC and why I need to use it in my application. # Role-based access control for application developers
-Role-based access control (RBAC) allows certain users or groups to have specific permissions to access and manage resources. Application RBAC differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which is used to help manage Azure resources. Azure AD RBAC is used to manage Azure AD resources. This article explains application-specific RBAC.
+Role-based access control (RBAC) allows certain users or groups to have specific permissions to access and manage resources. Application RBAC differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which is used to help manage Azure resources. Azure AD RBAC is used to manage Azure AD resources. This article explains application-specific RBAC. For information about implementing application-specific RBAC, see [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md).
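
As a rough sketch of what application RBAC can look like on the receiving end, the following Python snippet reads the `roles` claim from a decoded access token payload. The role name is a hypothetical app role, and the sketch intentionally skips signature validation, which real code must perform before trusting any claim.

```python
import base64
import json

def decode_payload(jwt: str) -> dict:
    """Decode the (unverified!) payload segment of a JWT. Production code must
    validate the signature, issuer, audience, and expiry before trusting claims."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def caller_has_role(access_token: str, required_role: str) -> bool:
    claims = decode_payload(access_token)
    return required_role in claims.get("roles", [])

# Hypothetical usage: "Task.Write" is an app role you would define in the app registration.
# allowed = caller_has_role(incoming_token, "Task.Write")
```
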
## Roles definitions
Although either app roles or groups can be used for authorization, key differenc
## Next steps -- [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md) - [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md)
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
description: Learn where to get help and find answers to your questions as you b
- Previously updated : 03/09/2022 Last updated : 12/29/2022
If you need help with one of the Microsoft Authentication Libraries (MSAL), open
## Share your product ideas
-Have an idea for improving the for the Microsoft identity platform? Browse and vote for ideas submitted by others or submit your own:
+Have an idea for improving the Microsoft identity platform? Browse and vote for ideas submitted by others or submit your own:
https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
Title: Configure an app's publisher domain description: Learn how to configure an app's publisher domain to let users know where their information is being sent. -+ Previously updated : 11/11/2022- Last updated : 01/05/2023+ # Configure an app's publisher domain
-An app's publisher domain informs users where their information is being sent. The publisher domain also acts as an input or prerequisite for [publisher verification](publisher-verification-overview.md).
+An app's publisher domain informs users where their information is being sent. The publisher domain also acts as an input or prerequisite for [publisher verification](publisher-verification-overview.md). Depending on when the app was registered and its publisher verification status, the publisher domain is displayed directly to the user on the [application's consent prompt](application-consent-experience.md), so users know where their information is being sent and can judge the app's trustworthiness.
-In an app's [consent prompt](application-consent-experience.md), either the publisher domain or the publisher verification status appears. Which information is shown depends on whether the app is a [multitenant app](/azure/architecture/guide/multitenant/overview), when the app was registered, and the app's publisher verification status.
+In an app's consent prompt, either the publisher domain or the publisher verification status appears. Which information is shown depends on whether the app is a [multitenant app](/azure/architecture/guide/multitenant/overview), when the app was registered, and the app's publisher verification status.
+
+## Understand multitenant apps
A *multitenant app* is an app that supports user accounts that are outside a single organizational directory. For example, a multitenant app might support all Azure Active Directory (Azure AD) work or school accounts, or it might support both Azure AD work or school accounts and personal Microsoft accounts.
active-directory Howto Implement Rbac For Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-implement-rbac-for-apps.md
description: Learn how to implement role-based access control in your applicatio
-
Previously updated : 06/16/2022 Last updated : 01/06/2023 --+ #Customer intent: As an application developer, I want to learn how to implement role-based access control in my applications so I can make sure that only those users with the right access privileges can access the functionality of them.
active-directory Identity Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md
description: A list of videos about modern authentication and the Microsoft iden
- Previously updated : 08/03/2020 Last updated : 01/06/2023
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
The format of the scope value varies depending on the resource (the API) receivi
For Microsoft Graph only, the `user.read` scope maps to `https://graph.microsoft.com/User.Read`, and both scope formats can be used interchangeably.
-Certain web APIs such as the Azure Resource Manager API (https://management.core.windows.net/) expect a trailing forward slash ('/') in the audience claim (`aud`) of the access token. In this case, pass the scope as `https://management.core.windows.net//user_impersonation`, including the double forward slash ('//').
+Certain web APIs such as the Azure Resource Manager API (`https://management.core.windows.net/`) expect a trailing forward slash ('/') in the audience claim (`aud`) of the access token. In this case, pass the scope as `https://management.core.windows.net//user_impersonation`, including the double forward slash ('//').
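
For example, here's a minimal sketch with MSAL for Python that requests an Azure Resource Manager token; the client ID, authority, and interactive sign-in are placeholders, but note the double forward slash in the scope value:

```python
import msal

# Placeholder values: replace with your own app registration and tenant.
app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Note the double forward slash: the resource URI ends in '/' and the scope name follows it.
result = app.acquire_token_interactive(
    scopes=["https://management.core.windows.net//user_impersonation"]
)

if "access_token" in result:
    print("Token acquired; the audience (aud) claim ends with a trailing slash.")
else:
    print(result.get("error"), result.get("error_description"))
```
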
Other APIs might require that *no scheme or host* is included in the scope value, and expect only the app ID (a GUID) and the scope name, for example:
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
This article shows how to enable MSAL4J logging using the logback framework in a
} ```
-In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](/scenario-web-app-call-api-overview).
+In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](/azure/active-directory/develop/scenario-web-app-call-api-overview).
For instructions on how to bind to other logging frameworks, see the [SLF4J manual](http://www.slf4j.org/manual.html).
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50000 | TokenIssuanceError - There's an issue with the sign-in service. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) to resolve this issue. | | AADSTS50001 | InvalidResource - The resource is disabled or doesn't exist. Check your app's code to ensure that you have specified the exact resource URL for the resource you're trying to access. | | AADSTS50002 | NotAllowedTenant - Sign-in failed because of a restricted proxy access on the tenant. If it's your own tenant policy, you can change your restricted tenant settings to fix this issue. |
+| AADSTS500011 | InvalidResourceServicePrincipalNotFound - The resource principal named {name} was not found in the tenant named {tenant}. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant. If you expect the app to be installed, you may need to provide administrator permissions to add it. Check with the developers of the resource and application to understand what the right setup for your tenant is. |
| AADSTS500021 | Access to '{tenant}' tenant is denied. AADSTS500021 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).| | AADSTS500022 | Access to '{tenant}' tenant is denied. AADSTS500022 indicates that the tenant restriction feature is configured and that the user is trying to access a tenant that isn't in the list of allowed tenants specified in the header `Restrict-Access-To-Tenant`. For more information, see [Use tenant restrictions to manage access to SaaS cloud applications](../manage-apps/tenant-restrictions.md).| | AADSTS50003 | MissingSigningKey - Sign-in failed because of a missing signing key or certificate. This might be because there was no signing key configured in the app. To learn more, see the troubleshooting article for error [AADSTS50003](/troubleshoot/azure/active-directory/error-code-aadsts50003-cert-or-key-not-configured). If you still see issues, contact the app owner or an app admin. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. | | AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or doesn't match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).| | AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate isn't authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain isn't valid</li><li>The signing certificate isn't valid</li><li>Policy isn't configured on the tenant</li><li>Thumbprint of the signing certificate isn't authorized</li><li>Client assertion contains an invalid signature</li></ul> |
-| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion isn't a primary refresh token. |
-| AADSTS50014 | GuestUserInPendingState - The user's redemption is in a pending state. The guest user account isn't fully created yet. |
+| AADSTS50013 | InvalidAssertion - Assertion is invalid because of various reasons - The token issuer doesn't match the api version within its valid time range -expired -malformed - Refresh token in the assertion isn't a primary refresh token. Contact the app developer. |
+| AADSTS50014 | GuestUserInPendingState - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](/azure/active-directory/b2b/add-users-administrator). |
| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. | | AADSTS50017 | CertificateValidationFailed - Certification validation failed, reasons for the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Cannot find issuing certificate in trusted certificates list</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. |
-| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. |
+| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. User account '{email}' from identity provider '{idp}' does not exist in tenant '{tenant}' and cannot access the application '{appid}'({appName}) in that tenant. This account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account. If this user should be a member of the tenant, they should be invited via the [B2B system](/azure/active-directory/b2b/add-users-administrator). For additional information, visit [AADSTS50020](/troubleshoot/azure/active-directory/error-code-aadsts50020-user-account-identity-provider-does-not-exist). |
| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that doesn't allow access to the resource tenant. | | AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy doesn't allow this user to access this tenant. | | AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range </li><li>token format isn't proper</li><li>External ID token from issuer failed signature verification.</li></ul> | | AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. | | AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. | | AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
-| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. |
+| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](/azure/active-directory/external-identities/add-users-administrator). |
| AADSTS50042 | UnableToGeneratePairwiseIdentifierWithMissingSalt - The salt required to generate a pairwise identifier is missing in principle. Contact the tenant admin. | | AADSTS50043 | UnableToGeneratePairwiseIdentifierWithMultipleSalts | | AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50071 | SignoutMessageExpired - The logout request has expired. | | AADSTS50072 | UserStrongAuthEnrollmentRequiredInterrupt - User needs to enroll for second factor authentication (interactive). | | AADSTS50074 | UserStrongAuthClientAuthNRequiredInterrupt - Strong authentication is required and the user did not pass the MFA challenge. |
-| AADSTS50076 | UserStrongAuthClientAuthNRequired - Due to a configuration change made by the admin, or because you moved to a new location, the user must use multi-factor authentication to access the resource. Retry with a new authorize request for the resource. |
+| AADSTS50076 | UserStrongAuthClientAuthNRequired - Due to a configuration change made by the admin such as a Conditional Access policy, per-user enforcement, or because you moved to a new location, the user must use multi-factor authentication to access the resource. Retry with a new authorize request for the resource. |
| AADSTS50078 | UserStrongAuthExpired- Presented multi-factor authentication has expired due to policies configured by your administrator, you must refresh your multi-factor authentication to access '{resource}'.|
-| AADSTS50079 | UserStrongAuthEnrollmentRequired - Due to a configuration change made by the administrator, or because the user moved to a new location, the user is required to use multi-factor authentication. |
+| AADSTS50079 | UserStrongAuthEnrollmentRequired - Due to a configuration change made by the admin such as a Conditional Access policy, per-user enforcement, or because the user moved to a new location, the user is required to use multi-factor authentication. Either a managed user needs to register security info to complete multi-factor authentication, or a federated user needs to get the multi-factor claim from the federated identity provider. |
| AADSTS50085 | Refresh token needs social IDP login. Have user try signing-in again with username -password | | AADSTS50086 | SasNonRetryableError | | AADSTS50087 | SasRetryableError - A transient error has occurred during strong authentication. Please try again. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50124 | ClaimsTransformationInvalidInputParameter - Claims Transformation contains invalid input parameter. Contact the tenant admin to update the policy. | | AADSTS501241 | Mandatory Input '{paramName}' missing from transformation ID '{transformId}'. This error is returned while Azure AD is trying to build a SAML response to the application. NameID claim or NameIdentifier is mandatory in SAML response and if Azure AD failed to get source attribute for NameID claim, it will return this error. As a resolution, ensure you add claim rules in *Azure portal* > *Azure Active Directory* > *Enterprise Applications* > *Select your application* > *Single Sign-On* > *User Attributes & Claims* > *Unique User Identifier (Name ID)*. | | AADSTS50125 | PasswordResetRegistrationRequiredInterrupt - Sign-in was interrupted because of a password reset or password registration entry. |
-| AADSTS50126 | InvalidUserNameOrPassword - Error validating credentials due to invalid username or password. |
+| AADSTS50126 | InvalidUserNameOrPassword - Error validating credentials due to invalid username or password. The user didn't enter the right credentials. It's expected to see some number of these errors in your logs due to users making mistakes. |
| AADSTS50127 | BrokerAppNotInstalled - User needs to install a broker app to gain access to this content. | | AADSTS50128 | Invalid domain name - No tenant-identifying information found in either the request or implied by any provided credentials. | | AADSTS50129 | DeviceIsNotWorkplaceJoined - Workplace join is required to register the device. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS50140 | KmsiInterrupt - This error occurred due to "Keep me signed in" interrupt when the user was signing-in. This is an expected part of the login flow, where a user is asked if they want to remain signed into their current browser to make further logins easier. For more information, see [The new Azure AD sign-in and “Keep me signed in” experiences rolling out now!](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/the-new-azure-ad-sign-in-and-keep-me-signed-in-experiences/m-p/128267). You can [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details.| | AADSTS50143 | Session mismatch - Session is invalid because user tenant doesn't match the domain hint due to different resource. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Correlation ID, Request ID, and Error code to get more details. | | AADSTS50144 | InvalidPasswordExpiredOnPremPassword - User's Active Directory password has expired. Generate a new password for the user or have the user use the self-service reset tool to reset their password. |
-| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. |
+| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. Please contact the owner of the application. |
| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. | | AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.| | AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.|
The `error` field has several possible values - review the protocol documentatio
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. | | AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. | | AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. |
-| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. |
+| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](/azure/active-directory/b2b/add-users-administrator). |
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. | | AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. | | AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
-| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. |
+| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. | | AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
-| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. |
+| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. | | AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. | | AADSTS53011 | User blocked due to risk on home tenant. |
+| AADSTS530034 | DelegatedAdminBlockedDueToSuspiciousActivity - A delegated administrator was blocked from accessing the tenant due to account risk in their home tenant. |
| AADSTS54000 | MinorUserBlockedLegalAgeGroupRule | | AADSTS54005 | OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token. | | AADSTS65001 | DelegationDoesNotExist - The user or administrator has not consented to use the application with ID X. Send an interactive authorization request for this user and resource. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS67003 | ActorNotValidServiceIdentity | | AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> | | AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
+| AADSTS700011 | UnauthorizedClientAppNotFoundInOrgIdTenant - Application with identifier {appIdentifier} was not found in the directory. A client application requested a token from your tenant, but the client app doesn't exist in your tenant, so the call failed. |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). | | AADSTS700025 | InvalidClientPublicClientWithCredential - Client is public so neither 'client_assertion' nor 'client_secret' should be presented. | | AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS700054 | Response_type 'id_token' isn't enabled for the application. The application requested an ID token from the authorization endpoint, but did not have ID token implicit grant enabled. Go to Azure portal > Azure Active Directory > App registrations > Select your application > Authentication > Under 'Implicit grant and hybrid flows', make sure 'ID tokens' is selected.| | AADSTS70007 | UnsupportedResponseMode - The app returned an unsupported value of `response_mode` when requesting a token. | | AADSTS70008 | ExpiredOrRevokedGrant - The refresh token has expired due to inactivity. The token was issued on XXX and was inactive for a certain amount of time. |
+| AADSTS700082 | ExpiredOrRevokedGrantInactiveToken - The refresh token has expired due to inactivity. The token was issued on {issueDate} and was inactive for {time}. Expected part of the token lifecycle - the user went an extended period of time without using the application, so the token was expired when the app attempted to refresh it. |
| AADSTS700084 | The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of {time}, which can't be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on {issueDate}.| | AADSTS70011 | InvalidScope - The scope requested by the app is invalid. | | AADSTS70012 | MsaServerError - A server error occurred while authenticating an MSA (consumer) user. Try again. If it continues to fail, [open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) |
The `error` field has several possible values - review the protocol documentatio
| AADSTS80010 | OnPremisePasswordValidationEncryptionException - The Authentication Agent is unable to decrypt password. | | AADSTS80012 | OnPremisePasswordValidationAccountLogonInvalidHours - The users attempted to log on outside of the allowed hours (this is specified in AD). | | AADSTS80013 | OnPremisePasswordValidationTimeSkew - The authentication attempt could not be completed due to time skew between the machine running the authentication agent and AD. Fix time sync issues. |
+| AADSTS80014 | OnPremisePasswordValidationAuthenticationAgentTimeout - Validation request responded after maximum elapsed time exceeded. Open a support ticket with the error code, correlation ID, and timestamp to get more details on this error. |
| AADSTS81004 | DesktopSsoIdentityInTicketIsNotAuthenticated - Kerberos authentication attempt failed. | | AADSTS81005 | DesktopSsoAuthenticationPackageNotSupported - The authentication package isn't supported. | | AADSTS81006 | DesktopSsoNoAuthorizationHeader - No authorization header was found. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS81010 | DesktopSsoAuthTokenInvalid - Seamless SSO failed because the user's Kerberos ticket has expired or is invalid. | | AADSTS81011 | DesktopSsoLookupUserBySidFailed - Unable to find user object based on information in the user's Kerberos ticket. | | AADSTS81012 | DesktopSsoMismatchBetweenTokenUpnAndChosenUpn - The user trying to sign in to Azure AD is different from the user signed into the device. |
-| AADSTS90002 | InvalidTenantName - The tenant name wasn't found in the data store. Check to make sure you have the correct tenant ID. |
+| AADSTS90002 | InvalidTenantName - The tenant name wasn't found in the data store. Check to make sure you have the correct tenant ID. The application developer will receive this error if their app attempts to sign into a tenant that we cannot find. Often, this is because a cross-cloud app was used against the wrong cloud, or the developer attempted to sign in to a tenant derived from an email address, but the domain isn't registered. |
| AADSTS90004 | InvalidRequestFormat - The request isn't properly formatted. | | AADSTS90005 | InvalidRequestWithMultipleRequirements - Unable to complete the request. The request isn't valid because the identifier and login hint can't be used together. | | AADSTS90006 | ExternalServerRetryableError - The service is temporarily unavailable.|
The `error` field has several possible values - review the protocol documentatio
| AADSTS90051 | InvalidNationalCloudId - The national cloud identifier contains an invalid cloud identifier. | | AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. | | AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
-| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. |
+| AADSTS900561 | BadResourceRequestInvalidRequest - The endpoint only accepts {valid_verbs} requests. Received a {invalid_verb} request. {valid_verbs} represents a list of HTTP verbs supported by the endpoint (for example, POST), {invalid_verb} is an HTTP verb used in the current request (for example, GET). This can be due to developer error, or due to users pressing the back button in their browser, triggering a bad request. It can be ignored. |
+| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](/azure/active-directory/external-identities/external-identities-overview). |
| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. | | AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. | | AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
Previously updated : 06/28/2022 Last updated : 01/06/2023 -
-# Configure SAML app multi-instancing for an application in Azure Active Directory   
-App multi-instancing refers to the need for the configuration of multiple instances of the same application within a tenant.  For example, the organization has multiple Amazon Web Services accounts, each of which needs a separate service principal to handle instance-specific claims mapping (adding the AccountID claim for that AWS tenant) and roles assignment.  Or the customer has multiple instances of Box, which doesn’t need special claims mapping, but does need separate service principals for separate signing keys. 
+# Configure SAML app multi-instancing for an application in Azure Active Directory
-## IDP versus SP initiated SSO    
-A user can sign-in to an application one of two ways, either through the application directly, which is known as service provider (SP) initiated single sign-on (SSO), or by going directly to the identity provider (IDP), known as IDP initiated SSO. Depending on which approach is used within your organization you'll need to follow the appropriate instructions below.
+App multi-instancing refers to the need for the configuration of multiple instances of the same application within a tenant. For example, the organization has multiple Amazon Web Services accounts, each of which needs a separate service principal to handle instance-specific claims mapping (adding the AccountID claim for that AWS tenant) and roles assignment. Or the customer has multiple instances of Box, which doesn't need special claims mapping, but does need separate service principals for separate signing keys.
-## SP Initiated  
-In the SAML request of SP initiated SSO, the Issuer specified is usually the App ID Uri. Utilizing App ID Uri doesn’t allow the customer to distinguish which instance of an application is being targeted when using SP initiated SSO.  
+## IDP versus SP initiated SSO
-## SP Initiated Configuration InstructionsΓÇ»
-Update the SAML single sign-on service URL configured within the service provider for each instance to include the service principal guid as part of the URL. For example, the general SSO sign-in URL for SAML would have been `https://login.microsoftonline.com/<tenantid>/saml2`, the URL can now be updated to target a specific service principal as follows `https://login.microsoftonline.com/<tenantid>/saml2/<issuer>`.
+A user can sign-in to an application one of two ways, either through the application directly, which is known as service provider (SP) initiated single sign-on (SSO), or by going directly to the identity provider (IDP), known as IDP initiated SSO. Depending on which approach is used within your organization you'll need to follow the appropriate instructions below.
-Only service principal identifiers in GUID format are accepted for the 'issuer' value. The service principal identifiers override the issuer in the SAML request and response, and the rest of the flow is completed as usual. There's one exception: if the application requires the request to be signed, the request is rejected even if the signature was valid. The rejection is done to avoid any security risks with functionally overriding values in a signed request.
+## SP Initiated
-## IDP Initiated  
-The IDP initiated feature exposes two settings for each application.  
+In the SAML request of SP initiated SSO, the Issuer specified is usually the App ID Uri. Utilizing App ID Uri doesn't allow the customer to distinguish which instance of an application is being targeted when using SP initiated SSO.
-- An “audience override” option exposed for configuration by using claims mapping or the portal.  The intended use case is applications that require the same audience for multiple instances. This setting is ignored if no custom signing key is configured for the application.   
+## SP Initiated Configuration Instructions
-- An "issuer with application id" flag to indicate the issuer should be unique for each application instead of unique for each tenant. This setting is ignored if no custom signing key is configured for the application.
+Update the SAML single sign-on service URL configured within the service provider for each instance to include the service principal GUID as part of the URL. For example, if the general SSO sign-in URL for SAML was `https://login.microsoftonline.com/<tenantid>/saml2`, the URL can be updated to target a specific service principal as follows: `https://login.microsoftonline.com/<tenantid>/saml2/<issuer>`.
-## IDP Initiated Configuration Instructions
-1. Open any SSO enabled enterprise app and navigate to the SAML single sign on blade.  
-1. Select the 'Edit' button on the 'User Attributes & Claims' panel.
+Only service principal identifiers in GUID format are accepted for the `issuer` value. The service principal identifiers override the issuer in the SAML request and response, and the rest of the flow is completed as usual. There's one exception: if the application requires the request to be signed, the request is rejected even if the signature was valid. The rejection is done to avoid any security risks with functionally overriding values in a signed request.
+
+## IDP Initiated
+
+The IDP initiated feature exposes two settings for each application.
+
+- An **audience override** option exposed for configuration by using claims mapping or the portal. The intended use case is applications that require the same audience for multiple instances. This setting is ignored if no custom signing key is configured for the application.
+
+- An **issuer with application id** flag to indicate the issuer should be unique for each application instead of unique for each tenant. This setting is ignored if no custom signing key is configured for the application.
+
+## IDP Initiated Configuration Instructions
+
+1. Open any SSO enabled enterprise app and navigate to the SAML single sign on blade.
+1. Select **Edit** on the **User Attributes & Claims** panel.
![Edit Configuration](./media/reference-app-multi-instancing/userattributesclaimsedit.png) 1. Open the advanced options blade. ![Open Advanced Options](./media/reference-app-multi-instancing/advancedoptionsblade.png)
-1. Configure both options according to your preferences and hit save.
+1. Configure both options according to your preferences and then select **Save**.
![Configure Options](./media/reference-app-multi-instancing/advancedclaimsoptions.png) -- ## Next steps - To explore the claims mapping policy in graph see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0&preserve-view=true)
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 09/16/2022 Last updated : 01/06/2023 -+ # Claims mapping policy type
active-directory Secure Group Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-group-access-control.md
description: Learn about how groups are used to securely control access to resou
- Previously updated : 6/16/2022 Last updated : 01/06/2023 --+ # Customer intent: As a developer, I want to learn how to most securely use Azure AD groups to control access to resources.
Azure Active Directory (Azure AD) allows the use of groups to manage access to r
To learn more about the benefits of groups for access control, see [manage access to an application](../manage-apps/what-is-access-management.md).
-While developing an application, authorize access with the [groups claim](/graph/api/resources/application?view=graph-rest-1.0#properties&preserve-view=true). To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+While developing an application, authorize access with the groups claim. To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
Today, many applications select a subset of groups with the `securityEnabled` flag set to `true` to avoid scale challenges, that is, to reduce the number of groups returned in the token. Setting the `securityEnabled` flag to be true for a group doesn't guarantee that the group is securely managed.
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-least-privileged-access.md
description: Learn how the principle of least privilege can help increase the se
- Previously updated : 06/16/2022 Last updated : 01/06/2023 --+ # Customer intent: As a developer, I want to learn about the principle of least privilege and the features of the Microsoft identity platform that I can use to make sure my application and its users are restricted to actions and have access to only the data they need perform their tasks.
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
description: Learn about the best practices and general guidance for security re
- Previously updated : 06/17/2022 Last updated : 01/06/2023 -+ # Security best practices for application properties in Azure Active Directory
Scenarios that required **implicit flow** can now use **Auth code flow** to redu
Consider the following guidance related to implicit flow: -- Understand if [implicit flow is required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant). Don't use implicit flow unless [explicitly required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant).
+- Understand if [implicit flow is required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant). Don't use implicit flow unless explicitly required.
- If the application was configured to receive access tokens using implicit flow, but doesn't actively use them, turn off the setting to protect from misuse. - Use separate applications for valid implicit flow scenarios.
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
description: Learn about the basics of security tokens in the Microsoft identity
- Previously updated : 11/1/2022 Last updated : 01/06/2023 - + #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.
active-directory Tutorial Blazor Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md
Finally, because the app calls a protected API (in this case Microsoft Graph), i
Run the following command to download the templates for `Microsoft.Identity.Web`, which we'll make use of in this tutorial. ```dotnetcli
-dotnet new install Microsoft.Identity.Web.ProjectTemplates
+dotnet new --install Microsoft.Identity.Web.ProjectTemplates
``` Then, run the following command to create the application. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow
description: Protocol reference for the Microsoft identity platform's implementation of the OAuth 2.0 authorization code grant - Previously updated : 07/29/2022 Last updated : 01/05/2023
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 12/01/2022 Last updated : 01/05/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## December 2022
+
+### New articles
+
+- [Block workload identity federation on managed identities using a policy](workload-identity-federation-block-using-azure-policy.md)
+- [Troubleshooting the configured permissions limits](troubleshoot-required-resource-access-limits.md)
+
+### Updated articles
+
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
+- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
+- [Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app](tutorial-blazor-webassembly.md)
+- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
+- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
+- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
+- [Microsoft identity platform docs: What's new](whats-new-docs.md)
+- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)
## November 2022 ### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md) - [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application](tutorial-v2-javascript-spa.md) - [Tutorial: Sign in users and call the Microsoft Graph API from a React single-page app (SPA) using auth code flow](tutorial-v2-react.md)-
-## September 2022
-
-### New articles
--- [Configure a user-assigned managed identity to trust an external identity provider (preview)](workload-identity-federation-create-trust-user-assigned-managed-identity.md)-- [Important considerations and restrictions for federated identity credentials](workload-identity-federation-considerations.md)-
-### Updated articles
--- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)-- [Run automated integration tests](test-automate-integration-testing.md)-- [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application (SPA)](tutorial-v2-javascript-spa.md)
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
Title: Access Azure resources from Google Cloud without credentials description: Access Azure AD protected resources from a service running in Google Cloud without using secrets or certificates. Use workload identity federation to set up a trust relationship between an app in Azure AD and an identity in Google Cloud. The workload running in Google Cloud can get an access token from Microsoft identity platform and access Azure AD protected resources. -+ Previously updated : 08/07/2022- Last updated : 01/06/2023+ #Customer intent: As an application developer, I want to create a trust relationship with a Google Cloud identity so my service in Google Cloud can access Azure AD protected resources without managing secrets.
class ClientAssertionCredential implements TokenCredential {
// Get the ID token from Google. return getGoogleIDToken() // calling this directly just for clarity,
- // this should be a callback
- // pass this as a client assertion to the confidential client app
- .then((clientAssertion:any)=> {
- var msalApp: any;
- msalApp = new msal.ConfidentialClientApplication({
- auth: {
- clientId: this.clientID,
- authority: this.aadAuthority + this.tenantID,
- clientAssertion: clientAssertion,
- }
+
+ let aadAudience = "api://AzureADTokenExchange"
+ const jwt = axios({
+ url: "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="
+ + aadAudience,
+ method: "GET",
+ headers: {
+ "Metadata-Flavor": "Google"
+ }}).then(response => {
+ console.log("AXIOS RESPONSE");
+ return response.data;
+ });
+ return jwt;
+ .then(function(aadToken) {
+ // return in form expected by TokenCredential.getToken
+ let returnToken = {
+ token: aadToken.accessToken,
+ expiresOnTimestamp: aadToken.expiresOn.getTime(),
+ };
+ return (returnToken);
+ })
+ .catch(function(error) {
+ // error stuff
});
- return msalApp.acquireTokenByClientCredential({ scopes })
- })
- .then(function(aadToken) {
- // return in form expected by TokenCredential.getToken
- let returnToken = {
- token: aadToken.accessToken,
- expiresOnTimestamp: aadToken.expiresOn.getTime(),
- };
- return (returnToken);
- })
- .catch(function(error) {
- // error stuff
- });
+ }
}
-}
export default ClientAssertionCredential; ```
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
The workflow for exchanging an external token for an access token is the same, h
1. When the checks are satisfied, Microsoft identity platform issues an access token to the external workload. 1. The external workload accesses Azure AD protected resources using the access token from Microsoft identity platform. A GitHub Actions workflow, for example, uses the access token to publish a web app to Azure App Service.
-The Microsoft identity platform stores only the first 25 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 25 signing keys, you may experience errors when using Workload Identity Federation.
+The Microsoft identity platform stores only the first 100 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 100 signing keys, you may experience errors when using Workload Identity Federation.
## Next steps Learn more about how workload identity federation works:
Learn more about how workload identity federation works:
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration. - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity. - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Zero Trust For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/zero-trust-for-developers.md
description: Learn how using Zero Trust principles can help increase the securit
- Previously updated : 06/16/2022 Last updated : 01/06/2023 -+ # Customer intent: As a developer, I want to learn about the Zero Trust principles and the features of the Microsoft identity platform that I can use to build applications that are Zero Trust-ready.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 06/16/2022 Last updated : 01/05/2023
There are many security benefits of using Azure AD-based authentication to log i
- When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. - When employees leave your organization and their user accounts are disabled or removed from Azure AD, they no longer have access to your resources. - Configure Conditional Access policies to require multifactor authentication (MFA) and other signals, such as user sign-in risk, before you can RDP into Windows VMs. -- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag the use of unapproved local accounts on the VMs.
+- Use Azure Policy to deploy and audit policies to require Azure AD login for Windows VMs and to flag the use of unapproved local accounts on the VMs.
- Use Intune to automate and scale Azure AD join with mobile device management (MDM) auto-enrollment of Azure Windows VMs that are part of your virtual desktop infrastructure (VDI) deployments. MDM auto-enrollment requires Azure AD Premium P1 licenses. Windows Server VMs don't support MDM enrollment.
Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequir
If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can configure a Conditional Access policy that excludes the Azure Windows VM Sign-In app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification). > [!NOTE]
-> Windows Hello for Business PIN authentication with RDP has been supported for several versions of Windows 10. Support for biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
+> Windows Hello for Business PIN authentication with RDP has been supported for several versions of Windows 10. Support for biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is available for deployments that use a certificate trust model or key trust model.
Share your feedback about this feature or report problems with using it on the [Azure AD feedback forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 08/01/2022 Last updated : 01/09/2023
# Set up self-service group management in Azure Active Directory
-You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra. The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists.
+You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra. The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for [mail-enabled security groups or distribution lists](../fundamentals/concept-learn-about-groups.md).
-## Self-service group membership defaults
+## Self-service group membership
-When security groups are created in the Azure portal or using Azure AD PowerShell, only the group's owners can update membership. Security groups created by self-service in the [Access panel](https://account.activedirectory.windowsazure.com/r#/joinGroups) and all Microsoft 365 groups are available to join for all users, whether owner-approved or auto-approved. In the Access panel, you can change membership options when you create the group.
+You can allow users to create security groups, which are used to manage access to shared resources. Users can create security groups in the Azure portal, by using Azure AD PowerShell, or from the [MyApps Groups Access panel](https://account.activedirectory.windowsazure.com/r#/groups). Only the group's owners can update membership, but you can give group owners the ability to approve or deny membership requests from the MyApps Groups Access panel. Security groups created by self-service through the MyApps Groups Access panel are available to join for all users, whether owner-approved or auto-approved. In the MyApps Groups Access panel, you can change membership options when you create the group.
+
+Microsoft 365 groups, which provide collaboration opportunities for your users, can be created in any of the Microsoft 365 applications, such as SharePoint, Microsoft Teams, and Planner. Microsoft 365 groups can also be created in the Azure portal, by using Azure AD PowerShell, or from the MyApps Groups Access panel. For more information on the difference between security groups and Microsoft 365 groups, see [Learn about groups](../fundamentals/concept-learn-about-groups.md#what-to-know-before-creating-a-group).
Groups created in | Security group default behavior | Microsoft 365 group default behavior | - |
-[Azure AD PowerShell](../enterprise-users/groups-settings-cmdlets.md) | Only owners can add members<br>Visible but not available to join in Access panel | Open to join for all users
-[Azure portal](https://portal.azure.com) | Only owners can add members<br>Visible but not available to join in Access panel<br>Owner is not assigned automatically at group creation | Open to join for all users
-[Access panel](https://account.activedirectory.windowsazure.com/r#/joinGroups) | Open to join for all users<br>Membership options can be changed when the group is created | Open to join for all users<br>Membership options can be changed when the group is created
+[Azure AD PowerShell](../enterprise-users/groups-settings-cmdlets.md) | Only owners can add members<br>Visible but not available to join in MyApps Groups Access panel | Open to join for all users
+[Azure portal](https://portal.azure.com) | Only owners can add members<br>Visible but not available to join in MyApps Groups Access panel<br>Owner is not assigned automatically at group creation | Open to join for all users
+[MyApps Groups Access panel](https://account.activedirectory.windowsazure.com/r#/joinGroups) | Open to join for all users<br>Membership options can be changed when the group is created | Open to join for all users<br>Membership options can be changed when the group is created
## Self-service group management scenarios * **Delegated group management**
- An example is an administrator who is managing access to a SaaS application that the company is using. Managing these access rights is becoming cumbersome, so this administrator asks the business owner to create a new group. The administrator assigns access for the application to the new group, and adds to the group all people already accessing the application. The business owner then can add more users, and those users are automatically provisioned to the application. The business owner doesn't need to wait for the administrator to manage access for users. If the administrator grants the same permission to a manager in a different business group, then that person can also manage access for their own group members. Neither the business owner nor the manager can view or manage each other's group memberships. The administrator can still see all users who have access to the application and block access rights if needed.
+ An example is an administrator who is managing access to a Software as a Service (SaaS) application that the company is using. Managing these access rights is becoming cumbersome, so this administrator asks the business owner to create a new group. The administrator assigns access for the application to the new group, and adds to the group all people already accessing the application. The business owner then can add more users, and those users are automatically provisioned to the application. The business owner doesn't need to wait for the administrator to manage access for users. If the administrator grants the same permission to a manager in a different business group, that person can also manage access for their own group members. Neither the business owner nor the manager can view or manage each other's group memberships. The administrator can still see all users who have access to the application and block access rights if needed.
* **Self-service group management**
- An example of this scenario is two users who both have SharePoint Online sites that they set up independently. They want to give each other's teams access to their sites. To accomplish this, they can create one group in Azure AD, and in SharePoint Online each of them selects that group to provide access to their sites. When someone wants access, they request it from the Access Panel, and after approval they get access to both SharePoint Online sites automatically. Later, one of them decides that all people accessing the site should also get access to a particular SaaS application. The administrator of the SaaS application can add access rights for the application to the SharePoint Online site. From then on, any requests that get approved gives access to the two SharePoint Online sites and also to this SaaS application.
+ An example of this scenario is two users who both have SharePoint Online sites that they set up independently. They want to give each other's teams access to their sites. To accomplish this, they can create one group in Azure AD, and in SharePoint Online each of them selects that group to provide access to their sites. When someone wants access, they request it from the MyApps Groups Access panel, and after approval they get access to both SharePoint Online sites automatically. Later, one of them decides that all people accessing the site should also get access to a particular SaaS application. The administrator of the SaaS application can add access rights for the application to the SharePoint Online site. From then on, any requests that get approved give access to the two SharePoint Online sites and also to this SaaS application.
## Make a group available for user self-service
You can also use **Owners who can assign members as group owners in the Azure po
When users can create groups, all users in your organization are allowed to create new groups and then can, as the default owner, add members to these groups. You can't specify individuals who can create their own groups. You can specify individuals only for making another group member a group owner. > [!NOTE]
-> An Azure Active Directory Premium (P1 or P2) license is required for users to request to join a security group or Microsoft 365 group and for owners to approve or deny membership requests. Without an Azure Active Directory Premium license, users can still manage their groups in the Access Panel, but they can't create a group that requires owner approval in the Access Panel, and they can't request to join a group.
+> An Azure Active Directory Premium (P1 or P2) license is required for users to request to join a security group or Microsoft 365 group and for owners to approve or deny membership requests. Without an Azure Active Directory Premium license, users can still manage their groups in the MyApps Groups Access panel, but they can't create a group that requires owner approval and they can't request to join a group.
## Group settings
-The group settings enable to control who can create security and Microsoft 365 groups.
+The group settings enable you to control who can create security and Microsoft 365 groups.
![Azure Active Directory security groups setting change.](./media/groups-self-service-management/security-groups-setting.png)
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
Previously updated : 06/24/2022 Last updated : 01/09/2023
Use the following information and examples to gain a more advanced understanding
## Usage location
-Some Microsoft services are not available in all locations. Before a license can be assigned to a user, the administrator should specify the **Usage location** property on the user. In [the Azure portal](https://portal.azure.com), you can specify usage location in **User** &gt; **Profile** &gt; **Settings**.
+Some Microsoft services aren't available in all locations. For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, make sure to reflect that correctly in your user resources before adding users to groups with licenses. Before a license can be assigned to a user, the administrator should specify the **Usage location** property on the user.
-For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, make sure to reflect that correctly in your user resources before adding users to groups with licenses.
+1. Sign in to the [Azure portal](https://portal.azure.com) in the **User Administrator** role.
+1. Go to **Azure AD** > **Users** and select a user.
+1. Select **Edit properties**.
+1. Select the **Settings** tab and enter a location for the user.
+1. Select the **Save** button.
> [!NOTE]
-> Group license assignment will never modify an existing usage location value on a user. We recommend that you always set usage location as part of your user creation flow in Azure AD (for example, via AAD Connect configuration) - that will ensure the result of license assignment is always correct, and users do not receive services in locations that are not allowed.
+> Group license assignment will never modify an existing usage location value on a user. We recommend that you always set usage location as part of your user creation flow in Azure AD (for example, via [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) configuration). Following such a process ensures the result of license assignment is always correct, and users do not receive services in locations that are not allowed.
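If you prefer to script this step, usage location can also be set with the MSOnline module's `Set-MsolUser` cmdlet before users are added to licensed groups. This is a minimal sketch only; the user principal names and the `US` location code are placeholders, not values taken from the article.

```powershell
# Minimal sketch: set a usage location on users before adding them to a licensed group.
# Assumes the MSOnline module is installed; Connect-MsolService prompts for credentials.
Connect-MsolService

$users = @("adele@contoso.com", "alex@contoso.com")   # placeholder UPNs

foreach ($upn in $users) {
    # UsageLocation is a two-letter ISO country/region code, for example "US".
    Set-MsolUser -UserPrincipalName $upn -UsageLocation "US"
}
```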
## Use group-based licensing with dynamic groups
-You can use group-based licensing with any security group, which means it can be combined with Azure AD dynamic groups. Dynamic groups run rules against user resource attributes to automatically add and remove users from groups.
+You can use group-based licensing with any security group, including dynamic groups. Dynamic groups run rules against user resource attributes to automatically add and remove members. Attributes can be department, job title, work location, or other custom attribute. Each group is assigned the licenses that you want members to receive. If an attribute changes, the member leaves the group, and the licenses are removed.
-For example, you can create a dynamic group for some set of products you want to assign to users. Each group is populated by a rule adding users by their attributes, and each group is assigned the licenses that you want them to receive. You can assign the attribute on-premises and sync it with Azure AD, or you can manage the attribute directly in the cloud.
-
-Licenses are assigned to the user shortly after they are added to the group. When the attribute is changed, the user leaves the groups and the licenses are removed.
-
-### Example
-
-Consider the example of an on-premises identity management solution that decides which users should have access to Microsoft web services. It uses **extensionAttribute1** to store a string value representing the licenses the user should have. Azure AD Connect syncs it with Azure AD.
-
-Users might need one license but not another, or might need both. Here's an example, in which you are distributing Office 365 Enterprise E5 and Enterprise Mobility + Security (EMS) licenses to users in groups:
-
-#### Office 365 Enterprise E5: base services
-
-![Screenshot of Office 365 Enterprise E5 base services](./media/licensing-group-advanced/o365-e5-base-services.png)
-
-#### Enterprise Mobility + Security: licensed users
-
-![Screenshot of Enterprise Mobility + Security licensed users](./media/licensing-group-advanced/o365-e5-licensed-users.png)
-
-For this example, modify one user and set their extensionAttribute1 to the value of `EMS;E5_baseservices;` if you want the user to have both licenses. You can make this modification on-premises. After the change syncs with the cloud, the user is automatically added to both groups, and licenses are assigned.
-
-![Screenshot showing how to set the user's extensionAttribute1](./media/licensing-group-advanced/user-set-extensionAttribute1.png)
+You can assign the attribute on-premises and sync it with Azure AD, or you can manage the attribute directly in the cloud.
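As an illustration of the cloud-managed approach, the following sketch creates a dynamic security group whose membership rule keys off the department attribute; you would then assign the product license to that group in the portal. The cmdlet comes from the Azure AD PowerShell module, and the group name and rule are hypothetical examples rather than values from the article.

```powershell
# Minimal sketch: create a dynamic security group driven by a user attribute.
# Assumes the AzureAD (or AzureADPreview) module is installed; Connect-AzureAD signs you in.
Connect-AzureAD

New-AzureADMSGroup -DisplayName "Sales - Licensed users (example)" `
    -Description "Dynamic group used for group-based licensing (illustration only)" `
    -MailEnabled $false -MailNickname "sales-licensed-example" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```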
> [!WARNING] > Use caution when modifying an existing group's membership rule. When a rule is changed, the membership of the group will be re-evaluated and users who no longer match the new rule will be removed (users who still match the new rule will not be affected during this process). Those users will have their licenses removed during the process which may result in loss of service, or in some cases, loss of data.
For this example, modify one user and set their extensionAttribute1 to the value
A user can be a member of multiple groups with licenses. Here are some things to consider: -- Multiple licenses for the same product can overlap, and they result in all enabled services being applied to the user. An example could be that *E3 base services* contains the foundation services to deploy first, to all users, and *E3 extended services* contains additional services (Sway and Planner) to deploy only to some users. You can add the user to both groups. As a result, the user has 7 of the 12 services in the product enabled, while using only one license for this product.
+- Multiple licenses for the same product can overlap, and they result in all enabled services being applied to the user. An example could be that *M365-P1* contains the foundational services to deploy to all users, and *M365-P2* contains the P2 services to deploy only to some users. You can add a user to one or both groups and only use one license for the product.
-- Selecting the *E3* license shows more details, including information about which services are enabled for the user by by the group license assignment.
+- Select a license to view more details, including information about which services are enabled for the user by the group license assignment.
## Direct licenses coexist with group licenses
-When a user inherits a license from a group, you can't directly remove or modify that license assignment in the user's properties. You can change the license assignment only in the group and the changes are then propagated to all users. If you need to assign any additional features to a user that has their license from a group license assignment you must create another group to assign the additional features to the user.
-
-Directly assigned licenses can be removed, and donΓÇÖt affect a user's inherited licenses. Consider the user who inherits an Office 365 Enterprise E3 license from a group.
-
-Initially, the user inherits the license only from the *E3 basic services* group, which enables four service plans.
+When a user inherits a license from a group, you can't directly remove or modify that license in the user's properties. You can change the license assignment only in the group and the changes are then propagated to all group members. If you need to assign other features to a user that has their license from a group license assignment, you must create another group to assign the other features to the user.
-1. Select **Assign** to directly assign an E3 license to the user. For example, if you want to disable all service plans except Yammer Enterprise.
+When you use group-based licensing, consider the following scenarios:
- As a result, the user still uses only one license of the E3 product. But the direct assignment enables the Yammer Enterprise service for that user only. You can see which services are enabled by the group membership versus the direct assignment.
+- Group members inherit licenses assigned to the group.
+- License options for group-based licenses must be changed at the group level.
+- If different license options need to be assigned to a user, create a new group, assign a license to the group, then add the user to that group.
+- Users still use only one license of a product if different license options for that product are used in the different group-based licenses.
-1. When you use direct assignment, the following operations are allowed:
+When you use direct assignment, the following operations are allowed:
- - Yammer Enterprise can be turned off for a individual user. Because the service is assigned directly to the user, it can be changed.
- - Additional services can be enabled as well, as part of the directly assigned license.
- - The **Remove** button can be used to remove the direct license from the user. You can see that the user then has the inherited group license and only the original services remain enabled.
+- Licenses not already assigned through group-based licensing can be changed for an individual user.
+- Other services can be enabled, as part of a directly assigned license.
+- Directly assigned licenses can be removed and don't affect a user's inherited licenses.
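For example, a directly assigned license can be removed with PowerShell without touching any license the same user inherits from a group. This is a minimal sketch using the MSOnline module; the user principal name and the `contoso:ENTERPRISEPREMIUM` SKU are placeholders, not values from the article.

```powershell
# Minimal sketch: remove a directly assigned license; group-inherited licenses are unaffected.
# Assumes the MSOnline module is installed; Connect-MsolService prompts for credentials.
Connect-MsolService

# Placeholder values - use Get-MsolAccountSku to find the AccountSkuId values in your tenant.
$upn = "adele@contoso.com"
$sku = "contoso:ENTERPRISEPREMIUM"

Set-MsolUserLicense -UserPrincipalName $upn -RemoveLicenses $sku
```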
## Managing new services added to products
-When Microsoft adds a new service to a product license plan, it is enabled by default in all groups to which you have assigned the product license. Users in your organization who are subscribed to notifications about product changes will receive emails ahead of time notifying them about the upcoming service additions.
+When Microsoft adds a new service to a product license plan, it's enabled by default in all groups to which you've assigned the product license. Users in your organization who are subscribed to notifications about product changes will receive emails ahead of time notifying them about the upcoming service additions.
As an administrator, you can review all groups affected by the change and take action, such as disabling the new service in each group. For example, if you created groups targeting only specific services for deployment, you can revisit those groups and make sure that any newly added services are disabled.
-Here is an example of what this process may look like:
+Here's an example of what this process may look like:
-1. Originally, you assigned the *Office 365 Enterprise E5* product to several groups. One of those groups, called *O365 E5 - Exchange only* was designed to enable only the *Exchange Online (Plan 2)* service for its members.
+1. Originally, you assigned the *Microsoft 365 E5* product to several groups. One of those groups, called *Microsoft 365 E5 - Exchange only* was designed to enable only the *Exchange Online (Plan 2)* service for its members.
-2. You received a notification from Microsoft that the E5 product will be extended with a new service - *Microsoft Stream*. When the service becomes available in your organization, you can do the following:
+2. You received a notification from Microsoft that the E5 product will be extended with a new service - *Microsoft Stream*. When the service becomes available in your organization, you can complete the following steps:
-3. Go to the [**Azure Active Directory > Licenses > All products**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) blade and select *Office 365 Enterprise E5*, then select **Licensed Groups** to view a list of all groups with that product.
+3. Go to [**Azure Active Directory > Licenses > All products**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) and select *Microsoft 365 E5*, then select **Licensed Groups** to view a list of all groups with that product.
-4. Click on the group you want to review (in this case, *O365 E5 - Exchange only*). This will open the **Licenses** tab. Clicking on the E5 license will open a blade listing all enabled services.
+4. Select the group you want to review (in this case, *Microsoft 365 E5 - Exchange only*). The **Licenses** tab opens. Select the E5 license to view all enabled services.
> [!NOTE] > The *Microsoft Stream* service has been automatically added and enabled in this group, in addition to the *Exchange Online* service: ![Screenshot of new service added to a group license](./media/licensing-group-advanced/manage-new-services.png)
-5. If you want to disable the new service in this group, click the **On/Off** toggle next to the service and click the **Save** button to confirm the change. Azure AD will now process all users in the group to apply the change; any new users added to the group will not have the *Microsoft Stream* service enabled.
+5. If you want to disable the new service in this group, select the On/Off toggle next to the service, and select the **Save** button to confirm the change. Azure AD will now process all users in the group to apply the change; any new users added to the group won't have the *Microsoft Stream* service enabled.
> [!NOTE] > Users may still have the service enabled through some other license assignment (another group they are members of or a direct license assignment).
You can use a PowerShell script to check if users have a license assigned direct
![Screenshot of the Get-Msolaccountsku cmdlet](./medilet.png)
-3. Use the *AccountSkuId* value for the license you are interested in with [this PowerShell script](licensing-ps-examples.md#check-if-user-license-is-assigned-directly-or-inherited-from-a-group). This will produce a list of users who have this license with the information about how the license is assigned.
+3. Use the *AccountSkuId* value for the license you're interested in with [this PowerShell script](licensing-ps-examples.md#check-if-user-license-is-assigned-directly-or-inherited-from-a-group). The script produces a list of the users who have this license, along with information about how each license is assigned.
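The heart of that linked script is the `GroupsAssigningLicense` collection on each user's license object: an empty collection, or one that contains the user's own object ID, indicates a direct assignment, while group object IDs indicate inheritance. The following condensed sketch shows the idea with a placeholder *AccountSkuId*; it's a simplified approximation, not a replacement for the linked script.

```powershell
# Simplified sketch: report whether a license is assigned directly or inherited from a group.
# Assumes the MSOnline module and an existing Connect-MsolService session.
$skuId = "contoso:ENTERPRISEPREMIUM"   # placeholder - take this from Get-MsolAccountSku

Get-MsolUser -All | ForEach-Object {
    $license = $_.Licenses | Where-Object { $_.AccountSkuId -eq $skuId }
    if ($license) {
        # Empty GroupsAssigningLicense, or the user's own object ID, means a direct assignment.
        $direct = (-not $license.GroupsAssigningLicense) -or
                  ($license.GroupsAssigningLicense -contains $_.ObjectId)
        if ($direct) { $how = "Direct" } else { $how = "Inherited from group" }
        [PSCustomObject]@{
            User       = $_.UserPrincipalName
            Assignment = $how
        }
    }
}
```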
## Use Audit logs to monitor group-based licensing activity
You can use [Azure AD audit logs](../reports-monitoring/concept-audit-logs.md) t
- when the system started processing a group license change, and when it finished - what license changes were made to a user as a result of a group license assignment.
->[!NOTE]
-> Audit logs are available on most blades in the Azure Active Directory section of the portal. Depending on where you access them, filters may be pre-applied to only show activity relevant to the context of the blade. If you are not seeing the results you expect, examine [the filtering options](../reports-monitoring/concept-audit-logs.md#filtering-audit-logs) or access the unfiltered audit logs under [**Azure Active Directory > Activity > Audit logs**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Audit).
+Audit logs related to group-based licensing can be accessed from the Audit logs entry in the Groups or Licensing areas of Azure AD, or by using the following filter combinations in the main Audit logs:
-### Find out who modified a group license
+- **Service**: Core Directory
+- **Category**: GroupManagement or UserManagement
-1. Set the **Activity** filter to *Set group license* and click **Apply**.
-2. The results include all cases of licenses being set or modified on groups.
- >[!TIP]
- > You can also type the name of the group in the *Target* filter to scope the results.
+![Screenshot of the Azure AD audit logs with Core Directory and GroupManagement filter options highlighted.](media/licensing-group-advanced/audit-logs-group-licensing-filters.png)
-3. Select an item in the list to see the details of what has changed. Under *Modified Properties* both old and new values for the license assignment are listed.
+### Find out who modified a license
-Here is an example of recent group license changes, with details:
+1. To see the logs for group license changes, use the following Audit log filter options:
+ - **Service**: Core Directory
+ - **Category**: GroupManagement
+ - **Activity**: Set group license
+1. Select a row in the resulting table to view the details.
+1. Select the **Modified Properties** tab to see the old and new values for the license assignment.
-![Screenshot that shows the "Audit logs" page with a list item selected and the "Activity Details Audit log" pane open.](./media/licensing-group-advanced/audit-group-license-change.png)
+The following example shows the filter settings listed above, plus the *Target* filter set to all groups that start with "EMS."
-### Find out when group changes started and finished processing
+![Screenshot of the Azure AD audit logs including a Target filter.](media/licensing-group-advanced/audit-log-group-licensing-target-filter.png)
+
+To see license changes for a specific user, use the following filters:
+- **Service**: Core Directory
+- **Category**: UserManagement
+- **Activity**: Change user license
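If you prefer to pull the same events programmatically, the audit log can also be queried through Microsoft Graph. The sketch below uses the Microsoft Graph PowerShell SDK to retrieve recent *Change user license* events; the cmdlet, permission scope, and filter reflect the Graph `directoryAudit` resource rather than anything stated in the article, so treat it as an assumption-laden illustration.

```powershell
# Minimal sketch: query audit events for user license changes through Microsoft Graph.
# Assumes the Microsoft.Graph.Reports module and the AuditLog.Read.All permission.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Change user license'" -Top 25 |
    Select-Object ActivityDateTime, ActivityDisplayName,
        @{ Name = 'Target'; Expression = { $_.TargetResources[0].UserPrincipalName } }
```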
-When a license changes on a group, Azure AD will start applying the changes to all users.
+### Find out when group changes started and finished processing
-1. To see when groups started processing, set the **Activity** filter to *Start applying group based license to users*. Note that the actor for the operation is *Microsoft Azure AD Group-Based Licensing* - a system account that is used to execute all group license changes.
- >[!TIP]
- > Click an item in the list to see the *Modified Properties* field - it shows the license changes that were picked up for processing. This is useful if you made multiple changes to a group and you are not sure which one was processed.
+When a license changes on a group, Azure AD will start applying the changes to all users, but the changes could take time to process.
-2. Similarly, to see when groups finished processing, use the filter value *Finish applying group based license to users*.
- > [!TIP]
- > In this case, the *Modified Properties* field contains a summary of the results - this is useful to quickly check if processing resulted in any errors. Sample output:
- > ```
- > Modified Properties
- > ...
- > Name : Result
- > Old Value : []
- > New Value : [Users successfully assigned licenses: 6, Users for whom license assignment failed: 0.];
- > ```
+1. To see when groups started processing, use the following filters:
+ - **Service**: Core Directory
+ - **Category**: GroupManagement
+ - **Activity**: Start applying group based license to users
+1. Select a row in the resulting table to view the details.
+1. Select the **Modified Properties** tab to see the license changes that were picked up for processing.
+ - Use these details if you're making multiple changes to a group and aren't sure which license processed.
+ - The actor for the operation is *Microsoft Azure AD Group-Based Licensing*, which is a system account that is used to execute all group license changes.
-3. To see the complete log for how a group was processed, including all user changes, set the following filters:
- - **Initiated By (Actor)**: "Microsoft Azure AD Group-Based Licensing"
- - **Date Range** (optional): custom range for when you know a specific group started and finished processing
+To see when groups finished processing, change the **Activity** filter to *Finish applying group based license to users*. In this case, the **Modified Properties** field contains a summary of the results, which is useful to quickly check if processing resulted in any errors. Sample output:
+> ```
+> Modified Properties
+> ...
+> Name : Result
+> Old Value : []
+> New Value : [Users successfully assigned licenses: 6, Users for whom license assignment failed: 0.];
+> ```
-This sample output shows the start of processing, all resulting user changes, and the finish of processing.
+To see the complete log for how a group was processed, including all user changes, add the following filters:
+- **Target**: Group name
+- **Initiated By (Actor)**: Microsoft Azure AD Group-Based Licensing (case-sensitive)
+- **Date Range** (optional): Custom range for when you know a specific group started and finished processing
-![Screenshot group license changes](./media/licensing-group-advanced/audit-group-processing-log.png)
+This sample output shows the start and finish of processing the license change.
->[!TIP]
-> Clicking items related to *Change user license* will show details for license changes applied to each individual user.
+![Screenshot of the Azure AD audit log filters and start and end times of license changes.](./media/licensing-group-advanced/audit-log-license-start-finish.png)
## Deleting a group with an assigned license
-It is not possible to delete a group with an active license assigned. An administrator could delete a group not realizing that it will cause licenses to be removed from users - for this reason we require any licenses to be removed from the group first, before it can be deleted.
+It isn't possible to delete a group with an active license assigned. An administrator could delete a group not realizing that it will cause licenses to be removed from users. For this reason we require any licenses to be removed from the group first, before it can be deleted.
-When trying to delete a group in the Azure portal you may see an error notification like this:
+When trying to delete a group in the Azure portal, you may see an error notification like this:
![Screenshot group deletion failed](./media/licensing-group-advanced/groupdeletionfailed.png) Go to the **Licenses** tab on the group and see if there are any licenses assigned. If yes, remove those licenses and try to delete the group again.
-You may see similar errors when trying to delete the group through PowerShell or Graph API. If you are using a group synced from on-premises, Azure AD Connect may also report errors if it is failing to delete the group in Azure AD. In all such cases, make sure to check if there are any licenses assigned to the group, and remove them first.
+You may see similar errors when trying to delete the group through PowerShell or Graph API. If you're using a group synced from on-premises, Azure AD Connect may also report errors if it's failing to delete the group in Azure AD. In all such cases, make sure to check if there are any licenses assigned to the group, and remove them first.
## Limitations and known issues If you use group-based licensing, it's a good idea to familiarize yourself with the following list of limitations and known issues. -- Group-based licensing currently does not support groups that contain other groups (nested groups). If you apply a license to a nested group, only the immediate first-level user members of the group have the licenses applied.
+- Group-based licensing currently doesn't support groups that contain other groups (nested groups). If you apply a license to a nested group, only the immediate first-level user members of the group have the licenses applied.
- The feature can only be used with security groups, and Microsoft 365 groups that have securityEnabled=TRUE. -- The [Microsoft 365 admin center](https://admin.microsoft.com) does not currently support group-based licensing. If a user inherits a license from a group, this license appears in the Office admin portal as a regular user license. If you try to modify that license or try to remove the license, the portal returns an error message. Inherited group licenses cannot be modified directly on a user.
+- The [Microsoft 365 admin center](https://admin.microsoft.com) doesn't currently support group-based licensing. If a user inherits a license from a group, this license appears in the Office admin portal as a regular user license. If you try to modify that license or try to remove the license, the portal returns an error message. Inherited group licenses can't be modified directly on a user.
-- When licenses are assigned or modified for a large group (for example, 100,000 users), it could impact performance. Specifically, the volume of changes generated by Azure AD automation might negatively impact the performance of your directory synchronization between Azure AD and on-premises systems.
+- When licenses are assigned or modified for a large group (for example, 100,000 users), it could affect performance. Specifically, the volume of changes generated by Azure AD automation might negatively affect the performance of your directory synchronization between Azure AD and on-premises systems.
-- If you are using dynamic groups to manage your userΓÇÖs membership, verify that the user is part of the group, which is necessary for license assignment. If not, [check processing status for the membership rule](groups-create-rule.md) of the dynamic group.
+- If you're using dynamic groups to manage your user's membership, verify that the user is part of the group, which is necessary for license assignment. If not, [check processing status for the membership rule](groups-create-rule.md) of the dynamic group.
-- In certain high load situations, it may take a long time to process license changes for groups or membership changes to groups with existing licenses. If you see your changes take more than 24 hours to process group size of 60K users or less, please [open a support ticket](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/supportRequest) to allow us to investigate.
+- In certain high-load situations, it may take a long time to process license changes for groups or membership changes to groups with existing licenses. If your changes take more than 24 hours to process for a group of 60,000 users or fewer, please [open a support ticket](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/supportRequest) to allow us to investigate.
-- License management automation does not automatically react to all types of changes in the environment. For example, you might have run out of licenses, causing some users to be in an error state. To free up the available seat count, you can remove some directly assigned licenses from other users. However, the system does not automatically react to this change and fix users in that error state.
+- License management automation doesn't automatically react to all types of changes in the environment. For example, you might have run out of licenses, causing some users to be in an error state. To free up the available seat count, you can remove some directly assigned licenses from other users. However, the system doesn't automatically react to this change and fix users in that error state.
- As a workaround to these types of limitations, you can go to the **Group** blade in Azure AD, and click **Reprocess**. This command processes all users in that group and resolves the error states, if possible.
+ As a workaround to these types of limitations, you can go to **Azure AD** > **Groups** > select a group > select **Licenses** > select **Reprocess**. This command processes all users in that group and resolves the error states, if possible.
## Next steps
active-directory Add User Without Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-user-without-invite.md
-- Title: Add B2B guests without an invitation link or email - Azure AD
-description: You can let a guest user add other guest users to your Azure AD without redeeming an invitation in Azure Active Directory B2B collaboration.
----- Previously updated : 09/15/2022--------
-# Add B2B collaboration guest users without an invitation link or email
-
-You can now invite guest users by sending out a [direct link](redemption-experience.md#redemption-through-a-direct-link) to a shared app. With this method, guest users no longer need to use the invitation email, except in some special cases. A guest user clicks the app link, reviews and accepts the privacy terms, and then seamlessly accesses the app. For more information, see [B2B collaboration invitation redemption](redemption-experience.md).
-
-Before this new method was available, you could invite guest users without requiring the invitation email by adding an inviter (from your organization or from a partner organization) to the [**Guest inviter** directory role](external-collaboration-settings-configure.md#assign-the-guest-inviter-role-to-a-user), and then having the inviter add guest users to the directory, groups, or applications through the UI or through PowerShell. (If using PowerShell, you can suppress the invitation email altogether). For example:
-
-1. A user in the host organization (for example, WoodGrove) invites one user from the partner organization (for example, Sam@litware.com) as Guest.
-2. The administrator in the host organization [sets up policies](external-collaboration-settings-configure.md) that allow Sam to identify and add other users from the partner organization (Litware). (Sam must be added to the **Guest inviter** role.)
-3. Now, Sam can add other users from Litware to the WoodGrove directory, groups, or applications without needing invitations to be redeemed. If Sam has the appropriate enumeration privileges in Litware, it happens automatically.
-
-This original method still works. However, there's a small difference in behavior. If you use PowerShell, you'll notice that an invited guest account now has a **PendingAcceptance** status instead of immediately showing **Accepted**. Although the status is pending, the guest user can still sign in and access the app without clicking an email invitation link. The pending status means that the user has not yet gone through the [consent experience](redemption-experience.md#consent-experience-for-the-guest), where they accept the privacy terms of the inviting organization. The guest user sees this consent screen when they sign in for the first time.
-
-If you invite a user to the directory, the guest user must access the resource tenant-specific Azure portal URL directly (such as https://portal.azure.com/*resourcetenant*.onmicrosoft.com) to view and agree to the privacy terms.
-
-## Next steps
--- [What is Azure AD B2B collaboration?](what-is-b2b.md)-- [B2B collaboration invitation redemption](redemption-experience.md)-- [Delegate invitations for Azure Active Directory B2B collaboration](external-collaboration-settings-configure.md)-- [How do information workers add B2B collaboration users?](add-users-information-worker.md)-
active-directory Customize Invitation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md
Check out the invitation API reference in [https://developer.microsoft.com/graph
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)+
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
When a B2B user signs into a resource tenant to collaborate, a sign-in log is ge
See the following articles on Azure AD B2B collaboration: - [What is Azure AD B2B collaboration?](what-is-b2b.md)-- [Add B2B collaboration guest users without an invitation](add-user-without-invite.md) - [Adding a B2B collaboration user to a role](./add-users-administrator.md)
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md
# The elements of the B2B collaboration invitation email - Azure Active Directory
-Invitation emails are a critical component to bring partners on board as B2B collaboration users in Azure AD. ItΓÇÖs [not required that you send an email to invite someone using B2B collaboration](add-user-without-invite.md), but it gives the user all the information they need to decide if they accept your invite or not. It also gives them a link they can always refer to in the future when they need to return to your resources.
+Invitation emails are a critical component to bring partners on board as B2B collaboration users in Azure AD. ItΓÇÖs [not required that you send an email to invite someone using B2B collaboration](redemption-experience.md#redemption-through-a-direct-link), but it gives the user all the information they need to decide if they accept your invite or not. It also gives them a link they can always refer to in the future when they need to return to your resources.
![Screenshot showing the B2B invitation email](media/invitation-email-elements/invitation-email.png)
See the following articles on Azure AD B2B collaboration:
- [How do Azure Active Directory admins add B2B collaboration users?](add-users-administrator.md) - [How do information workers add B2B collaboration users?](add-users-information-worker.md) - [B2B collaboration invitation redemption](redemption-experience.md)-- [Add B2B collaboration users without an invitation](add-user-without-invite.md)
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 10/12/2022 Last updated : 01/09/2023
Microsoft account | This user is homed in a Microsoft account and authenticates
google.com | This user has a Gmail account and has signed up by using self-service to the other organization. facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization. mail | This user has signed up by using Azure AD Email one-time passcode (OTP).
-phone | This user has an email address that doesn't match a verified Azure AD domain or a SAML/WS-Fed domain, and isn't a Gmail address or Microsoft account.
{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the Identities field is clicked. ### Directory synced
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## December 2022
+
+### Updated articles
+
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Auditing and reporting a B2B collaboration user](auditing-and-reporting.md)
+ ## November 2022 ### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Add Microsoft account (MSA) as an identity provider for External Identities](microsoft-account.md) - [How users in your organization can invite guest users to an app](add-users-information-worker.md)
-## September 2022
-
-### Updated articles
--- [Self-service sign-up](self-service-sign-up-overview.md)-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)-- [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [Email one-time passcode authentication](one-time-passcode.md)-- [Add B2B collaboration guest users without an invitation link or email](add-user-without-invite.md)-- [Identity Providers for External Identities](identity-providers.md)-- [Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users](bulk-invite-powershell.md)-- [B2B collaboration user claims mapping in Azure Active Directory](claims-mapping.md)-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)-- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)-- [Leave an organization as an external user](leave-the-organization.md)-- [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md)
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
Title: Discover the current state of external collaboration with Azure Active Directory
-description: Learn methods to discover the current state of your collaboration.
+description: Learn methods to discover the current state of your collaboration
Previously updated : 09/02/2022 Last updated : 12/15/2022
# Discover the current state of external collaboration in your organization
-Before discovering the current state of your external collaboration, you should [determine your desired security posture](1-secure-access-posture.md). You'll consider your organizationΓÇÖs needs for centralized vs. delegated control, and any relevant governance, regulatory, and compliance targets.
+Before you learn about the current state of your external collaboration, determine a security posture. Consider centralized versus delegated control, as well as governance, regulatory, and compliance targets.
-Individuals in your organization are probably already collaborating with users from other organizations. Collaboration can be through features in productivity applications like Microsoft 365, by emailing, or by otherwise sharing resources with external users. The pillars of your governance plan will form as you discover:
+Learn more: [Determine your security posture for external users](1-secure-access-posture.md)
-* The users who are initiating external collaboration.
-* The external users and organizations you're collaborating with.
-* The access being granted to external users.
+Users in your organization likely collaborate with users from other organizations. Collaboration can occur through productivity applications like Microsoft 365, by email, or by sharing resources with external users. The foundation of your governance plan can include:
-## Users initiating external collaboration
-
-The users initiating external collaboration best understand the applications most relevant for external collaboration, and when that access should end. Understanding these users can help you determine who should be delegated permission to inviting external users, create access packages, and complete access reviews.
-
-To find users who are currently collaborating, review the [Microsoft 365 audit log for sharing and access request activities](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance#sharing-and-access-request-activities). You can also review the [Azure AD audit log for details on who invited B2B](../external-identities/auditing-and-reporting.md) users to your directory.
-
-## Find current collaboration partners
+* Users initiating external collaboration
+* Collaboration with external users and organizations
+* Access granted to external users
-External users may be [Azure AD B2B users](../external-identities/what-is-b2b.md) (preferable) with partner-managed credentials, or external users with locally provisioned credentials. These users are typically (but not always) marked with a UserType of Guest. You can enumerate guest users through the [Microsoft Graph API](/graph/api/user-list?tabs=http), [PowerShell](/graph/api/user-list?tabs=http), or the [Azure portal](../enterprise-users/users-bulk-download.md).
+## Users initiating external collaboration
-There are also tools specifically designed to identify existing Azure AD B2B collaboration such as identifying external Azure AD tenants, and which external users are accessing what applications. These tools include a [PowerShell module](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity) and an [Azure Monitor workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md).
+Users who initiate external collaboration best understand the applications needed for their work, and when access should end. Use that knowledge to determine which users should have delegated permission to invite external users, create access packages, and complete access reviews.
-### Use email domains and companyName property
+To find collaborating users:
-External organizations can be determined by the domain names of external user email addresses. If consumer identity providers such as Google are supported, this may not be possible. In this case we recommend that you write the companyName attribute to clearly identify the userΓÇÖs external organization.
+* [Microsoft 365, audit log activities](/microsoft-365/compliance/audit-log-activities?view=o365-worldwide&preserve-view=true)
+* [Auditing and reporting a B2B collaboration user](../external-identities/auditing-and-reporting.md)
-### Use allow or blocklists
+## Collaboration with external users and organizations
-Consider whether your organization wants to allow collaboration with only specific organizations. Alternatively, consider if your organization wants to block collaboration with specific organizations. At the tenant level, there is an [allow or blocklist](../external-identities/allow-deny-list.md), which can be used to control overall B2B invitations and redemptions regardless of source (such as Microsoft Teams, Microsoft SharePoint, or the Azure portal).
+External users might be Azure AD B2B users with partner-managed credentials, or external users with locally provisioned credentials. Typically, these users have a UserType of Guest. See [B2B collaboration overview](../external-identities/what-is-b2b.md).
-If youΓÇÖre using entitlement management, you can also scope access packages to a subset of your partners by using the Specific connected organizations setting as shown below.
+You can enumerate guest users with:
-![Screenshot of allowlisting or blocklisting in creating a new access package.](media/secure-external-access/2-new-access-package.png)
+* [Microsoft Graph API](/graph/api/user-list?tabs=http)
+* [PowerShell](/graph/api/user-list?tabs=http)
+* [Azure portal](../enterprise-users/users-bulk-download.md)
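
For example, the following Microsoft Graph PowerShell sketch lists guest accounts in the tenant. It assumes the Microsoft Graph PowerShell SDK is installed and that you can consent to the `User.Read.All` permission; adjust the selected properties for your own reporting needs.

```powershell
# Connect with a scope that can read user objects
Connect-MgGraph -Scopes "User.Read.All"

# Enumerate all accounts with a userType of Guest
Get-MgUser -Filter "userType eq 'Guest'" -All `
    -Property "displayName,mail,userPrincipalName,companyName,createdDateTime" |
    Select-Object DisplayName, Mail, CompanyName, CreatedDateTime
```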
-## Find access being granted to external users
+There are also tools designed to identify existing Azure AD B2B collaboration, external Azure AD tenants, and the applications external users access:
-Once you have an inventory of external users and organizations, you can determine the access granted to these users using the Microsoft Graph API to determine Azure AD [group membership](/graph/api/resources/groups-overview) or [direct application assignment](/graph/api/resources/approleassignment) in Azure AD.
+* [PowerShell module](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity)
+* [Azure Monitor workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md)
-### Enumerate application-specific permissions
+### Email domains and companyName property
-You may also be able to perform application-specific permission enumeration. For example, you can programmatically generate a permission report for SharePoint Online by using [this script](https://gallery.technet.microsoft.com/office/SharePoint-Online-c9ec4f64).
+Determine external organizations from the domain names of external users' email addresses. This discovery might not be possible with consumer identity providers such as Google. We recommend you write the companyName attribute to identify each user's external organization.
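
As a rough way to see which external organizations are represented, you can group existing guests by the domain of their email address, and optionally stamp companyName on accounts where the organization is known. This is a sketch that assumes Microsoft Graph PowerShell and the `User.ReadWrite.All` permission; the partner name is a placeholder.

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Summarize guests by the domain portion of their email address
$guests = Get-MgUser -Filter "userType eq 'Guest'" -All -Property "id,displayName,mail,companyName"
$guests | Where-Object Mail |
    Group-Object { ($_.Mail -split '@')[-1] } |
    Sort-Object Count -Descending |
    Select-Object Name, Count

# Optionally record the external organization explicitly (value is a placeholder)
Update-MgUser -UserId $guests[0].Id -CompanyName "Contoso"
```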
-Specifically investigate access to all of your business-sensitive and business-critical apps so that you are fully aware of any external access.
+### Allowlist, blocklist, and entitlement management
-### Detect ad hoc sharing
+To collaborate with, or block, specific organizations at the tenant level, use the allowlist or blocklist. Use this feature to control B2B invitations and redemptions regardless of source (such as Microsoft Teams, SharePoint, or the Azure portal). See [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
-If your email and network plans enable it, you can investigate content being shared through email or through unauthorized software as a service (SaaS) apps. [Microsoft 365 Data Loss Protection](/microsoft-365/compliance/data-loss-prevention-policies) helps you identify, prevent, and monitor the accidental sharing of sensitive information across your Microsoft 365 infrastructure. [Microsoft Defender for Cloud Apps](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security) can help you identify the use of unauthorized SaaS apps in your environment.
+If you use entitlement management, you can confine access packages to a subset of partners with the **Specific connected organizations** option, under New access packages, in Identity Governance.
-## Next steps
+ ![Screenshot of the Specific connected organizations option, under New access packages.](media/secure-external-access/2-new-access-package.png)
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
+## External user access
-1. [Determine your security posture for external access](1-secure-access-posture.md)
+After you have an inventory of external users and organizations, determine the access granted to these users. You can use the Microsoft Graph API to determine Azure AD group membership or application assignment.
-2. [Discover your current state](2-secure-access-current-state.md) (You are here.)
+* [Working with groups in Microsoft Graph](/graph/api/resources/groups-overview?context=graph%2Fcontext&view=graph-rest-1.0&preserve-view=true)
+* [Applications API overview](/graph/applications-concept-overview?view=graph-rest-1.0&preserve-view=true)
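
For a single external user, a sketch like the following reads group membership and direct application (app role) assignments with Microsoft Graph PowerShell. The object ID is a placeholder, and the cmdlets assume the `Directory.Read.All` permission.

```powershell
Connect-MgGraph -Scopes "Directory.Read.All"

$guestId = "00000000-0000-0000-0000-000000000000"   # placeholder object ID of a guest user

# Groups and directory roles the guest belongs to
Get-MgUserMemberOf -UserId $guestId -All |
    ForEach-Object { $_.AdditionalProperties.displayName }

# Applications the guest is directly assigned to
Get-MgUserAppRoleAssignment -UserId $guestId -All |
    Select-Object ResourceDisplayName, AppRoleId
```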
-3. [Create a governance plan](3-secure-access-plan.md)
+### Enumerate application permissions
-4. [Use groups for security](4-secure-access-groups.md)
+Investigate access to your business-sensitive and business-critical apps so that you're aware of all external access. See [Grant or revoke API permissions programmatically](/graph/permissions-grant-via-msgraph?view=graph-rest-1.0&tabs=http&pivots=grant-application-permissions&preserve-view=true).
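
To see who, including external users, is assigned to a specific application, you can enumerate the app role assignments on its service principal. A minimal sketch, assuming Microsoft Graph PowerShell, the `Application.Read.All` permission, and a placeholder service principal ID:

```powershell
Connect-MgGraph -Scopes "Application.Read.All"

$spId = "00000000-0000-0000-0000-000000000000"   # placeholder service principal (enterprise app) ID

# Users, groups, and service principals assigned to the application
Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $spId -All |
    Select-Object PrincipalDisplayName, PrincipalType, CreatedDateTime
```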
-5. [Transition to Azure AD B2B](5-secure-access-b2b.md)
+### Detect informal sharing
-6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
+If your email and network plans support it, you can investigate content shared through email or through unauthorized software as a service (SaaS) apps.
-7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
+* Identify, prevent, and monitor accidental sharing
+ * [Learn about data loss prevention](/microsoft-365/compliance/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true )
+* Identify unauthorized apps
+ * [Microsoft Defender for Cloud Apps](/security/business/siem-and-xdr/microsoft-defender-cloud-apps?rtc=1)
-8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
+## Next steps
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Create a security plan for external access](3-secure-access-plan.md)
+* [Securing external access with groups](4-secure-access-groups.md)
+* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md)
+* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
+* [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
Title: Create a security plan for external access to Azure Active Directory
-description: Plan the security for external access to your organization's resources..
+description: Plan the security for external access to your organization's resources.
Previously updated : 09/13/2022 Last updated : 12/15/2022
# Create a security plan for external access
-Now that you have [determined your desired security posture security posture for external access](1-secure-access-posture.md) and [discovered your current collaboration state](2-secure-access-current-state.md), you can create an external user security and governance plan.
+Before you create an external-access security plan, ensure you've completed the preceding steps:
-This plan should document the following:
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-* The applications and other resources that should be grouped for access.
+For your security plan, document the following information:
-* The appropriate sign-in conditions for external users. These can include device state, sign-in location, client application requirements, and user risk.
+* Applications and resources to be grouped for access
+* Sign-in conditions for external users
+ * Device state, sign-in location, client application requirements, and user risk
+* Policies that determine when to review and remove access
+* User populations to be grouped for a similar experience
-* Business policies on when to review and remove access.
+After you document the information, use Microsoft identity and access management policies, or another identity provider (IdP) to implement the plan.
-* User populations to be grouped and treated similarly.
+## Resources to be grouped for access
-Once these areas are documented, you can use identity and access management policies from Microsoft or any other identity provider (IdP) to implement this plan.
+To group resources for access:
-## Document resources to be grouped for access
+* Microsoft Teams groups files, conversation threads, and other resources. Formulate an external access strategy for Microsoft Teams.
+ * See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+* Use entitlement management access packages to create and delegate management of packages of applications, groups, teams, SharePoint sites, etc.
+ * [Create a new access package in entitlement management](/azure/active-directory/governance/entitlement-management-access-package-create)
+* Apply Conditional Access policies to up to 250 applications with the same access requirements
+ * [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+* Use Cross Tenant Access Settings inbound access to define which applications groups of external users can access
+ * [Overview: Cross-tenant access with Azure AD External Identities](/azure/active-directory/external-identities/cross-tenant-access-overview)
-There are multiple ways to group resources for access.
+Document the applications to be grouped. Considerations include:
-* Microsoft Teams groups files, conversation threads, and other resources in one place. You should formulate an external access strategy for Microsoft Teams. See [Secure access to Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md).
+* **Risk profile** - Assess the risk if a bad actor gains access to an application.
+ * Identify each application as high, medium, or low risk. Avoid grouping high-risk applications with low-risk ones.
+ * Document applications that can't be shared with external users
+* **Compliance frameworks** - Determine compliance frameworks for apps
+ * Identify access and review requirements
+* **Applications for roles or departments** - Assess applications to be grouped for a role or department access
+* **Collaboration applications** - Identify collaboration applications external users can access, such as Teams and SharePoint
+ * For productivity applications, external users might have licenses, or you might provide access
-* Entitlement Management Access Packages enable you to create and delegate management of packages of Applications, Groups, Teams, SharePoint sites, and other resources to which you can grant access.
+For application and resource group access by external users, document the following information:
-* Conditional Access policies can be applied to up to 250 applications with the same access requirements.
+* Descriptive group name, for example High_Risk_External_Access_Finance
+* Applications and resources in the group
+* Application and resource owners and contact information
+* Whether access is controlled by IT or delegated to a business owner
+* Prerequisites for access: background check, training, etc.
+* Compliance requirements to access resources
+* Challenges, for example multi-factor authentication (MFA) for some resources
+* Cadence for reviews, by whom, and where it's documented
-* Cross Tenant Access Settings Inbound Access can define what application groups of external users are allowed to access.
+> [!TIP]
+> Use this type of governance plan for internal access.
-However you will manage access, you must document which applications should be grouped together. Considerations should include:
-
-* **Risk profile**. What is the risk to your business if a bad actor gained access to an application? Consider coding each application as high, medium, or low risk. Be cautious about grouping high-risk applications with low-risk ones.
-
- * Document applications that should never be shared with external users as well.
-
-* **Compliance Frameworks**. What if any compliance frameworks must an application meet? What are the access and review requirements?
-
-* **Applications for specific job roles or departments**. Are there applications that should be grouped because all users in a specific job role or department will need access?
-
-* **Collaboration-focused applications**. What collaboration-focused applications should external users be able to access? Microsoft Teams and SharePoint may need to be accessible. For productivity applications within Office 365, like Word and Excel, will external users bring their own licenses, or will you need to license them and provide access?
-
-For each grouping of applications and resources that you want to make accessible to external users , document the following:
-
-* A descriptive name for the group, for example *High_Risk_External_Access_Finance*.
-
-* Complete list of all applications and resources in the group.
-
-* Application and resource owners and contact information.
-
-* Whether the access is controlled by IT, or delegated to the business owner.
-
-* Any prerequisites, for example completing a background check or a training, for access.
-
-* Any compliance requirements for accessing the resources.
-
-* Any additional challenges, for example requiring multi-factor-authentication for specific resources.
-
-* How often access will be reviewed, by whom, and where it will be documented.
+## Document sign-in conditions for external users
-This type of governance plan can and should also be completed for internal access as well.
+Determine the sign-in requirements for external users who request access. Base requirements on the resource risk profile and on the user's risk assessment during sign-in. Configure sign-in conditions in Conditional Access, where each policy pairs a condition with an outcome; for example, you can require MFA.
-## Document sign-in conditions for external users
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
-As part of your plan you must determine the sign-in requirements for your external users as they access resources. Sign-in requirements are often based on the risk profile of the resources, and the risk assessment of the usersΓÇÖ sign-in.
+**Resource risk-profile sign-in conditions**
-Sign-in conditions are configured in [Azure AD Conditional Access](../conditional-access/overview.md) and are made up of a condition and an outcome. For example, when to require multi-factor authentication
+Consider the following risk-based policies to trigger MFA.
-**Resource risk-based sign-in conditions.**
+* **Low** - MFA for some application sets
+* **Medium** - MFA when other risks are present
+* **High** - External users always use MFA
-| Application Risk Profile| Consider these policies for triggering multi-factor authentication |
-| - |-|
-| Low risk| Require MFA for specific application sets |
-| Med risk| Require MFA when other risks present |
-| High risk| Require MFA always for external users |
+Learn more:
+* [Tutorial: Enforce multi-factor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md)
+* Trust MFA from external tenants
+ * See, [Configure cross-tenant access settings for B2B collaboration, Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings)
-Today, you can [enforce multi-factor authentication for B2B users in your tenant](../external-identities/b2b-tutorial-require-mfa.md). You can also trust the MFA from external tenants to satisfy your MFA requirements using [Cross Tenant Access Settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings).
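
As one way to express the high-risk tier above (always require MFA for external users), the following sketch creates a report-only Conditional Access policy that requires MFA for guests and external users across all applications. It assumes Microsoft Graph PowerShell and the `Policy.ReadWrite.ConditionalAccess` permission; the display name and state are examples, and you should validate in report-only mode before enforcing.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require MFA for guests - example"
    state       = "enabledForReportingButNotEnforced"   # report-only while you validate impact
    conditions  = @{
        users        = @{ includeUsers = @("GuestsOrExternalUsers") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```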
+### User and device sign-in conditions
-**User- and device-based sign in conditions**.
+Use the following table to help assess policies that address risk.
-| User or sign-in risk| Consider these policies |
-| - | - |
+| User or sign-in risk| Proposed policy |
+| | |
| Device| Require compliant devices | | Mobile apps| Require approved apps |
-| Identity protection shows high risk| Require user to change password |
-| Network location| Require sign in from a specific IP address range to highly confidential projects |
-
-Today, to use device state as an input to a policy, the device must be either be registered or joined to your tenant or [Cross Tenant Access Settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings) must be configured to trust the device claims from the home tenant.
+| Identity protection is high risk| Require user to change password |
+| Network location| To access confidential projects, require sign-in from an IP address range |
-[Identity Protection risk-based policies](../conditional-access/howto-conditional-access-policy-risk.md) can be used. However, issues must be mitigated in the userΓÇÖs home tenant.
+To use device state as a policy input, the device must be registered or joined to your tenant, or cross-tenant access settings must be configured to trust device claims from the home tenant. See [Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings).
-For [network locations](../conditional-access/howto-conditional-access-policy-location.md), you can restrict access to any IP addresses range that you own. You might use this if you only want external partners accessing an application while they are on site at your organization.
+You can use identity-protection risk policies. However, risk issues must be mitigated in the user's home tenant. See [Common Conditional Access policy: Sign-in risk-based multifactor authentication](../conditional-access/howto-conditional-access-policy-risk.md).
-[Learn more about conditional access policies](../conditional-access/overview.md).
+For network locations, you can restrict access to IP address ranges you own. Use this method if external partners should access applications only while on site at your location. See [Conditional Access: Block access by location](../conditional-access/howto-conditional-access-policy-location.md).
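
If you plan to restrict some external access to an IP address range you own, one approach is to define a named location and then reference it from a Conditional Access policy. A minimal sketch, assuming Microsoft Graph PowerShell; the range and name are placeholders.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$location = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "On-site partner network (example)"
    isTrusted     = $false
    ipRanges      = @(
        @{ "@odata.type" = "#microsoft.graph.iPv4CidrRange"; cidrAddress = "203.0.113.0/24" }
    )
}

# The returned location ID can then be referenced in a policy's conditions.locations block
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $location
```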
## Document access review policies
-Document your business policies for when you need to review access to resources, and when you need to remove account access for external users. Inputs to these decisions may include:
-
-* Requirements detailed in any compliance frameworks.
+Document policies that dictate when to review resource access, and remove account access for external users. Inputs might include:
+* Compliance frameworks requirements
* Internal business policies and processes- * User behavior
-While your policies will be highly customized to your needs, consider the following:
-
-* **Entitlement Management Access Reviews**. Use the functionality in Entitlement Management to
-
- * [Automatically expire access packages](../governance/entitlement-management-access-package-lifecycle-policy.md), and thus external user access to the included resources.
-
- * Set a [required review frequency](../governance/entitlement-management-access-reviews-create.md) for access reviews.
+Your policies will be customized to your needs; however, consider the following parameters:
- * If you are using [connected organizations](../governance/entitlement-management-organization.md) to group all users from a single partner, schedule regular reviews with the business owner and the partner representative.
+* **Entitlement management access reviews**:
+ * [Change lifecycle settings for an access package in entitlement management](../governance/entitlement-management-access-package-lifecycle-policy.md)
+ * [Create an access review of an access package in entitlement management](../governance/entitlement-management-access-reviews-create.md)
+ * [Add a connected organization in entitlement management](../governance/entitlement-management-organization.md): group users from a partner and schedule reviews
+* **Microsoft 365 groups**:
+ * [Microsoft 365 group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy?view=o365-worldwide&preserve-view=true)
+* **Options**:
+ * If external users don't use access packages or Microsoft 365 groups, determine when accounts become inactive or deleted
+ * Remove sign-in for accounts that don't sign in for 90 days
+ * Regularly assess access for external users
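
The 90-day option above can be scripted. The following sketch finds guests with no sign-in in the last 90 days and blocks sign-in for them; it assumes Microsoft Graph PowerShell, the `AuditLog.Read.All` and `User.ReadWrite.All` permissions, and that the `signInActivity` property is populated in your tenant. Review the list before disabling anything.

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All","AuditLog.Read.All"

$cutoff = (Get-Date).AddDays(-90)
$guests = Get-MgUser -Filter "userType eq 'Guest'" -All `
    -Property "id,displayName,mail,accountEnabled,signInActivity"

# Guests with no recorded sign-in, or none since the cutoff date
$stale = $guests | Where-Object {
    -not $_.SignInActivity.LastSignInDateTime -or
    $_.SignInActivity.LastSignInDateTime -lt $cutoff
}

# Review $stale first, then block sign-in for confirmed inactive accounts
$stale | ForEach-Object { Update-MgUser -UserId $_.Id -AccountEnabled:$false }
```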
-* **Microsoft 365 Groups**. Set a [group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy) for Microsoft 365 Groups to which external users are invited.
+## Access control methods
-* **Other options**. If external users have access outside of Entitlement Management access packages or Microsoft 365 groups, set up business process to review when accounts should be made inactive or deleted. For example:
+Some features, for example entitlement management, are available with an Azure AD Premium P2 license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses.
- * Remove sign-in ability for any account not signed in to for 90 days.
-
- * Assess access needs and take action at the end of every project with external users.
-
-## Determine your access control methods
-
-Now that you know what you want to control access to, how those assets should be grouped for common access, and required sign-in and access review policies, you can decide on how to accomplish your plan.
-
-Some functionality, for example [Entitlement Management](../governance/entitlement-management-overview.md), is only available with an Azure AD Premium 2 (P2) licenses. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses.
-
-Other combinations of Microsoft 365, Office 365 and Azure AD also enable some functionality for managing external users. See [Information Protection](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance) for more informationΓÇï.
+Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance).
> [!NOTE]
-> Licenses are per user. Therefore, you can have specific users, including administrators and business owners delegated access control, at the Azure AD P2 or Microsoft 365 E5 level without enabling those licenses for all users. Your first 50,000 external users are free. If you do not enable P2 licenses for your other internal users, they will not be able to use entitlement management functionality like Access packages.
+> Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management.
+## Govern access with Azure AD P2 and Microsoft 365 or Office 365 E5
-## Govern access with Azure AD P2 and Microsoft / Office 365 E5
-Azure AD P2 and Microsoft 365 E5 have the full suite of security and governance tools.
+Azure AD P2 and Microsoft 365 E5 have all the security and governance tools.
-### Provisioning, signing in, reviewing access, and deprovisioning. Bolded entries are preferred methods
-
-| Feature| Provision external users| Enforce sign-in reqs.| Review access| Deprovision access |
-| - | - | - | - | - |
-| Azure AD B2B Collaboration| Invite via email, OTP, self-service| | **Periodic review per partner**| Remove account<br>Restrict sign in |
-| Entitlement Management| **Add user via assignment or self-service access**ΓÇï| | Access reviews|**Expiration of, or removal from, access package**|
-| Office 365 Groups| | | Review group memberships| Expiration or deletion of group<br> Removal form group |
-| Azure AD security groups| | **Conditional access policies** (Add external users to security groups as necessary)| | |
+### Provision, sign-in, review access, and deprovision access
+Entries in bold are recommended.
+| Feature| Provision external users| Enforce sign-in requirements| Review access| Deprovision access |
+| - | - | - | - | - |
+| Azure AD B2B collaboration| Invite via email, one-time password (OTP), self-service|N/A| **Periodic partner review**| Remove account<br>Restrict sign-in |
+| Entitlement management| **Add user by assignment or self-service access**|N/A| Access reviews|**Expiration of, or removal from, access package**|
+| Office 365 groups|N/A|N/A| Review group memberships| Group expiration or deletion<br> Removal from group |
+| Azure AD security groups|N/A| **Conditional Access policies**: Add external users to security groups as needed|N/A| N/A|
- ### Access to resources. Bolded entries are preferred methods
+### Resource access
+
+Entries in bold are recommended.
-|Feature | APP & resource access| SharePoint & OneDrive access| Teams access| Email & document security |
+|Feature | App and resource access| SharePoint and OneDrive access| Teams access| Email and document security |
| - |-|-|-|-|
-| Entitlement Management| **Add user via assignment or self-service accessΓÇï**| **Access packages**| **Access packages**| |
-| Office 365 Group| | Access to site(s) (and associated content) ΓÇïincluded with group| Access to teams (and associated content)ΓÇïincluded with group| |
-| Sensitivity labels| | **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access** |
-| Azure AD security groups| **Conditional Access policies for access not included in access packages**| | | |
+| Entitlement management| **Add user by assignment or self-service access**| **Access packages**| **Access packages**| N/A|
+| Office 365 Group|N/A | Access to site(s) and group content| Access to teams and group content|N/A|
+| Sensitivity labels|N/A| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access**| **Manually and automatically classify and restrict access** |
+| Azure AD security groups| **Conditional Access policies for access not included in access packages**|N/A|N/A|N/A|
+### Entitlement management 
-### Entitlement Management 
+Use entitlement management to provision and deprovision access to groups and teams, applications, and SharePoint sites. Define the connected organizations allowed access, self-service requests, and approval workflows. To ensure access ends correctly, define expiration policies and access reviews for packages.
-[Entitlement management access packages](../governance/entitlement-management-access-package-create.md) enable provisioning and deprovisioning access to Groups and Teams, Applications, and SharePoint sites. You can define which connected organizations are allowed access, whether self-service requests are allowed, and what approval workflows are required (if any) to grant access. To ensure that access doesnΓÇÖt stay around longer than necessary, you can define expiration policies and access reviews for each access package.
-
-
+Learn more: [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md)
-## Govern access with Azure AD P1 and Microsoft / Office 365 E3
-You can achieve robust governance with Azure AD P1 and Microsoft 365 E3
+## Governance with Azure AD P1, Microsoft 365, Office 365 E3
-### Provisioning, signing in, reviewing access, and deprovisioning
+### Provision, sign-in, review access, and deprovision access
+Items in bold are recommended.
|Feature | Provision external users| Enforce sign-in requirements| Review access| Deprovision access | | - |-|-|-|-|
-| Azure AD B2B Collaboration| **Invite via email, OTP, self-service**| Direct B2B federation| **Periodic review per partner**| Remove account<br>Restrict sign in |
-| Microsoft or Office 365 Groups| | | | Expiration of or deletion of group.<br>Removal from group. |
-| Security groups| | **Add external users to security groups (org, team, project, etc.)**| | |
-| Conditional Access policies| | **Sign-in Conditional Access policies for external users**| | |
+| Azure AD B2B collaboration| **Invite by email, OTP, self-service**| Direct B2B federation| **Periodic partner review**| Remove account<br>Restrict sign-in |
+| Microsoft 365 or Office 365 groups|N/A|N/A|N/A|Group expiration or deletion<br>Removal from group |
+| Security groups|N/A| **Add external users to security groups (org, team, project, etc.)**|N/A| N/A|
+| Conditional Access policies|N/A| **Sign-in Conditional Access policies for external users**|N/A|N/A|
+### Resource access
- ### Access to resources.
-
-|Feature | APP & resource access| SharePoint & OneDrive access| Teams access| Email & document security |
+|Feature | App and resource access| SharePoint and OneDrive access| Teams access| Email and document security |
| - |-|-|-|-|
-| Microsoft or Office 365 Groups| | **Access to site(s) included with group (and associated content)**|**Access to teams included with Microsoft 365 group (and associated content)**| |
-| Sensitivity labels| | Manually classify and restrict access| Manually classify and restrict access.| Manually classify to restrict and encrypt |
-| Conditional Access Policies| Conditional Access policies for access control| | | |
-| Additional methods| | Restrict SharePoint site access granularly with security groups.<br>Disallow direct sharing.| **Restrict external invitations from within teams**| |
-
+| Microsoft 365 or Office 365 groups|N/A| **Access to group site(s) and associated content**|**Access to Microsoft 365 group teams and associated content**|N/A|
+| Sensitivity labels|N/A| Manually classify and restrict access| Manually classify and restrict access| Manually classify to restrict and encrypt |
+| Conditional Access policies| Conditional Access policies for access control|N/A|N/A|N/A|
+| Other methods|N/A| Restrict SharePoint site access with security groups<br>Disallow direct sharing| **Restrict external invitations from a team**|N/A|
### Next steps
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
-
-1. [Determine your security posture for external access](1-secure-access-posture.md)
-
-2. [Discover your current state](2-secure-access-current-state.md)
-
-3. [Create a governance plan](3-secure-access-plan.md) (You are here.)
-
-4. [Use groups for security](4-secure-access-groups.md)
-
-5. [Transition to Azure AD B2B](5-secure-access-b2b.md)
-
-6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-
-7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
-
-8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+* [Securing external access with groups](4-secure-access-groups.md)
+* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md)
+* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
+* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
+* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
Previously updated : 09/13/2022 Last updated : 12/14/2022
# Transition to governed collaboration with Azure Active Directory B2B collaboration
-Getting your collaboration under control is key to securing external access to your resources. Before going forward with this article, be sure that you have:
+Understanding collaboration helps secure external access to your resources. We recommend you read the following articles first:
-* [Determined your security posture](1-secure-access-posture.md)
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+* [Create a security plan for external access](3-secure-access-plan.md)
+* [Securing external access with groups](4-secure-access-groups.md)
-* [Discovered your current state](2-secure-access-current-state.md)
+Use the information in this article to move external collaboration into Azure Active Directory B2B (Azure AD B2B) collaboration.
-* [Created a security plan](3-secure-access-plan.md)
+* See, [B2B collaboration overview](../external-identities/what-is-b2b.md)
+* Learn about: [External Identities in Azure Active Directory](../external-identities/external-identities-overview.md)
-* [Understood how groups and security work together](4-secure-access-groups.md)
+## Control collaboration
-Once youΓÇÖve done those things, you're ready to move into controlled collaboration. This article will guide you to move all your external collaboration into [Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B). Azure AD B2B is a feature of [Azure AD External Identities](../external-identities/external-identities-overview.md).
+You can limit the organizations your users collaborate with (inbound and outbound), and who in your organization can invite guests. Most organizations permit business units to decide collaboration, and delegate approval and oversight. For example, organizations in government, education, and financial services often don't permit open collaboration. You can use Azure AD features to control collaboration.
-## Control who your organization collaborates with
+You can control access to your tenant by deploying one or more of the following solutions:
-You can decide whether to limit which organizations your users can collaborate with (inbound and outbound), and who within your organization can invite guests. Most organizations take the approach of permitting business units to decide with whom they collaborate, and delegating the approval and oversight as needed. For example, some government, education, and financial services organizations don't permit open collaboration. You may wish to use the Azure AD features to scope collaboration, as discussed in the rest of this section.
-
-You have several options on how to control who is allowed to access your tenant. These options include:
--- **External Collaboration Settings** ΓÇô Restrict the email domains that invitations can be sent to. --- **Cross Tenant Access Settings** ΓÇô Control what applications can be accessed by guests on a per user/group/tenant basis (inbound). Also controls what external Azure AD tenants and applications your own users can access (outbound). --- **Connected Organizations** ΓÇô Control what organizations are allowed to request Access Packages in Entitlement Management. -
-Depending on the requirements of your organization, you may need to deploy one or more of these solutions.
+- **External Collaboration Settings** - Restrict the email domains that invitations are sent to
+- **Cross Tenant Access Settings** - Control application access by guests per user, group, or tenant (inbound). Control which external Azure AD tenants and applications your users can access (outbound)
+- **Connected Organizations** - Determine which organizations can request Access Packages in Entitlement Management
### Determine collaboration partners
-First, ensure you have documented the organizations you are currently collaborating with, and if necessary, the domains for those organizations' users. Note that domain-based restrictions may be impractical, since one collaboration partner may have multiple domains, and a partner could add domains at any time. For example, a partner may have multiple business units with separate domains and add more domains as they configure more synchronization.
+Document the organizations you collaborate with, and organization users' domains, if needed. Domain-based restrictions might be impractical. One collaboration partner can have multiple domains, and a partner can add domains at any time. For example, a partner with multiple business units might have separate domains, and add more domains as they configure synchronization.
-If your users have already started using Azure AD B2B, you can discover what external Azure AD tenants your users are currently collaborating with via the sign-in logs, [PowerShell](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity), or a [built-in workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md).
+If your users already use Azure AD B2B, you can discover the external Azure AD tenants they're collaborating with via the sign-in logs, PowerShell, or a workbook. Learn more:
-Next, determine if you want to enable future collaboration with
+* [Get MsIdCrossTenantAccessActivity](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity)
+* [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md)
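
If you prefer to query the sign-in logs directly, a sketch like the following summarizes which external home tenants are signing in to your resources. It assumes Microsoft Graph PowerShell, the `AuditLog.Read.All` permission, and that the `homeTenantId` property is populated on your sign-in events.

```powershell
Connect-MgGraph -Scopes "AuditLog.Read.All"

$myTenantId = (Get-MgContext).TenantId

# Pull a window of recent sign-ins and group by the user's home tenant
Get-MgAuditLogSignIn -Top 1000 |
    Where-Object { $_.HomeTenantId -and $_.HomeTenantId -ne $myTenantId } |
    Group-Object HomeTenantId |
    Sort-Object Count -Descending |
    Select-Object Name, Count
```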
-- any external organization (most inclusive)
+You can enable future collaboration with:
-- all external organizations except those explicitly denied--- only specific external organizations (most restrictive)
+- External organizations (most inclusive)
+- External organizations (but not denied organizations)
+- Specific external organizations (most restrictive)
> [!NOTE]
-> The more restrictive your collaboration settings, the more likely that your users will go outside of your approved collaboration framework. We recommend enabling the broadest collaboration your security needs will allow, and closely reviewing that collaboration rather than being overly restrictive.
-
-Also note that limiting to a single domain may inadvertently prevent authorized collaboration with organizations, which have other unrelated domains for their users. For example, if doing business with an organization Contoso, the initial point of contact with Contoso might be one of their US-based employees who has an email with a ".com" domain. However if you only allow the ".com" domain you may inadvertently omit their Canadian employees who have ".ca" domain.
+> If your collaboration settings are highly restrictive, your users might go outside the collaboration framework. We recommend you enable the broadest collaboration your security requirements allow.
-There are circumstances in which you would want to only allow specific collaboration partners for a subset of users. For example, a university may want to restrict student accounts from accessing external tenants but need to allow faculty to collaborate with external organizations.
+Limits to one domain can prevent authorized collaboration with organizations that have other, unrelated domains. For example, the initial point of contact with Contoso might be a US-based employee with an email address on the .com domain. However, if you allow only the .com domain, you might omit Canadian employees who use the .ca domain.
-### Using allow and blocklists with External Collaboration Settings
+You can allow specific collaboration partners for a subset of users. For example, a university restricts student accounts from accessing external tenants, but allows faculty to collaborate with external organizations.
-You can use an allowlist or blocklist to [restrict invitations to B2B users](../external-identities/allow-deny-list.md) from specific organizations. You can use only an allow or a blocklist, not both.
+### Allowlist and blocklist with External Collaboration Settings
-* An [allowlist](../external-identities/allow-deny-list.md) limits collaboration to only those domains listed; all other domains are effectively on the blocklist.
+You can use an allowlist or blocklist to restrict invitations to B2B users from specific organizations. You can use only an allowlist or a blocklist, not both.
-* A [blocklist](../external-identities/allow-deny-list.md) allows collaboration with any domain not on the blocklist.
+* **Allowlist** - Limit collaboration to a list of domains. All other domains are on the blocklist.
+* **Blocklist** - Allow collaboration with domains not on the blocklist
-> [!NOTE]
-> Limiting to a predefined domain may inadvertently prevent authorized collaboration with organizations, which have other domains for their users. For example, if doing business with an organization Contoso, the initial point of contact with Contoso might be one of their US-based employees who has an email with a ".com" domain. However, if you only allow the ".com" domain you may inadvertently omit their Canadian employees who have ".ca" domain.
+Learn more: [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md)
> [!IMPORTANT]
-> These lists do not apply to users who are already in your directory. By default, they also do not apply to OneDrive for Business and SharePoint allow/blocklists which are separate unless you enable the [SharePoint/OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration).
+> These lists don't apply to users already in your directory. By default, they also don't apply to the OneDrive for Business and SharePoint allowlists or blocklists, which are separate unless you enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration).
-Some organizations use a list of known ΓÇÿbad actorΓÇÖ domains provided by their managed security provider for their blocklist. For example, if the organization is legitimately doing business with Contoso and using a .com domain, there may be an unrelated organization that has been using the Contoso .org domain and attempting a phishing attack to impersonate Contoso employees.
+Some organizations use a blocklist of known bad-actor domains from a managed security provider. For example, if the organization does business with Contoso and uses a .com domain, an unrelated organization can use the Contoso .org domain and attempt a phishing attack.
-### Using Cross Tenant Access Settings
+### Cross Tenant Access Settings
-You can control both inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust MFA, Compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from all or a subset of external Azure AD tenants. When you configure an organization specific policy, it applies to the entire Azure AD tenant and will cover all users from that tenant regardless of the userΓÇÖs domain suffix.
+You can control inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust multi-factor authentication (MFA), a compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from external Azure AD tenants. When you configure an organizational policy, it applies to the Azure AD tenant and covers users in that tenant, regardless of domain suffix.
-You can enable collaboration across Microsoft clouds such as Microsoft Azure China 21Vianet or Microsoft Azure Government with additional configuration. Determine if any of your collaboration partners reside in a different Microsoft cloud. If so, you should [enable collaboration with these partners using Cross Tenant Access Settings](../external-identities/cross-cloud-settings.md).
+You can enable collaboration across Microsoft clouds such as Microsoft Azure operated by 21Vianet (Azure China) or Microsoft Azure Government. Determine if your collaboration partners reside in a different Microsoft cloud. Learn more: [Configure Microsoft cloud settings for B2B collaboration (Preview)](../external-identities/cross-cloud-settings.md).
-If you wish to allow inbound access to only specific tenants (allowlist), you can set the default policy to block access and then create organization policies to granularly allow access on a per user, group, and application basis.
+You can allow inbound access to specific tenants (allowlist), and set the default policy to block access. You then create organizational policies that allow access by user, group, or application.
-If you wish to block access to specific tenants (blocklist), you can set the default policy as allow and then create organization policies that block access to those specific tenants.
+You can block access to tenants (blocklist). Set the default policy to Allow and then create organizational policies that block access to some tenants.
> [!NOTE]
-> Cross Tenant Access Settings Inbound Access does not prevent the invitations from being sent or redeemed. However, it does control what applications can be accessed and whether a token is issued to the guest user or not. Even if the guest can redeem an invitation, if the policy blocks access to all applications, the user will not have access to anything.
+> Cross Tenant Access Settings Inbound Access does not prevent invitations from being sent or redeemed. However, it does control application access and whether a token is issued to the guest user. Even if a guest can redeem an invitation, a policy that blocks access to all applications prevents the user from accessing anything.
-If you wish to control what external organizations your users can access, you can configure outbound access policies following the same pattern as inbound access ΓÇô allow/blocklist. Configure the default and organization-specific policies as desired. [Learn more about configuring inbound and outbound access policies](../external-identities/cross-tenant-access-settings-b2b-collaboration.md).
+To control which external organizations your users can access, configure outbound access policies the same way as inbound access: allowlist and blocklist. Configure default and organization-specific policies.
-> [!NOTE]
-> Cross Tenant Access Settings only applies to Azure AD tenants. If you need to control access to partners who do not use Azure AD, you must use External Collaboration Settings.
+Learn more: [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md)
-### Using Entitlement Management and Connected Organizations
+> [!NOTE]
+> Cross Tenant Access Settings apply to Azure AD tenants. To control access for partners not using Azure AD, use External Collaboration Settings.
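
The following Microsoft Graph PowerShell sketch shows two of the patterns above: trusting MFA and compliant-device claims in the default inbound settings, and adding a partner-specific configuration for one external tenant. It assumes the `Policy.ReadWrite.CrossTenantAccess` permission; the partner tenant ID is a placeholder, and the exact parameter shapes should be checked against the cross-tenant access settings reference for your SDK version.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

# Default inbound trust: accept MFA and compliant-device claims from external Azure AD tenants
Update-MgPolicyCrossTenantAccessPolicyDefault -InboundTrust @{
    isMfaAccepted             = $true
    isCompliantDeviceAccepted = $true
}

# Partner-specific inbound settings for one external tenant (tenant ID is a placeholder)
New-MgPolicyCrossTenantAccessPolicyPartner -TenantId "00000000-0000-0000-0000-000000000000" `
    -B2BCollaborationInbound @{
        usersAndGroups = @{ accessType = "allowed"; targets = @(@{ target = "AllUsers"; targetType = "user" }) }
        applications   = @{ accessType = "allowed"; targets = @(@{ target = "AllApplications"; targetType = "application" }) }
    }
```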
-If you want to use Entitlement Management to ensure guest lifecycle is governed automatically, you can create Access Packages and publish them to any external user or only to Connected Organizations. Connected Organizations support Azure AD tenants and any other domain. When you create an Access Package you can restrict access only to specific Connected Organizations. This is covered in greater detail in the next section. [Learn more about Entitlement Management](../governance/entitlement-management-overview.md).
+### Entitlement Management and Connected Organizations
-## Control how external users gain access
+Use Entitlement Management to ensure automatic guest-lifecycle governance. Create Access Packages and publish them to external users or to Connected Organizations, which support Azure AD tenants and other domains. When you create an Access Package, you can restrict access to specific Connected Organizations.
-There are many ways to collaborate with external partners using Azure AD B2B. To begin collaboration, you invite or otherwise enable your partner to access your resources. Users can gain access by responding to :
+Learn more: [What is entitlement management?](../governance/entitlement-management-overview.md)
-* Redeeming [an invitation sent via an email](../external-identities/redemption-experience.md), or [a direct link to share](../external-identities/redemption-experience.md) a resource. Users can gain access by:
+## Control external user access
-* Requesting access [through an application](../external-identities/self-service-sign-up-overview.md) you create
+To begin collaboration, invite or enable a partner to access resources. Users gain access by:
-* Requesting access through the [My Access](../governance/entitlement-management-request-access.md) portal
+* [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md)
+* [Self-service sign-up](../external-identities/self-service-sign-up-overview.md)
+* [Requesting access to an access package in entitlement management](../governance/entitlement-management-request-access.md)
-When you enable Azure AD B2B, you enable the ability to invite guest users via direct links and email invitations by default. Self Service sign-up and publishing Access Packages to the My Access portal require additional configuration.
+When you enable Azure AD B2B, you can invite guest users with links and email invitations. Self-service sign-up and publishing Access Packages to the My Access portal require more configuration.
-> [NOTE]
-> Self Service sign-up does not enforce the allow/blocklist in External Collaboration Settings. Cross Tenant Access Settings will apply. You can also integrate your own allow/blocklist with Self Service sign-up using [custom API connectors](../external-identities/self-service-sign-up-add-api-connector.md).
+> [!NOTE]
+> Self-service sign-up doesn't enforce the allowlist or blocklist in External Collaboration Settings; Cross Tenant Access Settings apply instead. You can integrate your own allowlists and blocklists with self-service sign-up by using custom API connectors. See, [Add an API connector to a user flow](../external-identities/self-service-sign-up-add-api-connector.md).
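
To illustrate the custom allowlist approach the note describes, here's a minimal sketch of an API connector endpoint in Python (Flask). The endpoint path, port, and partner domains are hypothetical, and the request and response fields follow the connector contract described in the linked article; treat it as a starting point rather than a complete implementation.

```python
# Minimal sketch of a self-service sign-up API connector enforcing a custom
# allowlist. Assumes the connector is configured to call this endpoint during
# sign-up; domains and the route are illustrative placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

ALLOWED_DOMAINS = {"fabrikam.com", "adventure-works.com"}  # hypothetical partners

@app.route("/api/signup-check", methods=["POST"])
def signup_check():
    payload = request.get_json(force=True)
    # The connector request includes the email collected during sign-up.
    email = (payload.get("email") or "").lower()
    domain = email.split("@")[-1] if "@" in email else ""

    if domain in ALLOWED_DOMAINS:
        # Allow the sign-up to continue unchanged.
        return jsonify({"version": "1.0.0", "action": "Continue"})

    # Otherwise block the sign-up and show a friendly message.
    return jsonify({
        "version": "1.0.0",
        "action": "ShowBlockPage",
        "userMessage": "Your organization isn't yet approved for sign-up. Contact support."
    })

if __name__ == "__main__":
    app.run(port=5000)
```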
-### Control who can invite guest users
+### Guest user invitations
Determine who can invite guest users to access resources.
-* The most restrictive setting is to allow only administrators and those users granted the [guest inviter role](../external-identities/external-collaboration-settings-configure.md) to invite guests.
-
-* If your security requirements allow it, we recommend allowing all users with a userType of Member to invite guests.
-
-* Determine if you want users with a userType of Guest, which is the default account type for Azure AD B2B users, to be able to invite other guests.
-
-![Screenshot of guest invitation settings.](media/secure-external-access/5-guest-invite-settings.png)
+* Most restrictive: Allow only administrators and users with the Guest Inviter role
+ * See, [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md)
+* If security requirements permit, allow all users with a UserType of Member to invite guests
+* Determine whether users with a UserType of Guest, the default Azure AD B2B account type, can invite other guests
-### Collect additional information about external users
-
-If you use Azure AD entitlement management, you can configure questions for external users to answer. The questions will then be shown to approvers to help them make a decision. You can configure different sets of questions for each [access package policy](../governance/entitlement-management-access-package-approval-policy.md) so that approvers can have relevant information for the access they're approving. For example, if one access package is intended for vendor access, then the requestor may be asked for their vendor contract number. A different access package intended for suppliers, may ask for their country of origin.
-
-If you use a self-service portal, you can use [API connectors](../external-identities/api-connectors-overview.md) to collect additional attributes about users as they sign up. You can then potentially use those attributes to assign access. For example, if during the sign-up process you collect their supplier ID, you could use that attribute to dynamically assign them to a group or access package for that supplier. You can create custom attributes in the Azure portal and use them in your self-service sign-up user flows. You can also read and write these attributes by using the [Microsoft Graph API](../../active-directory-b2c/microsoft-graph-operations.md).
-
-### Troubleshoot invitation redemption to Azure AD users
+ ![Screenshot of guest invitation settings.](media/secure-external-access/5-guest-invite-settings.png)
-There are three instances when invited guest users from a collaboration partner using Azure AD will have trouble redeeming an invitation.
+### External users information
-* If using an allowlist and the user's domain isn't included in an allowlist.
+Use Azure AD entitlement management to configure questions that external users answer. The questions appear to approvers to help them make a decision. You can configure sets of questions for each access package policy, so approvers have relevant information for access they approve. For example, ask vendors for their vendor contract number.
-* If the collaboration partner's home tenant has tenant restrictions that prevent collaboration with external users.
+Learn more: [Change approval and requestor information settings for an access package in entitlement management](../governance/entitlement-management-access-package-approval-policy.md)
-* If the user isn't part of the partner's Azure AD tenant. For example, there are users at contoso.com who are only in Active Directory (or another on-premises IdP), they'll only be able to redeem invitations via the email OTP process. For more information, see the [invitation redemption flow](../external-identities/redemption-experience.md).
+If you use a self-service portal, use API connectors to collect user attributes during sign-up. Use the attributes to assign access. You can create custom attributes in the Azure portal and use them in your self-service sign-up user flows. Read and write these attributes by using the Microsoft Graph API.
-## Control what external users can access
+Learn more:
-Most organizations aren't monolithic. That is, there are some resources that are fine to share with external users, and some you will not want external users to access. Therefore, you must control what external users access. Consider using [Entitlement management and access packages to control access](6-secure-access-entitlement-managment.md) to specific resources.
+* [Use API connectors to customize and extend self-service sign-up](../external-identities/api-connectors-overview.md)
+* [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
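
As noted above, custom user attributes can be read and written with Microsoft Graph. The following is a minimal Python sketch using the `requests` library; the extension-app ID, user object ID, attribute name, and token handling are placeholders, and the fully qualified attribute name assumes the attribute was created through an extensions app registered in your tenant.

```python
# Minimal sketch: read and update a custom (extension) attribute on a user
# with Microsoft Graph. Assumes an access token with User.ReadWrite.All;
# all IDs and values below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"                          # acquired via MSAL, for example
EXT_APP_ID = "00000000000000000000000000000000"   # extensions app ID, no dashes
ATTR = f"extension_{EXT_APP_ID}_SupplierId"       # fully qualified attribute name
USER_ID = "<user-object-id>"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the attribute.
resp = requests.get(
    f"{GRAPH}/users/{USER_ID}",
    params={"$select": f"displayName,{ATTR}"},
    headers=headers,
)
resp.raise_for_status()
print(resp.json().get(ATTR))

# Write (update) the attribute.
resp = requests.patch(f"{GRAPH}/users/{USER_ID}", json={ATTR: "SUP-12345"}, headers=headers)
resp.raise_for_status()
```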
-By default, guest users can see information and attributes about tenant members and other partners, including group memberships. Consider if your security requirements call for limiting external user access to this information.
-
-![Screenshot of configuring external collaboration settings.](media/secure-external-access/5-external-collaboration-settings.png)
-
-We recommend the following restrictions for guest users.
-
-* **Limit guest access to browsing groups and other properties in the directory**
-
- * Use the external collaboration settings to restrict guest ability to read groups they aren't members of.
-
-* **Block access to employee-only apps**.
-
- * Create a Conditional Access policy to block access to Azure AD-integrated applications that are only appropriate for non-guest users.
+### Troubleshoot invitation redemption to Azure AD users
-* **Block access to the Azure portal. You can make rare necessary exceptions**.
+Invited guest users from a collaboration partner can have trouble redeeming an invitation when:
- * Create a Conditional Access policy that includes either All guest and external users and then [implement a policy to block access](../conditional-access/concept-conditional-access-cloud-apps.md).
+* User domain isn't on an allowlist
+* The partner's home tenant restrictions prevent external collaboration
+* The user isn't in the partner's Azure AD tenant. For example, users at contoso.com who are only in on-premises Active Directory (or another on-premises IdP).
+ * They can redeem invitations only with the email one-time passcode (OTP).
+ * See, [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md)
-
+## External users access
-## Remove users who no longer need access
+Generally, there are resources you can share with external users, and some you can't. You can control what external users access. See, [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md).
-Evaluate current access so that you can [review and remove users who no longer need access](../governance/access-reviews-external-users.md). Include external users in your tenant as guests, and those with member accounts.
+By default, guest users see information and attributes about tenant members and other partners, including group memberships. Consider limiting external user access to this information.
-Some organizations added external users such as vendors, partners, and contractors as members. These members may have a specific attribute, or usernames that begin with, for example
+ ![Screenshot of Guest user access options on External collaboration settings.](media/secure-external-access/5-external-collaboration-settings.png)
-* v- for vendors
+We recommend the following guest-user restrictions.
-* p- for partners
+* Limit guest access to browsing groups and other properties in the directory
+ * Use the external collaboration settings to restrict guests from reading groups they aren't members of
+* Block access to employee-only apps
+ * Create a Conditional Access policy that blocks guest access to Azure AD-integrated applications intended only for non-guest users
+* Block access to the Azure portal
+ * You can make needed exceptions
+ * Create a Conditional Access policy that includes all guest and external users, then implement a policy to block access (see the sketch after the following link)
-* c- for contractors
+Learn more: [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md)
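
A Conditional Access policy like the ones described above can also be created programmatically. The sketch below (Python, Microsoft Graph) creates a report-only policy that blocks guests and external users from Microsoft Azure Management; the application ID shown is the one commonly used to target the Azure portal and related endpoints, but verify the ID, permissions, and settings in your own tenant before enabling enforcement.

```python
# Minimal sketch: create a report-only Conditional Access policy that blocks
# guests and external users from Microsoft Azure Management. Assumes an access
# token with Policy.ReadWrite.ConditionalAccess; verify the app ID in your tenant.
import requests

TOKEN = "<access-token>"
policy = {
    "displayName": "Block guests from Azure management (report-only)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
        "applications": {"includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```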
-Evaluate any external users with member accounts to determine if they still need access. If so, transition these users to Azure AD B2B as described in the next section.
+## Remove users who don't need access
-You may also have guest users who weren't invited through Entitlement Management or Azure AD B2B
+Establish a process to review and remove users who don't need access. Include external users in your tenant as guests, and users with member accounts.
-To find these users, you can:
+Learn more: [Use Azure AD Identity Governance to review and remove external users who no longer have resource access](../governance/access-reviews-external-users.md)
-* [Find guest users not invited through Entitlement Management](../governance/access-reviews-external-users.md).
+Some organizations add external users as members (vendors, partners, and contractors). These members might have a specific attribute, or a username prefix, for example:
- * We provide a [SAMPLE PowerShell script.](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse)
+* Vendors: **v-**
+* Partners: **p-**
+* Contractors: **c-**
-Transition these users to Azure AD B2B users as described in the following section.
+Evaluate external users with member accounts to determine whether they still need access. You might also have guest users who weren't invited through Entitlement Management or Azure AD B2B.
-## Transition your current external users to B2B
+To find these users:
-If you haven't been using Azure AD B2B, you likely have non-employee users in your tenant. We recommend you transition these accounts to Azure AD B2B external user accounts and then change their UserType to Guest. This enables you to take advantage of the many ways Azure AD and Microsoft 365 allow you to treat external users differently. Some of these ways include:
+* [Use Azure AD Identity Governance to review and remove external users who no longer have resource access](../governance/access-reviews-external-users.md)
+* Use a sample PowerShell script on [access-reviews-samples/ExternalIdentityUse/](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse)
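
If you prefer to query directly rather than run the sample script, a minimal Microsoft Graph call can list guest accounts as a starting point for review. This Python sketch assumes you already have an access token with permission to read users; adjust the selected properties to your needs.

```python
# Minimal sketch: list guest accounts with Microsoft Graph as a starting point
# for an access review. Assumes an access token with User.Read.All.
import requests

TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {TOKEN}"}
url = "https://graph.microsoft.com/v1.0/users"
params = {
    "$filter": "userType eq 'Guest'",
    "$select": "displayName,mail,userPrincipalName,externalUserState",
}

while url:
    resp = requests.get(url, params=params, headers=headers)
    resp.raise_for_status()
    data = resp.json()
    for user in data.get("value", []):
        print(user.get("userPrincipalName"), "-", user.get("externalUserState"))
    url = data.get("@odata.nextLink")  # follow paging until exhausted
    params = None                      # nextLink already encodes the query
```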
-* Easily including or excluding guest users in Conditional Access policies
+## Transition current external users to B2B
-* Easily including or excluding guest users in Access Packages and Access Reviews
+If you don't use Azure AD B2B, you likely have non-employee users in your tenant. We recommend you transition these accounts to Azure AD B2B external user accounts and then change their UserType to Guest. This change lets you take advantage of the ways Azure AD and Microsoft 365 treat external users differently.
-* Easily including or excluding external access to Teams, SharePoint, and other resources.
+Include or exclude:
-To transition these internal users while maintaining their current access, UPN, and group memberships, see [Invite external users to B2B collaboration](../external-identities/invite-internal-users.md).
+* Guest users in Conditional Access policies
+* Guest users in Access Packages and Access Reviews
+* External access to Teams, SharePoint, and other resources
-## Decommission undesired collaboration methods
+You can transition these internal users while maintaining current access, UPN, and group memberships. See [Invite external users to B2B collaboration](../external-identities/invite-internal-users.md).
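
The transition itself is driven by the Microsoft Graph invitation API. The following Python sketch shows the general shape of such a request; the email address, redirect URL, and user object ID are placeholders, and the exact payload for inviting an existing internal user is described in the linked article, so confirm the details there.

```python
# Minimal sketch: invite an existing internal user to B2B collaboration with
# the Microsoft Graph invitation API, keeping the account while switching
# sign-in to external credentials. Values are placeholders; the token needs
# User.Invite.All. Confirm the exact payload in the linked article.
import requests

TOKEN = "<access-token>"
invitation = {
    "invitedUserEmailAddress": "vendor@fabrikam.com",              # external email
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "sendInvitationMessage": True,
    "invitedUser": {"id": "<existing-internal-user-object-id>"},   # existing account
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=invitation,
)
resp.raise_for_status()
print("Invitation status:", resp.json()["status"])
```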
-To complete your transition to governed collaboration, you should decommission undesired collaboration methods. Which you decommission is based on the degree of control you wish IT to exert over collaboration, and your security posture. For information about IT versus end-user control, see [Determine your security posture for external access](1-secure-access-posture.md).
+## Decommission collaboration methods
-The following are collaboration vehicles you may wish to evaluate.
+To complete the transition to governed collaboration, decommission unwanted collaboration methods. What you decommission depends on the degree of control you want IT to exert over collaboration, and on your security posture. See, [Determine your security posture for external access](1-secure-access-posture.md).
-### Direct invitation through Microsoft Teams
+### Microsoft Teams invitation
-By default Teams allows external access, which means that organization can communicate with all external domains. If you want to restrict or allow specific domains just for Teams, you can do so in the [Teams Admin portal](https://admin.teams.microsoft.com/company-wide-settings/external-communications).
+By default, Teams allows external access. The organization can communicate with external domains. To restrict or allow domains for Teams, use the [Teams admin center](https://admin.teams.microsoft.com/company-wide-settings/external-communications).
+### Sharing through SharePoint and OneDrive
-### Direct sharing through SharePoint and OneDrive
+Sharing through SharePoint and OneDrive adds users not in the Entitlement Management process.
-Direct sharing through SharePoint and OneDrive can add users outside of the Entitlement Management process. For an in-depth look at these configurations see [Manage Access with Microsoft Teams, SharePoint, and OneDrive for business](9-secure-access-teams-sharepoint.md)
-You can also [block the use of user's personal OneDrive](/office365/troubleshoot/group-policy/block-onedrive-use-from-office) if desired.
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office.md)
-### Sending documents through email
+### Documents in email
-Your users will send documents through email to external users. Consider how you want to restrict and encrypt access to these documents by using sensitivity labels. For more information, see Manage access with Sensitivity labels.
+Users send documents to external users by email. You can use sensitivity labels to restrict and encrypt access to documents. See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true).
### Unsanctioned collaboration tools
-The landscape of collaboration tools is vast. Your users likely use many outside of their official duties, including platforms like Google Docs, DropBox, Slack, or Zoom. It's possible to block the use of such tools from a corporate network at the Firewall level and with mobile application management for organization-managed devices. However, this will also block any sanctioned instances of these platforms and wouldn't block access from unmanaged devices. Block platforms you donΓÇÖt want any use of if necessary, and create business policies for no unsanctioned usage for the platforms you need to use.
+Your users likely use Google Docs, DropBox, Slack, or Zoom. You can block use of these tools from a corporate network, at the firewall level, and with mobile application management for organization-managed devices. However, this action blocks sanctioned instances and doesn't block access from unmanaged devices. Block tools you don't want used at all, and create business policies that prohibit unsanctioned usage of the tools you need.
-For more information on managing unsanctioned applications, see:
+For more information on governing applications, see:
-* [Governing connected apps](/cloud-app-security/governance-actions)
+* [Governing connected apps](/defender-cloud-apps/governance-actions)
+* [Govern discovered apps](/defender-cloud-apps/governance-discovery)
-* [Sanctioning and unsanctioning an application.](/cloud-app-security/governance-discovery)
-
-
### Next steps
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
-
-1. [Determine your security posture for external access](1-secure-access-posture.md)
-
-2. [Discover your current state](2-secure-access-current-state.md)
-
-3. [Create a governance plan](3-secure-access-plan.md)
-
-4. [Use groups for security](4-secure-access-groups.md)
-
-5. [Transition to Azure AD B2B](5-secure-access-b2b.md) (You are here.)
-
-6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-
-7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
-
-8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+* [Create a security plan for external access](3-secure-access-plan.md)
+* [Securing external access with groups](4-secure-access-groups.md)
+* [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md)
+* [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-eu.md
Administrators can choose to enable or disable certain Azure AD features. If the
* **Azure Active Directory Multi Tenant Collaboration** - With multi tenant collaboration scenarios enabled, customers can configure their tenant to collaborate with users from a different tenant. For example, a customer can invite users to their tenant in a B2B context. A customer can create a multi-tenant SaaS application that allows other third party tenants to provision the application in the third party tenant. Or, the customer can make two or more tenants affiliated with one another and act as a single tenant in certain scenarios, such as multi-tenant organization (MTO) formation, tenant to tenant sync, and shared e-mail domain sharing. Customer configuration and use of multi tenant collaboration may occur with tenants outside of the EU Data Residency and EU Data Boundary resulting in some customer data, such as user and device account data, usage data, and service configuration (application, policy, and group) stored and processed in the location of the collaborating tenant. * **Application Proxy** - Allows customers to access their on-premises web applications externally. Customers may choose advanced routing configurations that allow customer data to egress outside of the EU Data Residency and EU Data Boundary, including user account data, usage data, and application configuration data.
-* **Microsoft 365 Multi Geo** - Microsoft 365 Multi-Geo provides customers with the ability to expand their Microsoft 365 presence to multiple geographic regions/countries within a single existing Microsoft 365 tenant. Azure Active Directory will egress customer data to perform backup authentication to the locations configured by the customer. Types of customer data include user and device account data, branding data, and service configuration data (application, policy, and group).
+* **Microsoft 365 Multi Geo** - Microsoft 365 Multi-Geo provides customers with the ability to expand their Microsoft 365 presence to multiple geographic countries/regions within a single existing Microsoft 365 tenant. Azure Active Directory will egress customer data to perform backup authentication to the locations configured by the customer. Types of customer data include user and device account data, branding data, and service configuration data (application, policy, and group).
### Other EU Data Boundary online services
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Title: Deployment plans - Azure Active Directory | Microsoft Docs
-description: Guidance about how to deploy many Azure Active Directory capabilities.
+ Title: Azure Active Directory deployment plans
+description: Guidance on Azure Active Directory deployment, such as authentication, devices, hybrid scenarios, governance, and more.
- Previously updated : 09/13/2022 Last updated : 01/06/2023 # Azure Active Directory deployment plans
-Looking for complete guidance on deploying Azure Active Directory (Azure AD) capabilities? Azure AD deployment plans walk you through the business value, planning considerations, and operational procedures needed to successfully deploy common Azure AD capabilities.
-
-From any of the plan pages, use your browser's Print to PDF capability to create an up-to-date offline version of the documentation.
--
-## Deploy authentication
-
-| Capability | Description|
-| -| -|
-| [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)| Azure AD Multi-Factor Authentication (MFA) is Microsoft's two-step verification solution. Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process. Watch this video on [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)|
-| [Conditional Access](../conditional-access/plan-conditional-access.md)| With Conditional Access, you can implement automated access control decisions for who can access your cloud apps, based on conditions. |
-| [Self-service password reset](../authentication/howto-sspr-deployment.md)| Self-service password reset helps your users reset their passwords without administrator intervention, when and where they need to. |
-| [Passwordless](../authentication/howto-authentication-passwordless-deployment.md) | Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 Security keys in your organization |
-
-## Deploy application and device management
-
-| Capability | Description|
-| -| - |
-| [Single sign-on](../manage-apps/plan-sso-deployment.md)| Single sign-on helps your users' access the apps and resources they need to do business while signing in only once. After they've signed in, they can go from Microsoft Office to SalesForce to Box to internal applications without being required to enter credentials a second time. |
-| [My Apps](../manage-apps/my-apps-deployment-plan.md)| Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others. |
-| [Devices](../devices/plan-device-deployment.md) | This article helps you evaluate the methods to integrate your device with Azure AD, choose the implementation plan, and provides key links to supported device management tools. |
--
-## Deploy hybrid scenarios
-| Capability | Description|
-| -| -|
-| [AD FS to cloud user authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)| Learn to migrate your user authentication from federation to cloud authentication with either pass through authentication or password hash sync.
-| [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |Employees today want to be productive at any place, at any time, and from any device. They need to access SaaS apps in the cloud and corporate apps on-premises. Azure AD Application proxy enables this robust access without costly and complex virtual private networks (VPNs) or demilitarized zones (DMZs). |
-| [Seamless SSO](../hybrid/how-to-connect-sso-quick-start.md)| Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. With this feature, users won't need to type in their passwords to sign in to Azure AD and usually won't need to enter their usernames. This feature provides authorized users with easy access to your cloud-based applications without needing any extra on-premises components. |
+Use the following guidance to help deploy Azure Active Directory (Azure AD). Learn about business value, planning considerations, and operational procedures. You can use a browser Print to PDF function to create offline documentation.
-## Deploy user provisioning
+## Your stakeholders
-| Capability | Description|
-| -| -|
-| [User provisioning](../app-provisioning/plan-auto-user-provisioning.md)| Azure AD helps you automate the creation, maintenance, and removal of user identities in cloud (SaaS) applications, such as Dropbox, Salesforce, ServiceNow, and more. |
-| [Cloud HR user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| Cloud HR user provisioning to Active Directory creates a foundation for ongoing identity governance and enhances the quality of business processes that rely on authoritative identity data. Using this feature with your cloud HR product, such as Workday or Successfactors, you can seamlessly manage the identity lifecycle of employees and contingent workers by configuring rules that map Joiner-Mover-Leaver processes (such as New Hire, Terminate, Transfer) to IT provisioning actions (such as Create, Enable, Disable) |
-| [Azure AD B2B collaboration](../fundamentals/secure-external-access-resources.md)| Azure AD enables you to collaborate with any external user, allowing them to securely gain access to SaaS and Line-of-Business (LoB) applications. |
+When beginning your deployment plans, include your key stakeholders. Identify and document stakeholders, roles, and responsibilities. Titles and roles differ from one organization to another, but the ownership areas are similar.
-## Deploy governance and reporting
-
-| Capability | Description|
-| -| -|
-| [Privileged Identity Management](../privileged-identity-management/pim-deployment-plan.md)| Azure AD Privileged Identity Management (PIM) helps you manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. PIM provides solutions like just-in-time access, request approval workflows, and fully integrated access reviews so you can identify, uncover, and prevent malicious activities of privileged roles in real time. |
-| [Reporting and Monitoring](../reports-monitoring/plan-monitoring-and-reporting.md)| The design of your Azure AD reporting and monitoring solution depends on your legal, security, and operational requirements as well as your existing environment and processes. This article presents the various design options and guides you to the right deployment strategy. |
-| [Access Reviews](../governance/deploy-access-reviews.md) | Access Reviews are an important part of your governance strategy, enabling you to know and manage who has access, and to what they have access. This article helps you plan and deploy access reviews to achieve your desired security and collaboration postures. |
-| [Identity governance for applications](../governance/identity-governance-applications-prepare.md) | As part of your organization's controls to meet your compliance and risk management objectives for managing access for critical applications, you can use Azure AD features to set up and enforce appropriate access.|
-
-## Include the right stakeholders
-
-When beginning your deployment planning for a new capability, it's important to include key stakeholders across your organization. We recommend that you identify and document the person or people who fulfill each of the following roles, and work with them to determine their involvement in the project.
-
-Roles might include the following
-
-|Role |Description |
+|Role |Responsibility |
|-|-|
-|End-user|A representative group of users for which the capability will be implemented. Often previews the changes in a pilot program.
-|IT Support Manager|IT support organization representative who can provide input on the supportability of this change from a helpdesk perspective.
-|Identity Architect or Azure Global Administrator|Identity management team representative in charge of defining how this change is aligned with the core identity management infrastructure in your organization.|
-|Application Business Owner |The overall business owner of the affected application(s), which may include managing access.  May also provide input on the user experience and usefulness of this change from an end user's perspective.
-|Security Owner|A representative from the security team that can sign out that the plan will meet the security requirements of your organization.|
-|Compliance Manager|The person within your organization responsible for ensuring compliance with corporate, industry, or governmental requirements.|
-
-**Levels of involvement might include:**
+|Sponsor|An enterprise senior leader with authority to approve and/or assign budget and resources. The sponsor is the connection between managers and the executive team.|
+|End user|The people for whom the service is implemented. Users can participate in a pilot program.|
+|IT Support Manager|Provides input on the supportability of proposed changes|
+|Identity architect or Azure Global Administrator|Defines how the change aligns with identity management infrastructure|
+|Application business owner |Owns the affected application(s), which might include access management. Provides input on the user experience.
+|Security owner|Confirms the change plan meets security requirements|
+|Compliance manager|Ensures compliance with corporate, industry, or governmental requirements|
+
+### RACI
+
+RACI is an acronym derived from four key responsibilities:
+
+* **Responsible**
+* **Accountable**
+* **Consulted**
+* **Informed**
+
+Use these terms to clarify and define roles and responsibilities in your project, and for other cross-functional or departmental projects and processes.
+
+## Authentication
+
+Use the following list to plan for authentication deployment.
+
+* **Azure AD multi-factor authentication (MFA)** - Using admin-approved authentication methods, Azure AD MFA helps safeguard access to your data and applications while meeting the demand for a simple sign-in process:
+ * See the video, [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)
+ * See, [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md)
+* **Conditional Access** - Implement automated access-control decisions for users to access cloud apps, based on conditions:
+ * See, [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+ * See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention:
+ * See, [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md)
+ * See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md)
+* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 security keys:
+ * See, [Enable passwordless sign-in with Microsoft Authenticator](/azure/active-directory/authentication/howto-authentication-passwordless-phone)
+ * See, [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md)
+
+## Applications and devices
+
+Use the following list to help deploy applications and devices.
+
+* **Single sign-on (SSO)** - Enable user access to apps and resources while signing in once, without being required to enter credentials again:
+ * See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md)
+ * See, [Plan a SSO deployment](../manage-apps/plan-sso-deployment.md)
+* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance requesting access to groups, or managing access to resources on behalf of others.
+ * See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview)
+* **Devices** - Evaluate device integration methods with Azure AD, choose the implementation plan, and more.
+ * See, [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
+
+## Hybrid scenarios
+
+The following list describes features and services for productivity gains in hybrid scenarios.
+
+* **Active Directory Federation Services (AD FS)** - Migrate user authentication from federation to cloud with pass-through authentication or password hash sync:
+ * See, [What is federation with Azure AD?](../hybrid/whatis-fed.md)
+ * See, [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)
+* **Azure AD Application Proxy** - Enable employees to be productive at any place or time, and from any device. Users need access to software as a service (SaaS) apps in the cloud and to corporate apps on-premises. Azure AD Application Proxy enables this access without virtual private networks (VPNs) or demilitarized zones (DMZs):
+ * See, [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
+ * See, [Plan an Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md)
+* **Seamless single sign-on (Seamless SSO)** - Use Seamless SSO for user sign-in, on corporate devices connected to a corporate network. Users don't need to enter passwords to sign in to Azure AD, and usually don't need to enter usernames. Authorized users access cloud-based apps without extra on-premises components:
+ * See, [Azure Active Directory SSO: Quickstart](../hybrid/how-to-connect-sso-quick-start.md)
+ * See, [Azure Active Directory Seamless SSO: Technical deep dive](../hybrid/how-to-connect-sso-how-it-works.md)
+
+## Users
+
+* **User identities** - Learn about automation to create, maintain, and remove user identities in cloud apps, such as Dropbox, Salesforce, ServiceNow, and more.
+ * See, [Plan an automatic user provisioning deployment in Azure Active Directory](../app-provisioning/plan-auto-user-provisioning.md)
+* **Identity governance** - Create identity governance and enhance business processes that rely on identity data. With HR products, such as Workday or Successfactors, manage employee and contingent-staff identity lifecycle with rules. These rules map Joiner-Mover-Leaver processes, such as New Hire, Terminate, Transfer, to IT actions such as Create, Enable, Disable.
+ * See, [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md)
+* **Azure AD B2B collaboration** - Improve external-user collaboration with secure access to applications:
+ * See, [B2B collaboration overview](/azure/active-directory/external-identities/what-is-b2b)
+ * See, [Plan an Azure Active Directory B2B collaboration deployment](../fundamentals/secure-external-access-resources.md)
+
+## Governance and reporting
+
+Use the following list to learn about governance and reporting. Items in the list refer to Microsoft Entra.
+
+Learn more: [Secure access for a connected world—meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039)
+
+* **Privileged identity management (PIM)** - Manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. Use it for just-in-time access, request approval workflows, and fully integrated access reviews to help prevent malicious activities:
+ * See, [Start using Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-getting-started)
+ * See, [Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)
+* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes.
+ * See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md)
+* **Access reviews** - Understand and manage access to resources:
+ * See, [What are access reviews?](../governance/access-reviews-overview.md)
+ * See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md)
+* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
+ * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
+
+Learn more: [Azure governance documentation](/azure/governance/)
-- **R**esponsible for implementing project plan and outcome
+## Best practices for a pilot
-- **A**pproval of project plan and outcome
+Use pilots to test with a small group, before making a change for larger groups, or everyone. Ensure each use case in your organization is tested.
-- **C**ontributor to project plan and outcome
+### Pilot: Phase 1
-- **I**nformed of project plan and outcome
+In your first phase, target IT, usability, and other users who can test and provide feedback. Use this feedback to gain insights on potential issues for support staff, and to develop communications and instructions you send to all users.
-## Best practices for a pilot
-A pilot allows you to test with a small group before turning on a capability for everyone. Ensure that as part of your testing, each use case within your organization is thoroughly tested. It's best to target a specific group of pilot users before rolling this deployment out to your organization as a whole.
+### Pilot: Phase 2
-In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. Use this feedback to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see.
+Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s).
-Widening the rollout to larger groups of users should be carried out by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
+Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
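
If you script the rollout, a pilot group with a dynamic membership rule can be created through Microsoft Graph. The sketch below is a minimal Python example; the group name and the department-based rule are placeholders, so substitute whatever attribute identifies your pilot users.

```python
# Minimal sketch: create a security group with a dynamic membership rule to
# widen a pilot, using Microsoft Graph. Assumes an access token with
# Group.ReadWrite.All; the rule and names below are placeholders.
import requests

TOKEN = "<access-token>"
group = {
    "displayName": "Pilot - Phase 2",
    "mailEnabled": False,
    "mailNickname": "pilot-phase-2",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": '(user.department -eq "Sales")',   # adjust to your pilot criteria
    "membershipRuleProcessingState": "On",
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=group,
)
resp.raise_for_status()
print("Created group:", resp.json()["id"])
```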
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
Title: Azure AD B2C Deployment
-description: Azure Active Directory B2C Deployment guide
-
+ Title: Azure Active Directory B2C deployment plans
+description: Azure Active Directory B2C deployment guide for planning, implementation, and monitoring
Previously updated : 09/13/2022- Last updated : 1/5/2023 - # Azure Active Directory B2C deployment plans
-Azure Active Directory B2C is a scalable identity and access management solution. Its high flexibility to meet your business expectations and smooth integration with existing infrastructure enables further digitalization.
-
-To help organizations understand the business requirements and respect compliance boundaries, a step-by-step approach is recommended throughout an Azure Active Directory (Azure AD) B2C deployment.
-
-| Capability | Description |
-|:--|:|
-| [Plan](#plan-an-azure-ad-b2c-deployment) | Prepare Azure AD B2C projects for deployment. Start by identifying the stakeholders and later defining a project timeline. |
-| [Implement](#implement-an-azure-ad-b2c-deployment) | Start with enabling authentication and authorization and later perform full application onboarding. |
-| [Monitor](#monitor-an-azure-ad-b2c-solution) | Enable logging, auditing, and reporting once an Azure AD B2C solution is in place. |
+Azure Active Directory B2C (Azure AD B2C) is an identity and access management solution that can ease integration with your infrastructure. Use the following guidance to help understand requirements and compliance throughout an Azure AD B2C deployment.
## Plan an Azure AD B2C deployment
-This phase includes the following capabilities:
-
-| Capability | Description |
-|:|:|
-|[Business requirements review](#business-requirements-review) | Assess your organizationΓÇÖs status and expectations |
-| [Stakeholders](#stakeholders) |Build your project team |
-|[Communication](#communication) | Communicate with your team about the project |
-| [Timeline](#timeline) | Reminder of key project milestones |
-
-### Business requirements review
--- Assess the primary reason to switch off existing systems and [move to Azure AD B2C](../../active-directory-b2c/overview.md).--- For a new application, [plan and design](../../active-directory-b2c/best-practices.md#planning-and-design) the Customer Identity Access Management (CIAM) system--- Identify customer's location and [create a tenant in the corresponding datacenter](../../active-directory-b2c/tutorial-create-tenant.md).--- Check the type of applications you have
- - Check the platforms that are currently supported - [MSAL](../develop/msal-overview.md) or [Open source](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
- - For backend services, use the [client credentials flow](../develop/msal-authentication-flows.md#client-credentials).
--- If you intend to migrate from an existing Identity Provider (IdP)-
- - Consider using the [seamless migration approach](../../active-directory-b2c/user-migration.md#seamless-migration)
- - Learn [how to migrate the existing applications](https://github.com/azure-ad-b2c/user-migration)
- - Ensure the coexistence of multiple solutions at once.
--- Decide the protocols you want to use-
- - If you're currently using Kerberos, NTLM, and WS-Fed, [migrate and refactor your applications](https://www.bing.com/videos/search?q=application+migration+in+azure+ad+b2c&docid=608034225244808069&mid=E21B87D02347A8260128E21B87D02347A8260128&view=detail&FORM=VIRE). Once migrated, your applications can support modern identity protocols such as OAuth 2.0 and OpenID Connect (OIDC) to enable further identity protection and security.
+### Requirements
+
+- Assess the primary reason to turn off systems
+ - See, [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
+- For a new application, plan the design of the Customer Identity Access Management (CIAM) system
+ - See, [Planning and design](../../active-directory-b2c/best-practices.md#planning-and-design)
+- Identify customer locations and create a tenant in the corresponding datacenter
+ - See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md)
+- Confirm your application types and supported technologies:
+ - [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
+ - [Develop with open source languages, frameworks, databases, and tools in Azure](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
+ - For back-end services, use the [client credentials](../develop/msal-authentication-flows.md#client-credentials) flow
+- To migrate from an identity provider (IdP):
+ - [Seamless migration](../../active-directory-b2c/user-migration.md#seamless-migration)
+ - Go to [azure-ad-b2c-user-migration](https://github.com/azure-ad-b2c/user-migration)
+- Select protocols
+ - If you use Kerberos, Microsoft Windows NT LAN Manager (NTLM), and Web Services Federation (WS-Fed), see the video, [Azure Active Directory: Application and identity migration to Azure AD B2C](https://www.bing.com/videos/search?q=application+migration+in+azure+ad+b2c&docid=608034225244808069&mid=E21B87D02347A8260128E21B87D02347A8260128&view=detail&FORM=VIRE)
+
+After migration, your applications can support modern identity protocols such as OAuth 2.0 and OpenID Connect (OIDC).
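
For the back-end client credentials flow mentioned in the requirements above, a minimal sketch with MSAL for Python looks like the following. The tenant, client ID, secret handling, and scope are placeholders; in production, prefer a certificate or managed identity over an embedded secret.

```python
# Minimal sketch: a daemon or back-end service acquiring a token with the
# client credentials flow using MSAL for Python. All values are placeholders.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<application-client-id>",
    client_credential="<client-secret>",              # prefer a certificate in production
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Request a token for the application itself (no signed-in user).
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("Token acquired, expires in", result.get("expires_in"), "seconds")
else:
    print("Error:", result.get("error"), result.get("error_description"))
```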
### Stakeholders
-When technology projects fail, it's typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right
-stakeholders](./active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholders understand their roles.
+Technology project success depends on managing expectations, outcomes, and responsibilities.
-- Identify the primary architect, project manager, and owner for the application.
+- Identify the application architect, technical program manager, and owner
+- Create a distribution list (DL) to communicate with the Microsoft account or engineering teams
+ - Ask questions, get answers, and receive notifications
+- Identify a partner or resource outside your organization to support you
-- Consider providing a Distribution List (DL). Using this DL, you can communicate product issues with the Microsoft account team or engineering. You can ask questions, and receive important notifications.
+Learn more: [Include the right stakeholders](./active-directory-deployment-plans.md)
-- Identify a partner or resource outside of your organization who can support you.
+### Communications
-### Communication
+Communicate proactively and regularly with your users about pending and current changes. Inform them about how the experience changes, when it changes, and provide a contact for support.
-Communication is critical to the success of any new service. Proactively communicate to your users about the change. Timely inform them about how their experience will change, when it will change, and how to gain support if they experience issues.
+### Timelines
-### Timeline
+Help set realistic expectations and make contingency plans to meet key milestones:
-Define clear expectations and follow up plans to meet key milestones:
--- Expected pilot date--- Expected launch date--- Any dates that may affect project delivery date
+- Pilot date
+- Launch date
+- Dates that affect delivery
+- Dependencies
## Implement an Azure AD B2C deployment
-This phase includes the following capabilities:
-
-| Capability | Description |
-|:-|:--|
-| [Deploy authentication and authorization](#deploy-authentication-and-authorization) | Understand the [authentication and authorization](../develop/authentication-vs-authorization.md) scenarios |
-| [Deploy applications and user identities](#deploy-applications-and-user-identities) | Plan to deploy client application and migrate user identities |
-| [Client application onboarding and deliverables](#client-application-onboarding-and-deliverables) | Onboard the client application and test the solution |
-| [Security](#security) | Enhance the security of your Identity solution |
-|[Compliance](#compliance) | Address regulatory requirements |
-|[User experience](#user-experience) | Enable a user-friendly service |
+* **Deploy applications and user identities** - Deploy client application and migrate user identities
+* **Client application onboarding and deliverables** - Onboard the client application and test the solution
+* **Security** - Enhance the identity solution security
+* **Compliance** - Address regulatory requirements
+* **User experience** - Enable a user-friendly service
### Deploy authentication and authorization -- Start with [setting up an Azure AD B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md).--- For business driven authorization, use the [Azure AD B2C Identity Experience Framework (IEF) sample user journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements)--- Try [Open policy agent](https://www.openpolicyagent.org/).-
-Learn more about Azure AD B2C in [this developer course](https://aka.ms/learnaadb2c).
+* Before your applications interact with Azure AD B2C, register them in a tenant you manage
+ * See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md)
+* For authorization, use the Identity Experience Framework (IEF) sample user journeys
+ * See, [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements)
+* Use policy-based control for cloud-native environments
+ * Go to openpolicyagent.org to learn about [Open Policy Agent](https://www.openpolicyagent.org/) (OPA)
-Follow this sample checklist for more guidance:
+Learn more with the Microsoft Identity PDF, [Gaining expertise with Azure AD B2C](https://aka.ms/learnaadb2c), a course for developers.
-- Identify the different personas that need access to your application.
+### Checklist for personas, permissions, delegation, and calls
-- Define how you manage permissions and entitlements in your existing system today and how to plan for the future.--- Check if you have a permission store and if there any permissions that need to be added to the directory.--- If you need delegated administration define how to solve it. For example, your customers' customers management.--- Check if your application calls directly an API Manager (APIM). There may be a need to call from the IdP before issuing a token to the application.
+* Identify the personas that need access to your application
+* Define how you manage system permissions and entitlements today, and in the future
+* Confirm you have a permission store and if there are permissions to add to the directory
+* Define how you manage delegated administration
+ * For example, your customers' customers management
+* Verify your application calls an API Manager (APIM)
+ * There might be a need to call from the IdP before the application is issued a token
### Deploy applications and user identities
-All Azure AD B2C projects start with one or more client applications, which may have different business goals.
-
-1. [Create or configure client applications](../../active-directory-b2c/app-registrations-training-guide.md). Refer to these [code samples](../../active-directory-b2c/integrate-with-app-code-samples.md) for implementation.
-
-2. Next, setup your user journey based on built-in or custom user flows. [Learn when to use user flows vs. custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies).
-
-3. Setup IdPs based on your business need. [Learn how to add Azure Active Directory B2C as an IdP](../../active-directory-b2c/add-identity-provider.md).
-
-4. Migrate your users. [Learn about user migration approaches](../../active-directory-b2c/user-migration.md). Refer to [Azure AD B2C IEF sample user journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios.
-
-Consider this sample checklist as you **deploy your applications**:
--- Check the number of applications that are in scope for the CIAM deployment.--- Check the type of applications that are in use. For example, traditional web applications, APIs, Single page apps (SPA), or Native mobile applications.--- Check the kind of authentication that is in place. For example, forms based, federated with SAML, or federated with OIDC.
- - If OIDC, check the response type - code or id_token.
--- Check if all the frontend and backend applications are hosted in on-premises, cloud, or hybrid-cloud.--- Check the platforms/languages used such as, [ASP.NET](../../active-directory-b2c/quickstart-web-app-dotnet.md), Java, and Node.js.--- Check where the current user attributes are stored. It could be Lightweight Directory Access Protocol (LDAP) or databases.-
-Consider this sample checklist as you **deploy user identities**:
--- Check the number of users accessing the applications.--- Check the type of IdPs that are needed. For example, Facebook, local account, and [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services).--- Outline the claim schema that is required from your application, [Azure AD B2C](../../active-directory-b2c/claimsschema.md), and your IdPs if applicable.--- Outline the information that is required to capture during a [sign-in/sign-up flow](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow).
+Azure AD B2C projects start with one or more client applications.
+
+* [The new App registrations experience for Azure Active Directory B2C](../../active-directory-b2c/app-registrations-training-guide.md)
+ * Refer to [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md) for implementation
+* Set up your user journey based on custom user flows
+ * [Comparing user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies)
+ * [Add an identity provider to your Azure Active Directory B2C tenant](../../active-directory-b2c/add-identity-provider.md)
+ * [Migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md)
+ * [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios
+
+### Application deployment checklist
+
+* Applications included in the CIAM deployment
+* Applications in use
+ * For example, web applications, APIs, single-page apps (SPAs), or native mobile applications
+* Authentication in use:
+ * For example, forms-based, federated with SAML, or federated with OIDC
+ * If OIDC, confirm the response type: code or id_token (a sketch of the authorization request follows this list)
+* Determine where front-end and back-end applications are hosted: on-premises, cloud, or hybrid-cloud
+* Confirm the platforms or languages in use:
+ * For example, ASP.NET, Java, and Node.js
+ * See, [Quickstart: Set up sign in for an ASP.NET application using Azure AD B2C](../../active-directory-b2c/quickstart-web-app-dotnet.md)
+* Verify where user attributes are stored
+ * For example, Lightweight Directory Access Protocol (LDAP) or databases
+
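The response type determines what the authorization endpoint returns for the flow in use. As a minimal sketch of that difference (the tenant name, user flow name, client ID, and redirect URI below are placeholder assumptions, not values from this article), the following Python builds an Azure AD B2C authorization request URL where `response_type` selects between an authorization code and an ID token:

```python
from urllib.parse import urlencode

# Placeholder values - substitute your own tenant, user flow, and app registration.
tenant = "contoso"
user_flow = "B2C_1_signupsignin"
client_id = "11111111-1111-1111-1111-111111111111"

params = {
    "client_id": client_id,
    "response_type": "code",           # use "id_token" for an ID-token-only response (also add a nonce)
    "redirect_uri": "https://jwt.ms",  # a redirect URI registered on the app, used here for testing
    "response_mode": "query",
    "scope": f"openid offline_access {client_id}",
    "state": "arbitrary-state-value",
}

# Typical B2C endpoint layout: {tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user flow}
authorize_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
    f"/oauth2/v2.0/authorize?{urlencode(params)}"
)
print(authorize_url)
```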
+### User identity deployment checklist
+
+* Confirm the number of users accessing applications
+* Determine the IdP types needed:
+ * For example, Facebook, local account, and Active Directory Federation Services (AD FS)
+ * See, [Active Directory Federation Services](/windows-server/identity/active-directory-federation-services)
+* Outline the claim schema required from your application, Azure AD B2C, and IdPs if applicable
+ * See, [ClaimsSchema](../../active-directory-b2c/claimsschema.md); a token-inspection sketch follows this checklist
+* Determine the information to collect during sign-in and sign-up
+ * [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow)
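When outlining the claim schema, it can help to inspect the tokens a user flow actually returns. The following Python is a minimal sketch that decodes a JWT payload for inspection only; it does not validate the signature, and the token variable is a placeholder:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Return the claims in a JWT payload without validating the signature (inspection only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# id_token = "<paste an ID token issued by your user flow>"
# print(sorted(decode_jwt_claims(id_token)))  # for example: ['aud', 'emails', 'exp', 'given_name', ...]
```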
### Client application onboarding and deliverables
-Consider this sample checklist while you **onboard an application**:
-
-| Task | Description |
-|:--|:-|
-| Define the target group of the application | Check if this application is an end customer application, business customer application, or a digital service. Check if there is a need for employee login. |
-| Identify the business value behind an application | Understand the full business case behind an application to find the best fit of Azure AD B2C solution and integration with further client applications.|
-| Check the identity groups you have | Cluster identities in different types of groups with different types of requirements, such as **Business to Customer** (B2C) for end customers and business customers, **Business to Business** (B2B) for partners and suppliers, **Business to Employee** (B2E) for your employees and external employees, **Business to Machine** (B2M) for IoT device logins and service accounts.|
-| Check the IdP you need for your business needs and processes | Azure AD B2C [supports several types of IdPs](../../active-directory-b2c/add-identity-provider.md#select-an-identity-provider) and depending on the use case the right IdP should be chosen. For example, for a Customer to Customer mobile application a fast and easy user login is required. In another use case, for a Business to Customer with digital services additional compliance requirements are necessary. The user may need to log in with their business identity such as E-mail login. |
-| Check the regulatory constraints | Check if there is any reason to have remote profiles or specific privacy policies. |
-| Design the sign-in and sign-up flow | Decide whether an email verification or email verification inside sign-ups will be needed. First check-out process such as Shop systems or [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) is needed or not. Watch [this video](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
-| Check the type of application and authentication protocol used or that will be implemented | Information exchange about the implementation of client application such as Web application, SPA, or Native application. Authentication protocols for client application and Azure AD B2C could be OAuth, OIDC, and SAML. Watch [this video](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9)|
-| Plan user migration | Discuss the possibilities of [user migration with Azure AD B2C](../../active-directory-b2c/user-migration.md). There are several scenarios possible such as Just In Times (JIT) migration, and bulk import/export. Watch [this video](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2). You can also consider using [Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=3) for user migration.|
-
-Consider this sample checklist while you **deliver**.
-
-| Capability | Description |
-|:--|:-|
-|Protocol information| Gather the base path, policies, metadata URL of both variants. Depending on the client application, specify the attributes such as sample login, client application ID, secrets, and redirects.|
-| Application samples | Refer to the provided [sample codes](../../active-directory-b2c/integrate-with-app-code-samples.md). |
-|Pen testing | Before the tests, inform your operations team about the pen tests and then test all user flows including the OAuth implementation. Learn more about [Penetration testing](../../security/fundamentals/pen-testing.md) and the [Microsoft Cloud unified penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
-| Unit testing | Perform unit testing and generate tokens [using Resource owner password credential (ROPC) flows](../develop/v2-oauth-ropc.md). If you hit the Azure AD B2C token limit, [contact the support team](../../active-directory-b2c/support-options.md). Reuse tokens to reduce investigation efforts on your infrastructure. [Setup a ROPC flow](../../active-directory-b2c/add-ropc-policy.md?pivots=b2c-user-flow&tabs=app-reg-ga).|
-| Load testing | Expect reaching Azure AD B2C [service limits](../../active-directory-b2c/service-limits.md). Evaluate the expected number of authentications per month your service will have. Evaluate the expected number of average user logins per month. Assess the expected high load traffic durations and business reason such as holidays, migrations, and events. Evaluate the expected peak sign-up rate, for example, number of requests per second. Evaluate the expected peak traffic rate with MFA, for example, requests per second. Evaluate the expected traffic geographic distribution and their peak rates.
+Use the following checklist for onboarding an application.
+
+|Area|Description|
+|||
+|Application target user group | Select among end customers, business customers, or a digital service. </br>Determine a need for employee sign-in.|
+|Application business value| Understand the business need and/or goal to determine the best Azure AD B2C solution and integration with other client applications.|
+|Your identity groups| Cluster identities into groups with requirements, such as business-to-consumer (B2C), business-to-business (B2B), business-to-employee (B2E), and business-to-machine (B2M) for IoT device sign-in and service accounts.|
+|Identity provider (IdP)| See, [Select an identity provider](../../active-directory-b2c/add-identity-provider.md#select-an-identity-provider). For example, for a customer-to-customer (C2C) mobile app, use an easy sign-in process. </br>B2C with digital services has compliance requirements. </br>Consider email sign-in. |
+|Regulatory constraints | Determine a need for remote profiles or privacy policies. |
+|Sign-in and sign-up flow | Confirm whether email verification is needed during sign-up. </br>For check-out processes, see [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). </br>See the video, [Azure AD: Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
+|Application and authentication protocol| Implement client applications such as Web application, single-page application (SPA), or native. </br>Authentication protocols for client application and Azure AD B2C: OAuth, OIDC, and SAML. </br>See the video, [Azure AD: Protecting Web APIs with Azure AD](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9).|
+| User migration | Confirm if you'll [migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md): Just-in-time (JIT) migration and bulk import/export (a Microsoft Graph sketch follows this table). </br>See the video, [Azure Active Directory: Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2).|
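For bulk import, pre-creating local accounts with Microsoft Graph is one common approach. The following Python sketch assumes an app registration that has been granted the Microsoft Graph `User.ReadWrite.All` application permission with admin consent; the tenant, client ID, secret, and user details are placeholders:

```python
import msal
import requests

TENANT = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_SECRET = "<client-secret>"

# Acquire an app-only token for Microsoft Graph.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT}",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# Create a local (email) account for a migrated user.
new_user = {
    "displayName": "Migrated User",
    "identities": [
        {
            "signInType": "emailAddress",
            "issuer": TENANT,
            "issuerAssignedId": "user@example.com",
        }
    ],
    "passwordProfile": {
        "password": "<random-initial-password>",
        "forceChangePasswordNextSignIn": False,
    },
    "passwordPolicies": "DisablePasswordExpiration",
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=new_user,
)
resp.raise_for_status()
print("Created user:", resp.json()["id"])
```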
+
+Use the following checklist for delivery.
+
+|Area| Description|
+|||
+|Protocol information| Gather the base path, policies, and metadata URL of both variants. </br>Specify attributes such as sample sign-in, client application ID, secrets, and redirects.|
+|Application samples | See, [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md).|
+|Penetration testing | Inform your operations team about pen tests, then test user flows including the OAuth implementation. </br>See, [Penetration testing](../../security/fundamentals/pen-testing.md) and [Penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+| Unit testing | Unit test and generate tokens. </br>See, [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md). </br>If you reach the Azure AD B2C token limit, see [Azure AD B2C: File Support Requests](../../active-directory-b2c/support-options.md). </br>Reuse tokens to reduce investigation efforts on your infrastructure. </br>[Set up a resource owner password credentials flow in Azure Active Directory B2C](../../active-directory-b2c/add-ropc-policy.md?pivots=b2c-user-flow&tabs=app-reg-ga). </br>A ROPC token-request sketch follows this table.|
+| Load testing | Learn about [Azure AD B2C service limits and restrictions](../../active-directory-b2c/service-limits.md). </br>Calculate the expected authentications and user sign-ins per month. </br>Assess high load traffic durations and business reasons: holiday, migration, and event. </br>Determine expected peak rates for sign-up, traffic, and geographic distribution, for example per second.
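For unit tests that need tokens without interactive sign-in, an ROPC user flow can be called from test code. The following Python sketch uses MSAL; the tenant, policy, client ID, and test credentials are placeholders, and the exact authority format accepted for b2clogin.com can vary by MSAL version:

```python
import msal

# Placeholder values - substitute your B2C tenant, ROPC user flow, and app registration.
TENANT = "contoso"
POLICY = "B2C_1_ROPC"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"

app = msal.PublicClientApplication(
    CLIENT_ID,
    authority=f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/{POLICY}",
    # Some MSAL versions need instance_discovery=False for hosts other than login.microsoftonline.com.
)

result = app.acquire_token_by_username_password(
    username="test-user@example.com",
    password="<test-account-password>",
    scopes=["openid", CLIENT_ID],  # requesting an access token for the app itself
)

if "access_token" in result:
    # Cache and reuse this token across test runs to stay under service limits.
    print("token acquired")
else:
    print(result.get("error"), result.get("error_description"))
```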
### Security
-Consider this sample checklist to enhance the security of your application depending on your business needs:
--- Check if strong authentication method such as [MFA](../authentication/concept-mfa-howitworks.md) is required. For users who trigger high value transactions or other risk events its suggested to use MFA. For example, for banking and finance applications, online shops - first checkout process.
--- Check if MFA is required, [check the methods available to do MFA](../authentication/concept-authentication-methods.md) such as SMS/Phone, email, and third-party services.
--- Check if any anti-bot mechanism is in use today with your applications.
+Use the following checklist to enhance application security.
-- Assess the risk of attempts to create fraudulent accounts and log-ins. Use [Microsoft Dynamics 365 Fraud Protection assessment](../../active-directory-b2c/partner-dynamics-365-fraud-protection.md) to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
+* Authentication method, such as multi-factor authentication (MFA):
+ * MFA is recommended for users that trigger high-value transactions or other risk events. For example, banking, finance, and check-out processes.
+ * See, [What authentication and verification methods are available in Azure AD?](../authentication/concept-authentication-methods.md)
+* Confirm use of anti-bot mechanisms
+* Assess the risk of attempts to create a fraudulent account or sign-in
+ * See, [Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C](../../active-directory-b2c/partner-dynamics-365-fraud-protection.md)
+* Confirm needed conditional postures as part of sign-in or sign-up
-- Check for any special conditional postures that need to be applied as part of sign-in or sign-up for accounts with your application.
+#### Conditional Access and identity protection
->[!NOTE]
->You can use [Conditional Access rules](../conditional-access/overview.md) to adjust the difference between user experience and security based on your business goals.
-
-For more information, see [Identity Protection and Conditional Access in Azure AD B2C](../../active-directory-b2c/conditional-access-identity-protection-overview.md).
+* The modern security perimeter now extends beyond an organization's network. The perimeter includes user and device identity.
+ * See, [What is Conditional Access?](../conditional-access/overview.md)
+* Enhance the security of Azure AD B2C with Azure AD identity protection
+ * See, [Identity Protection and Conditional Access in Azure AD B2C](../../active-directory-b2c/conditional-access-identity-protection-overview.md)
### Compliance
-To satisfy certain regulatory requirements you may consider using vNets, IP restrictions, Web Application Firewall (WAF), and similar services to enhance the security of your backend systems.
-
-To address basic compliance requirements, consider:
+To help comply with regulatory requirements and enhance back-end system security, you can use virtual networks (VNets), IP restrictions, Web Application Firewall (WAF), and similar services. Consider the following requirements:
-- The specific regulatory compliance requirements, for example, PCI-DSS that you need to support.
--- Check if it's required to store data into a separate database store. If so, check if this information must never be written into the directory.
+* Your regulatory compliance requirements
+ * For example, Payment Card Industry Data Security Standard (PCI-DSS)
+ * Go to pcisecuritystandards.org to learn more about the [PCI Security Standards Council](https://www.pcisecuritystandards.org/)
+* Data storage into a separate database store
+ * Determine whether this information must never be written into the directory
### User experience
-Consider the sample checklist to define the user experience (UX) requirements:
--- Identify the required integrations to [extend CIAM capabilities and build seamless end-user experiences](../../active-directory-b2c/partner-gallery.md).
--- Provide screenshots and user stories to show the end-user experience for the existing application. For example, provide screenshots for sign-in, sign-up, combined sign-up sign-in (SUSI), profile edit, and password reset.
--- Look for existing hints passed through using queryString parameters in your current CIAM solution.
--- If you expect high UX customization such as pixel to pixel, you may need a front-end developer to help you.
+Use the following checklist to help define user experience requirements.
-- Azure AD B2C provides capabilities for customizing HTML and CSS, however, it has additional requirements for [JavaScript](../../active-directory-b2c/javascript-and-page-layout.md?pivots=b2c-custom-policy#guidelines-for-using-javascript).
+* Identify integrations to extend CIAM capabilities and build seamless end-user experiences
+ * [Azure Active Directory B2C ISV partners](../../active-directory-b2c/partner-gallery.md)
+* Use screenshots and user stories to show the application end-user experience
+ * For example, screenshots of sign-in, sign-up, sign-up/sign-in (SUSI), profile edit, and password reset
+* Look for hints passed through by using queryString parameters in your CIAM solution
+* For high user-experience customization, consider using a front-end developer
+* In Azure AD B2C, you can customize HTML and CSS
+ * See, [Guidelines for using JavaScript](../../active-directory-b2c/javascript-and-page-layout.md?pivots=b2c-custom-policy#guidelines-for-using-javascript)
+* Implement an embedded experience by using iframe support:
+ * See, [Embedded sign-up or sign-in experience](../../active-directory-b2c/embedded-login.md?pivots=b2c-custom-policy)
+ * For a single-page application, use a second sign-in HTML page that loads into the `<iframe>` element
-- An embedded experience can be implemented [using iframe support](../../active-directory-b2c/embedded-login.md?pivots=b2c-custom-policy). For a single-page application, you'll also need a second "sign-in" HTML page that loads into the `<iframe>` element.
+## Monitoring, auditing, and logging
-## Monitor an Azure AD B2C solution
+Use the following checklist for monitoring, auditing, and logging.
-This phase includes the following capabilities:
+* Monitoring
+ * [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md)
+ * See the video [Azure Active Directory: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)
+* Auditing and logging
+ * [Accessing Azure AD B2C audit logs](../../active-directory-b2c/view-audit-logs.md); a Microsoft Graph query sketch follows this list
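Audit events can also be retrieved programmatically for archiving or alerting. The following Python sketch queries the Microsoft Graph `directoryAudits` endpoint; it assumes an app registration with the `AuditLog.Read.All` application permission, the tenant, client ID, and secret are placeholders, and the service filter value is an assumption to adjust as needed:

```python
import msal
import requests

TENANT = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT}",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# List recent audit events logged by the B2C service.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    params={"$filter": "loggedByService eq 'B2C'", "$top": "25"},
)
resp.raise_for_status()
for event in resp.json().get("value", []):
    print(event["activityDateTime"], event["activityDisplayName"])
```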
-| Capability | Description |
-|:|:-|
-| Monitoring |[Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md). Watch [this video](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)|
-| Auditing and Logging | [Access and review audit logs](../../active-directory-b2c/view-audit-logs.md)
-
-## More information
-
-To accelerate Azure AD B2C deployments and monitor the service at scale, see these articles:
--- [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-get-started.md)
--- [Manage Azure AD B2C user accounts with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
+## Resources
+- [Register a Microsoft Graph application](../../active-directory-b2c/microsoft-graph-get-started.md)
+- [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
- [Deploy custom policies with Azure Pipelines](../../active-directory-b2c/deploy-custom-policies-devops.md)
- - [Manage Azure AD B2C custom policies with Azure PowerShell](../../active-directory-b2c/manage-custom-policies-powershell.md)
-- [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md)
- ## Next steps
-- [Azure AD B2C best practices](../../active-directory-b2c/best-practices.md)
--- [Azure AD B2C service limits](../../active-directory-b2c/service-limits.md)
+[Recommendations and best practices for Azure Active Directory B2C](../../active-directory-b2c/best-practices.md)
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
Previously updated : 12/5/2022 Last updated : 01/09/2023 + # Azure Active Directory and data residency
-Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
+Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
## Core Store
-Update or retrieval data operations in the Azure AD Core Store relate to a single tenant based on the userΓÇÖs security token, which achieves tenant isolation. The Core Store is made up of tenants stored in scale units, each of which contains multiple tenants. Azure AD replicates each scale unit in the physical data centers of a logical region for resiliency and performance.
+The Core Store is made up of tenants stored in scale units, each of which contains multiple tenants. Update or retrieval data operations in the Azure AD Core Store relate to a single tenant, based on the user's security token, which achieves tenant isolation. Scale units are assigned to a geo-location. Each geo-location uses two or more Azure regions to store the data. In each Azure region, scale unit data is replicated in the physical data centers for resiliency and performance.
Learn more: [Azure Active Directory Core Store Scale Units](https://www.youtube.com/watch?v=OcKO44GtHh8)
-Currently Azure AD has the following regions:
+Azure AD is available in the following clouds:
-* North America
-* Europe, Middle East, and Africa (EMEA)
-* Australia
+* Public
* China
-* Japan
-* [United States government](https://azure.microsoft.com/global-infrastructure/government/)
-* Worldwide
+* US government
-Azure AD handles directory data based on usability, performance, residency and/or other requirements based on geography. The term residency indicates Microsoft provides assurance the data isnΓÇÖt persisted outside the geography.
+In the public cloud, you're prompted to select a location at the time of tenant creation (for example, signing up for Office 365 or Azure, or creating more Azure AD instances through the Azure portal). Azure AD maps the selection to a geo-location and a single scale unit in it. Tenant location can't be changed after it's set.
-Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
+The location selected during tenant creation will map to one of the following geo-locations:
-* Directory data stored in data centers closest to the tenant-residency location, to reduce latency and provide fast user sign-in times
-* Directory data stored in geographically isolated data centers to assure availability during unforeseen single-datacenter, catastrophic events
-* Compliance with data residency, or other requirements, for specific customers and countries/regions or geographies
+* Australia
+* Asia/Pacific
+* Europe, Middle East, and Africa (EMEA)
+* Japan
+* North America
+* Worldwide
+
+Azure AD handles Core Store data based on usability, performance, residency and/or other requirements based on geo-location. The term residency indicates Microsoft provides assurance the data isn't persisted outside the geo-location.
-During tenant creation (for example, signing up for Office 365 or Azure, or creating more Azure AD instances through the Azure portal) you select a country/region as the primary location. Azure AD maps the selection to a logical region and a single scale unit in it. Tenant location canΓÇÖt be changed after itΓÇÖs set.
+Azure AD replicates each tenant through its scale unit, across data centers, based on the following criteria:
+
+* Azure AD Core Store data, stored in data centers closest to the tenant-residency location, to reduce latency and provide fast user sign-in times
+* Azure AD Core Store data stored in geographically isolated data centers to assure availability during unforeseen single-datacenter, catastrophic events
+* Compliance with data residency, or other requirements, for specific customers and geo-locations
## Azure AD cloud solution models
-Use the following table to see Azure AD cloud solution models based on infrastructure, data location, and operation sovereignty.
+Use the following table to see Azure AD cloud solution models based on infrastructure, data location, and operational sovereignty.
-|Model|Model regions|Data location|Operations personnel|Customer support|Put a tenant in this model|
-|||||||
-|Regional (2)|North America, EMEA, Japan|At rest, in the target region. Exceptions by service or feature|Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose the country/region in the residency.|
-|Worldwide|Worldwide||Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Microsoft, globally|Create the tenant in the sign-up experience. Choose a country/region without a regional model.|
-|Sovereign or national clouds|US government, China|At rest, in the target country or region. No exceptions.|Operated by a data custodian (1). Personnel are screened according to requirements.|Microsoft, country or region|Each national cloud instance has a sign-up experience.
+|Model|Locations|Data location|Operations personnel|Put a tenant in this model|
+||||||
+|Public geo located|North America, EMEA, Japan, Asia/Pacific|At rest, in the target location. Exceptions by service or feature|Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Create the tenant in the sign-up experience. Choose the location for data residency.|
+|Public worldwide|Worldwide|All locations|Operated by Microsoft. Microsoft datacenter personnel must pass a background check.|Tenant creation is available via an official support channel and is subject to Microsoft discretion.|
+|Sovereign or national clouds|US government, China|At rest, in the target location. No exceptions.|Operated by a data custodian (1). Personnel are screened according to requirements.|Each national cloud instance has a sign-up experience.|
**Table references**:
-(1) **Data custodians**: Data centers in the Worldwide region are operated by Microsoft. In China, Azure AD is operated through a partnership with [21Vianet](/microsoft-365/admin/services-in-china/services-in-china?redirectSourcePath=%252fen-us%252farticle%252fLearn-about-Office-365-operated-by-21Vianet-a8ab5061-3346-4da0-bb7c-5260822b53ae&view=o365-21vianet&viewFallbackFrom=o365-worldwide&preserve-view=true).
-(2) **Authentication data**: Tenants outside the national clouds have authentication information at rest in the continental United States.
+(1) **Data custodians**: Data centers in the US government cloud are operated by Microsoft. In China, Azure AD is operated through a partnership with [21Vianet](/microsoft-365/admin/services-in-china/services-in-china?redirectSourcePath=%252fen-us%252farticle%252fLearn-about-Office-365-operated-by-21Vianet-a8ab5061-3346-4da0-bb7c-5260822b53ae&view=o365-21vianet&viewFallbackFrom=o365-worldwide&preserve-view=true).
Learn more:
+* [Customer data storage and processing for European customers in Azure AD](/azure/active-directory/fundamentals/active-directory-data-storage-eu)
* Power BI: [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
* [What is the Azure Active Directory architecture?](https://aka.ms/aadarch)
* [Find the Azure geography that meets your needs](https://azure.microsoft.com/overview/datacenters/how-to-choose/)
Learn more:
## Data residency across Azure AD components
-In addition to authentication service data, Azure AD components and service data are stored on servers in the Azure AD instanceΓÇÖs region.
- Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com/cloud-platform/azure-active-directory-features) > [!NOTE]
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
### Azure AD components and data storage location
-Data storage for Azure AD components includes authentication, identity, MFA, and others. In the following table, data includes End User Identifiable Information (EUII) and Customer Content (CC).
-
|Azure AD component|Description|Data storage location|
||||
-|Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, theyΓÇÖre routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In region|
-|Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and in Microsoft Elastic Search reporting services. |In region|
+|Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, theyΓÇÖre routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In geo location|
+|Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and in Microsoft Elastic Search reporting services. |In geo location|
|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](/azure/active-directory/authentication/concept-mfa-data-residency). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
-|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In region|
-|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In region|
-|Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In region|
-|Azure AD Application Proxy|Azure AD Application Proxy stores metadata about the tenant, connector machines, and configuration data in Azure SQL.|In region|
+|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In geo location|
+|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In geo location|
+|Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In geo location|
+|Azure AD Application Proxy|Azure AD Application Proxy stores metadata about the tenant, connector machines, and configuration data in Azure SQL.|In geo location|
|Azure AD password reset |Azure AD password reset is a back-end service using Redis Cache to track session state. To learn more, go to redis.com to see [Introduction to Redis](https://redis.io/docs/about/).|See, Intro to Redis link in center column.|
-|Azure AD password writeback in Azure AD Connect|During initial configuration, Azure AD Connect generates an asymmetric keypair, using the RivestΓÇôShamirΓÇôAdleman (RSA) cryptosystem. It then sends the public key to the self-service password reset (SSPR) cloud service, which performs two operations: </br></br>1. Creates two Azure Service Bus relays for the Azure AD Connect on-premises service to communicate securely with the SSPR service </br> 2. Generates an Advanced Encryption Standard (AES) key, K1 </br></br> The Azure Service Bus relay locations, corresponding listener keys, and a copy of the AES key (K1) goes to Azure AD Connect in the response. Future communications between SSPR and Azure AD Connect occur over the new ServiceBus channel and are encrypted using SSL. </br> New password resets, submitted during operation, are encrypted with the RSA public key generated by the client during onboarding. The private key on the Azure AD Connect machine decrypts them, which prevents pipeline subsystems from accessing the plaintext password. </br> The AES key encrypts the message payload (encrypted passwords, more data, and metadata), which prevents malicious ServiceBus attackers from tampering with the payload, even with full access to the internal ServiceBus channel. </br> For password writeback, Azure AD Connect need keys and data: </br></br> - The AES key (K1) that encrypts the reset payload, or change requests from the SSPR service to Azure AD Connect, via the ServiceBus pipeline </br> - The private key, from the asymmetric key pair that decrypts the passwords, in reset or change request payloads </br> - The ServiceBus listener keys </br></br> The AES key (K1) and the asymmetric keypair rotate a minimum of every 180 days, a duration you can change during certain onboarding or offboarding configuration events. An example is a customer disables and re-enables password writeback, which might occur during component upgrade during service and maintenance. </br> The writeback keys and data stored in the Azure AD Connect database are encrypted by data protection application programming interfaces (DPAPI) (CALG_AES_256). The result is the master ADSync encryption key stored in the Windows Credential Vault in the context of the ADSync on-premises service account. The Windows Credential Vault supplies automatic secret re-encryption as the password for the service account changes. To reset the service account password invalidates secrets in the Windows Credential Vault for the service account. Manual changes to a new service account might invalidate the stored secrets.</br> By default, the ADSync service runs in the context of a virtual service account. The account might be customized during installation to a least-privileged domain service account, a managed service account (MSA), or a group managed service account (gMSA). While virtual and managed service accounts have automatic password rotation, customers manage password rotation for a custom provisioned domain account. As noted, to reset the password causes loss of stored secrets. |In region|
-|Azure AD Device Registration Service |Azure AD Device Registration Service has computer and device lifecycle management in the directory, which enable scenarios such as device-state conditional access, and mobile device management.|In region|
-|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In region|
-|Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship, with another tenant, result in user data copied in other tenants, which might have data residency implications.|In region|
-|Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before itΓÇÖs passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In region|
-|Azure AD managed identities for Azure resources|Azure AD managed identities for Azure resources with managed identities systems can authenticate to Azure services, without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fail over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In region|
-|Azure Active Directory business-to-consumer (B2C)|Azure Active Directory B2C is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable region|
+|Azure AD password writeback in Azure AD Connect|During initial configuration, Azure AD Connect generates an asymmetric keypair, using the RivestΓÇôShamirΓÇôAdleman (RSA) cryptosystem. It then sends the public key to the self-service password reset (SSPR) cloud service, which performs two operations: </br></br>1. Creates two Azure Service Bus relays for the Azure AD Connect on-premises service to communicate securely with the SSPR service </br> 2. Generates an Advanced Encryption Standard (AES) key, K1 </br></br> The Azure Service Bus relay locations, corresponding listener keys, and a copy of the AES key (K1) goes to Azure AD Connect in the response. Future communications between SSPR and Azure AD Connect occur over the new ServiceBus channel and are encrypted using SSL. </br> New password resets, submitted during operation, are encrypted with the RSA public key generated by the client during onboarding. The private key on the Azure AD Connect machine decrypts them, which prevents pipeline subsystems from accessing the plaintext password. </br> The AES key encrypts the message payload (encrypted passwords, more data, and metadata), which prevents malicious ServiceBus attackers from tampering with the payload, even with full access to the internal ServiceBus channel. </br> For password writeback, Azure AD Connect need keys and data: </br></br> - The AES key (K1) that encrypts the reset payload, or change requests from the SSPR service to Azure AD Connect, via the ServiceBus pipeline </br> - The private key, from the asymmetric key pair that decrypts the passwords, in reset or change request payloads </br> - The ServiceBus listener keys </br></br> The AES key (K1) and the asymmetric keypair rotate a minimum of every 180 days, a duration you can change during certain onboarding or offboarding configuration events. An example is a customer disables and re-enables password writeback, which might occur during component upgrade during service and maintenance. </br> The writeback keys and data stored in the Azure AD Connect database are encrypted by data protection application programming interfaces (DPAPI) (CALG_AES_256). The result is the master ADSync encryption key stored in the Windows Credential Vault in the context of the ADSync on-premises service account. The Windows Credential Vault supplies automatic secret re-encryption as the password for the service account changes. To reset the service account password invalidates secrets in the Windows Credential Vault for the service account. Manual changes to a new service account might invalidate the stored secrets.</br> By default, the ADSync service runs in the context of a virtual service account. The account might be customized during installation to a least-privileged domain service account, a managed service account (MSA), or a group managed service account (gMSA). While virtual and managed service accounts have automatic password rotation, customers manage password rotation for a custom provisioned domain account. As noted, to reset the password causes loss of stored secrets. |In geo location|
+|Azure AD Device Registration Service |Azure AD Device Registration Service has computer and device lifecycle management in the directory, which enable scenarios such as device-state conditional access, and mobile device management.|In geo location|
+|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In geo location|
+|Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship, with another tenant, result in user data copied in other tenants, which might have data residency implications.|In geo location|
+|Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before itΓÇÖs passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In geo location|
+|Azure AD managed identities for Azure resources|Azure AD managed identities for Azure resources with managed identities systems can authenticate to Azure services, without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fail over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In geo location|
+|Azure Active Directory B2C |[Azure AD B2C](/azure/active-directory-b2c/data-residency) is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable geo location|
## Related resources
-For more information on data residency in Microsoft Cloud offerings see the following articles:
+For more information on data residency in Microsoft Cloud offerings, see the following articles:
* [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
* [Data Residency in Azure | Microsoft Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview)
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
Previously updated : 06/30/2022 Last updated : 01/07/2023
The following table provides a high-level comparison of the custom security attr
| Permission | Global Admin | Attribute Definition Admin | Attribute Assignment Admin | Attribute Definition Reader | Attribute Assignment Reader | | | :: | :: | :: | :: | :: | | Read attribute sets | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Read attribute definitions | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| Read attribute definitions | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Read attribute assignments for users and applications (service principals) | | | :heavy_check_mark: | | :heavy_check_mark: |
| Add or edit attribute sets | | :heavy_check_mark: | | | |
| Add, edit, or deactivate attribute definitions | | :heavy_check_mark: | | | |
Once you have a better understanding of how your attributes will be organized an
| <ul><li>Read attribute definitions in a scoped attribute set</li><li>Read attribute assignments that use attributes in a scoped attribute set for users</li><li>Read attribute assignments that use attributes in a scoped attribute set for applications (service principals)</li><li>[Assign attributes in a scoped attribute set to users](../enterprise-users/users-custom-security-attributes.md)</li><li>[Assign attributes in a scoped attribute set to applications (service principals)](../manage-apps/custom-security-attributes-apps.md)</li><li>[Author Azure role assignment conditions that use the Principal attribute for all attributes in a scoped attribute set](../../role-based-access-control/conditions-format.md#attributes)</li><li>**Cannot** read attributes in other attribute sets</li><li>**Cannot** read attribute assignments that use attributes in other attribute sets</li></ul> | [Attribute Assignment Administrator](../roles/permissions-reference.md#attribute-assignment-administrator) | ![Icon for attribute set scope.](./media/custom-security-attributes-manage/icon-attribute-set.png)<br/>Attribute set | | <ul><li>Read all attribute sets in a tenant</li><li>Read all attribute definitions in a tenant</li></ul> | [Attribute Definition Reader](../roles/permissions-reference.md#attribute-definition-reader) | ![Icon for tenant scope.](./media/custom-security-attributes-manage/icon-tenant.png)<br/>Tenant | | <ul><li>Read attribute definitions in a scoped attribute set</li><li>**Cannot** read other attribute sets</li></ul> | [Attribute Definition Reader](../roles/permissions-reference.md#attribute-definition-reader) | ![Icon for attribute set scope.](./media/custom-security-attributes-manage/icon-attribute-set.png)<br/>Attribute set |
-| <ul><li>Read all attribute sets in a tenant</li><li>Read all attribute assignments in a tenant for users</li><li>Read all attribute assignments in a tenant for applications (service principals)</li></ul> | [Attribute Assignment Reader](../roles/permissions-reference.md#attribute-assignment-reader) | ![Icon for tenant scope.](./media/custom-security-attributes-manage/icon-tenant.png)<br/>Tenant |
-| <ul><li>Read attribute assignments that use attributes in a scoped attribute set for users</li><li>Read attribute assignments that use attributes in a scoped attribute set for applications (service principals)</li><li>**Cannot** read attribute assignments that use attributes in other attribute sets</li></ul> | [Attribute Assignment Reader](../roles/permissions-reference.md#attribute-assignment-reader) | ![Icon for attribute set scope.](./media/custom-security-attributes-manage/icon-attribute-set.png)<br/>Attribute set |
+| <ul><li>Read all attribute sets in a tenant</li><li>Read all attribute definitions in a tenant</li><li>Read all attribute assignments in a tenant for users</li><li>Read all attribute assignments in a tenant for applications (service principals)</li></ul> | [Attribute Assignment Reader](../roles/permissions-reference.md#attribute-assignment-reader) | ![Icon for tenant scope.](./media/custom-security-attributes-manage/icon-tenant.png)<br/>Tenant |
+| <ul><li>Read attribute definitions in a scoped attribute set</li><li>Read attribute assignments that use attributes in a scoped attribute set for users</li><li>Read attribute assignments that use attributes in a scoped attribute set for applications (service principals)</li><li>**Cannot** read attributes in other attribute sets</li><li>**Cannot** read attribute assignments that use attributes in other attribute sets</li></ul> | [Attribute Assignment Reader](../roles/permissions-reference.md#attribute-assignment-reader) | ![Icon for attribute set scope.](./media/custom-security-attributes-manage/icon-attribute-set.png)<br/>Attribute set |
## Step 6: Assign roles
To grant access to the appropriate people, follow these steps to assign one of t
> [!NOTE] > If you are using Azure AD Privileged Identity Management (PIM), eligible role assignments at attribute set scope currently aren't supported. Permanent role assignments at attribute set scope are supported, but the **Assigned roles** page for a user doesn't list the role assignments.
-
- > [!NOTE]
- > Users with attribute set scope role assignments currently can see other attribute sets and custom security attribute definitions.
#### PowerShell
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Previously updated : 11/03/2022 Last updated : 12/21/2022
# Plan an Azure Active Directory B2B collaboration deployment
-Secure collaboration with external partners ensures that the right external partners have appropriate access to internal resources for the right length of time. Through a holistic security and governance approach, you can reduce security risks, meet compliance goals, and ensure that you know who has access.
+Secure collaboration with your external partners ensures they have the correct access to internal resources, for the expected duration. Learn about governance practices to reduce security risks, meet compliance goals, and ensure accurate access.
-Ungoverned collaboration leads to a lack of clarity on ownership of access, and the possibility of sensitive resources being exposed. Moving to secure and governed collaboration can ensure that there are clear lines of ownership and accountability for external usersΓÇÖ access. This includes:
+Governed collaboration improves clarity of ownership of access, reduces exposure of sensitive resources, and enables you to attest to access policy.
-* Managing the external organizations, and users within them, that have access to resources.
+* Manage external organizations, and their users who access resources
+* Ensure access is correct, reviewed, and time bound
+* Empower business owners to manage collaboration with delegation
-* Ensuring that access is appropriate, reviewed, and time bound where appropriate.
+Traditionally, organizations use one of two methods to collaborate:
-* Empowering business owners to manage collaboration within IT-created guard rails via delegation.
-
-Where you have a compliance requirement, governed collaboration enables you to attest to the appropriateness of access.
-
-Traditionally, organizations have used one of the two methods to collaborate:
-
-1. Creating locally managed credentials for external users, or
-2. Establishing federations with partner Identity Providers.
+* Create locally managed credentials for external users, or
+* Establish federations with partner identity providers (IdP)
-Both methods have significant drawbacks in themselves.
+Both methods have drawbacks. For more information, see the following table.
| Area of concern | Local credentials | Federation |
-|:--|:-|:-|
-| Security | - Access continues after external user terminated<br> - Usertype is ΓÇ£memberΓÇ¥ by default which grants too much default access | - No user level visibility <br> - Unknown partner security posture|
-| Expense | - Password + Multi-Factor Authentication management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | - Small partners cannot afford the infrastructure<br> - Small partners do not have the expertise<br> - Small Partners might only have consumer emails (no IT) |
-| Complexity | - Partner users need to manage an additional set of credentials | - Complexity grows with each new partner<br> - Complexity grows on partnersΓÇÖ side as well |
--
-Microsoft offers comprehensive suites of tools for secure external access. Azure Active Directory (Azure AD) B2B Collaboration is at the center of any external collaboration plan. Azure AD B2B can integrate with other tools in Azure AD, and tools in Microsoft 365 services, to help secure and manage your external access.
-
-Azure AD B2B simplifies collaboration, reduces expense, and increases security compared to traditional collaboration methods. Benefits of Azure AD B2B include:
-- External users cannot access resources if the home identity is disabled or deleted.
--- Authentication and credential management are handled by the user's home identity provider.
--- Resource tenant controls all access and authorization of guest users.
--- Can collaborate with any user who has an email address without need for partner infrastructure.
--- No need for IT departments to connect out-of-band to set up access/federation.
--- Guest user access is protected by the same enterprise-grade security as internal users.
+|-|||
+| Security | - Access continues after the external user is terminated<br> - UserType is Member by default, which grants too much default access | - No user-level visibility <br> - Unknown partner security posture|
+| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might use consumer email addresses|
+| Complexity | Partner users manage more credentials | Complexity grows with each new partner, and increases on the partner side|
-- Easy end user experience with no additional credentials needed.
+Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security.
-- Users can collaborate easily with partners without needing their IT departments involvement.
+Azure AD B2B benefits:
-- No need for Guest default permissions in the Azure AD directory can be limited or highly restricted. -
-This document set is designed to enable you to move from ad hoc or loosely governed external collaboration to a more secure state.
+- If the home identity is disabled or deleted, external users can't access resources
+- User home IdP handles authentication and credential management
+- Resource tenant controls guest-user access and authorization
+- Collaborate with users who have an email address, but no infrastructure
+- IT departments don't connect out-of-band to set up access or federation
+- Guest user access is protected by the same security processes as internal users
+- Clear end-user experience with no extra credentials required
+- Users collaborate with partners without IT department involvement
+- Guest default permissions in the Azure AD directory can be limited or highly restricted
## Next steps
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
--
-1. [Determine your security posture for external access](1-secure-access-posture.md)
-
-2. [Discover your current state](2-secure-access-current-state.md)
-
-3. [Create a governance plan](3-secure-access-plan.md)
-
-4. [Use groups for security](4-secure-access-groups.md)
-
-5. [Transition to Azure AD B2B](5-secure-access-b2b.md)
-
-6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-
-7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
-
-8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+* [Determine your security posture for external access](1-secure-access-posture.md)
+* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+* [Create a security plan for external access](3-secure-access-plan.md)
+* [Securing external access with groups](4-secure-access-groups.md)
+* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md)
+* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
+* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
+* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
-10. [Convert local guest accounts to B2B](10-secure-local-guest.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Azure Support is now available for Azure AD integration components of Microsoft
**Service category:** Enterprise Apps **Product capability:** SSO
-Previously, the number of groups you could use when you conditionally change claims based on group membership within any single application configuration was limited to 10. The use of group membership conditions in SSO claims configuration has now increased to a maximum of 50 groups. For more information on how to configure claims, refer to [Enterprise Applications SSO claims configuration](../develop/active-directory-saml-claims-customization.md#emitting-claims-based-on-conditions).
+Previously, the number of groups you could use when you conditionally change claims based on group membership within any single application configuration was limited to 10. The use of group membership conditions in SSO claims configuration has now increased to a maximum of 50 groups. For more information on how to configure claims, refer to [Enterprise Applications SSO claims configuration](../develop/active-directory-saml-claims-customization.md).
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
To use a custom task extension in your workflow, first a custom task extension m
1. In the left menu, select **Workflows (Preview)**.
-1. On the workflows screen, select **custom task extension**.
+1. On the workflows screen, select **Custom task extension**.
:::image type="content" source="media/trigger-custom-task/custom-task-extension-select.png" alt-text="Screenshot of selecting a custom task extension from a workflow overview page.":::
-1. On the custom task extensions page, select **create custom task extension**.
+1. On the custom task extensions page, select **Create custom task extension**.
:::image type="content" source="media/trigger-custom-task/create-custom-task-extension.png" alt-text="Screenshot for creating a custom task extension selection.":::
-1. On the basics page you, give a display name and description for the custom task extension and select **Next**.
+1. On the basics page, enter a unique display name and description for the custom task extension, and select **Next**.
    :::image type="content" source="media/trigger-custom-task/custom-task-extension-basics.png" alt-text="Screenshot of the basics section for creating a custom task extension.":::
1. On the **Task behavior** page, you specify how the custom task extension will behave after executing the Azure Logic App and select **Next**.
    :::image type="content" source="media/trigger-custom-task/custom-task-extension-behavior.png" alt-text="Screenshot for choose task behavior for custom task extension.":::
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
Title: 'Understanding lifecycle workflows' description: Describes an overview of Lifecycle workflows and the various parts. -+
# Understanding lifecycle workflows
-The following reference document provides an overview of a workflow created using Lifecycle Workflows. Lifecycle Workflows allow you to create workflows that automate common tasks associated with user lifecycle in organizations. Lifecycle Workflows automate tasks based on the joiner-mover-leaver cycle of lifecycle management, and splits tasks for users up into categories of where they are in the lifecycle of an organization. These categories extend into templates where they can be quickly customized to suit the needs of users in your organization. For more information, see: [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md).
+The following document provides an overview of a workflow created using Lifecycle Workflows. Workflows automate tasks based on the joiner-mover-leaver (JML) cycle of lifecycle management, and split tasks for users into categories of where they fall in the lifecycle of an organization. These categories extend into templates, where they can be quickly customized to suit the needs of users in your organization. For more information, see: [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md).
- [![Diagram of a lifecycle workflow](media/understanding-lifecycle-workflows/workflow-2.png)](media/understanding-lifecycle-workflows/workflow-2.png#lightbox)
+ [![Diagram of a lifecycle workflow.](media/understanding-lifecycle-workflows/workflow-2.png)](media/understanding-lifecycle-workflows/workflow-2.png#lightbox)
## License requirements
The following permissions are required for Lifecycle Workflows:
|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows and tasks.| Allows the app to create, update, list, read and delete all workflows and tasks related to lifecycle workflows on behalf of the signed-in user.| Yes

## Parts of a workflow
-A workflow can be broken down in to the following three main parts.
+
+A workflow can be broken down into the following three main parts:
|Workflow part|Description|
|--|--|
-|General information|This portion of a workflow covers basic information such as display name and a description of what the workflow does.|
+|General information|This portion of a workflow covers basic information such as display name, and a description of what the workflow does.|
|Tasks|Tasks are the actions that will be taken when a workflow is executed.|
-|Execution conditions| The execution condition section of a workflow sets up<br><br>- Who(scope) the workflow runs against <br><br>- When(trigger) the workflow runs|
+|Execution conditions| Defines when (trigger), and for whom (scope), a scheduled workflow will run. For more information on these two parameters, see [Trigger details](understanding-lifecycle-workflows.md#trigger-details) and [Configure Scope](understanding-lifecycle-workflows.md#configure-scope).|
## Templates
-Creating a workflow via the portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for pre-defined tasks and helps automate the creation of a workflow.
+
+Creating a workflow via the Azure portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for pre-defined tasks, and helps automate the creation of a workflow.
[![Understanding workflow template diagram.](media/understanding-lifecycle-workflows/workflow-3.png)](media/understanding-lifecycle-workflows/workflow-3.png#lightbox)
-The template will define the task that is to be used and then guide you through the creation of the workflow. The template provides input for description information and execution condition information.
+The template, depending on its category, will define which tasks are available to be used, and then guide you through the creation of the workflow. The template provides input for basic description, execution conditions, and task information.
>[!NOTE]
->Depending on the template you select, the options that will be available may vary. This document uses the **Onboarding pre-hire employee** template to illustrate the parts of a workflow.
+>Depending on the template you select, the options that will be available may vary. The images in this document use the [**Onboarding pre-hire employee**](lifecycle-workflow-templates.md#onboard-pre-hire-employee) template to illustrate the parts of a workflow.
For more information, see [Lifecycle workflow templates.](lifecycle-workflow-templates.md)
-## Workflow basics
+## Workflow overview
-After selecting a template, on the basics screen:
+Every workflow has its own overview section, where you can either take quick actions with the workflow, or view its details. This overview section is split into the three following parts:
- [![Basics of a workflow.](media/understanding-lifecycle-workflows/workflow-4.png)](media/understanding-lifecycle-workflows/workflow-4.png#lightbox)
+- Basic Information
+- My Feed
+- Quick Action
+
+In this section, you'll learn what each part tells you, and what actions you can take from this information.
-### Workflow details
-Under the workflow details section, you can provide the following information:
+### Basic Information
+
+When you select a workflow, the overview provides a list of basic details in the **Basic Information** section, such as the workflow category, its ID, when it was last modified, and when it's scheduled to run again. These details give you a quick view of the workflow's current usage for administrative purposes. Basic information is also live data, meaning any quick action that you take on the overview page is shown immediately within this section.
+
+Within the **Basic Information** you can view the following information:
 |Name|Description|
 |--|--|
 |Name|The name of the workflow.|
 |Description|A brief description of the workflow.|
+ |Category|A string identifying the category of the workflow.|
+ |Date Created|The date and time the workflow was created.|
+ |Workflow ID|A unique identifier for the workflow.|
+ |Schedule|Defines if the workflow is currently scheduled to run.|
+ |Last run date|The last date and time the workflow ran.|
+ |Last Modified|The last date and time the workflow was modified.|
-### Trigger details
-Under the trigger details section, you can provide the following information.
+### My Feed
- |Name|Description|
- |--|--|
- |Days for event|The number of days before or after the date specified in the **Event user attribute**.|
+The **My Feed** section of the workflow overview provides a quick look into when and how the workflow ran. This section also allows you to quickly jump to the target areas for more information. The following information is provided:
-This section defines **when** the workflow will run. Currently, there are two supported types of triggers:
-
-- Trigger and scope based - runs the task on all users in scope once the workflow is triggered.-- On-demand - can be run immediately. Typically used for real-time employee terminations.
+- Next target run: The date and time of the next scheduled workflow run.
+- Total processed users: The total number of users processed by the workflow.
+- Processed users with failures: The total users processed with failed status by the workflow.
+- Failed tasks: The total number of failed tasks.
+- Number of tasks: The total number of tasks within the workflow.
+- Current version: How many new versions of the workflow have been created.
-## Configure scope
-After you define the basics tab, on the configure scope screen:
-The configure scope section determines **who** the workflow will run against.
+### Quick Action
- [![Screenshot showing the rule section](media/understanding-lifecycle-workflows/workflow-5.png)](media/understanding-lifecycle-workflows/workflow-5.png#lightbox)
+The **Quick Action** section allows you to quickly take action with your workflow. These quick actions can either run the workflow, or be used for history or editing purposes. You can take the following actions:
-You can add extra expressions using **And/Or** to create complex conditionals, and apply the workflow more granularly across your organization.
+- Run on Demand: Allows you to quickly run the workflow on demand. For more information on this process, see: [Run a workflow on-demand](on-demand-workflow.md)
+- Edit tasks: Allows you to add, delete, edit, or reorder tasks within the workflow. For more information on this process, see: [Edit the tasks of a workflow using the Azure portal](manage-workflow-tasks.md#edit-the-tasks-of-a-workflow-using-the-azure-portal)
+- View Workflow History: Allows you to view the history of the workflow. For more information on the three history perspectives, see: [Lifecycle Workflows history](lifecycle-workflow-history.md)
- [![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox)
+Actions taken from the overview of a workflow allow you to quickly complete tasks, which can normally be done via the manage section of a workflow.
-> [!NOTE]
-> For a full list of user properties supported by Lifecycle Workflows, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
+[![Update manage workflow section review.](media/understanding-lifecycle-workflows/workflow-11.png)](media/understanding-lifecycle-workflows/workflow-11.png#lightbox)
-For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
+## Workflow basics
+After selecting a template, on the basics screen:
+ - Provide the information that will be used in the description portion of the workflow.
+ - The trigger, which defines the when portion of the execution condition.
+
+ [![Basics of a workflow.](media/understanding-lifecycle-workflows/workflow-4.png)](media/understanding-lifecycle-workflows/workflow-4.png#lightbox)
-## Review tasks
-After defining the scope the review tasks screen will allow you to:
+## Trigger details
-[![Screenshot showing the review tasks screen.](media/understanding-lifecycle-workflows/workflow-6.png)](media/understanding-lifecycle-workflows/workflow-6.png#lightbox)
+The trigger of a workflow defines when a scheduled workflow will run for users in scope for the workflow. The trigger is a combination of a time-based attribute, and an offset value. For example, if the attribute is employeeHireDate and offsetInDays is -1, then the workflow should trigger one day before the employee hire date. The value can range between -60 and 60 days.
-You can use the **Add task** button to add extra tasks for the workflow. Select the additional tasks from the list provided.
+The time-based attribute can be one of two values, which are automatically chosen based on the template you select during the creation of your workflow. The two values are:
- [![Screenshot showing additional tasks section.](media/understanding-lifecycle-workflows/workflow-6.png)](media/understanding-lifecycle-workflows/workflow-6.png#lightbox)
+- employeeHireDate: If the template is a joiner workflow.
+- employeeLeaveDateTime: If the template is a leaver workflow.
-For more information, see: [Lifecycle workflow tasks](lifecycle-workflow-tasks.md)
+These two values must be set within Azure AD for users. For more information on this process, see [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md)
-## Review and create
+The offset determines how many days before or after the time-based attribute the workflow should be triggered. For example, if the attribute is employeeHireDate and offsetInDays is -7, then the workflow triggers one week (7 days) before the employee hire date. The offsetInDays value can range from -60 to 60 days.
-After reviewing the tasks on the review and create screen, you:
- Based on what was defined in the previous sections our workflow will now show:
-- It's named **on-board pre-hire employee**.
-- Based on the date in the **EmployeeHireDate** attribute, it will trigger **seven** (7) days prior to the date.
-- It will run against users who have **marketing** for the **department** attribute value.
-- It will generate a **TAP (temporary access password)**, and send an email to the user in the **manager** attribute of the pre-hire employee.
+## Configure scope
- [![Review and create workflow template.](media/understanding-lifecycle-workflows/workflow-7.png)](media/understanding-lifecycle-workflows/workflow-7.png#lightbox)
+[![Screenshot showing the rule section.](media/understanding-lifecycle-workflows/workflow-5.png)](media/understanding-lifecycle-workflows/workflow-5.png#lightbox)
-## Scheduling
-A workflow isn't scheduled to run by default. To enable the workflow, it needs to be scheduled.
+The scope defines for whom the scheduled workflow will run. Configuring this parameter allows you to further narrow down the users for whom the workflow is executed.
-To verify whether the workflow is scheduled, you can view the **Scheduled** column.
+The scope is made up of the following two parts:
-To enable the workflow, select the **Enable schedule** option for the workflow.
+- Scope type: Always preset as Rule based.
+- Rule: Where you can set expressions on user properties that define for whom the scheduled workflow will run. You can add extra expressions using **And, And not, Or, Or not** to create complex conditionals, and apply the workflow more granularly across your organization. Lifecycle Workflows supports a [rich set of user properties](/graph/api/resources/identitygovernance-rulebasedsubjectset#supported-user-properties-and-query-parameters) for configuring the scope.
-Once scheduled, the workflow will be evaluated every 3 hours to determine whether or not it should run based on the execution conditions.
+[![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox)
+
+For a detailed guide on setting the execution conditions for a workflow, see: [Create a lifecycle workflow.](create-lifecycle-workflow.md)
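To make the trigger and scope parameters concrete, the following minimal sketch shows the shape of a scheduled workflow's execution conditions as expressed through the Lifecycle Workflows Microsoft Graph API. The property and type names follow the Graph workflow resource, and the Marketing department rule and the seven-day offset are example values only; verify the names against the current Graph reference before relying on them.

```powershell
# Sketch only: the execution conditions of a joiner workflow, built as a PowerShell hashtable
# (example values; property names assumed from the Microsoft Graph Lifecycle Workflows resource).
$executionConditions = @{
    "@odata.type" = "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions"
    trigger = @{
        "@odata.type"      = "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger"
        timeBasedAttribute = "employeeHireDate"   # employeeLeaveDateTime for leaver templates
        offsetInDays       = -7                   # trigger one week before the hire date
    }
    scope = @{
        "@odata.type" = "#microsoft.graph.identityGovernance.ruleBasedSubjectSet"
        rule          = "(department eq 'Marketing')"
    }
}

# Render as JSON to compare with what the portal or Graph Explorer returns for a workflow
$executionConditions | ConvertTo-Json -Depth 5
```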
+
+## Scheduling
+
+While newly created workflows are enabled by default, scheduling is an option that must be enabled manually. To verify whether the workflow is scheduled, you can view the **Scheduled** column.
+
+Once scheduling is enabled, the workflow will be evaluated every three hours to determine whether or not it should run based on the execution conditions.
[![Workflow template schedule.](media/understanding-lifecycle-workflows/workflow-10.png)](media/understanding-lifecycle-workflows/workflow-10.png#lightbox)
+To view a detailed guide on scheduling a workflow, see: [Customize the schedule of workflows](customize-workflow-schedule.md).
### On-demand scheduling
A workflow can be run on-demand for testing or in situations where it's required
Use the **Run on demand** feature to execute the workflow immediately. The workflow must be enabled before you can run it on demand.

>[!NOTE]
-> A workflow that is run on demand for any user does not take into account whether or not a user meets the workflow's execution. It will apply the task regardless of whether the execution conditions are met or not.
+> A workflow that is run on demand for a user does not take into account whether or not a user meets the workflow's execution conditions. It will apply the tasks regardless of whether the execution conditions are met by the user or not.
-For more information, see [Run a workflow on-demand](on-demand-workflow.md)
+For more information, see: [Run a workflow on-demand](on-demand-workflow.md)
-## Managing the workflow
+## History
-By selecting on a workflow you created, you can manage the workflow.
+When you've selected a workflow, you can view its historical information through the lens of its users, runs, and tasks. Being able to view information specifically from these viewpoints allows you to quickly narrow down specific information about how a workflow was processed.
-You can select which portion of the workflow you wish to update or change using the left navigation bar. Select the section you wish to update.
+For more information, see: [Lifecycle Workflows history](lifecycle-workflow-history.md)
-[![Update manage workflow section review.](media/understanding-lifecycle-workflows/workflow-11.png)](media/understanding-lifecycle-workflows/workflow-11.png#lightbox)
-
-For more information, see [Manage lifecycle workflow properties](manage-workflow-properties.md)
## Versioning
-Workflow versions are separate workflows built using the same information of an original workflow, but with updated parameters so that they're reported differently within logs. Workflow versions can change the actions or even scope of an existing workflow.
-
-You can view versioning information by selecting **Versions** under **Manage** from the left.
+Workflow versions are separate workflows built using the same information of an original workflow, but with either the tasks or scope updated, so that they're reported differently within logs. Workflow versions can change the actions or even scope of an existing workflow.
[![Manage workflow versioning selection.](media/understanding-lifecycle-workflows/workflow-12.png)](media/understanding-lifecycle-workflows/workflow-12.png#lightbox)
-For more information, see [Lifecycle Workflow versioning](lifecycle-workflow-versioning.md)
-
-## Developer information
-This document covers the parts of a lifecycle workflow
+For more information, see: [Lifecycle Workflows Versioning](lifecycle-workflow-versioning.md)
-For more information, see the [Workflow API Reference](lifecycle-workflows-developer-reference.md)
## Next steps
- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Title: Plan and troubleshoot Azure User Principal name (UPN) changes description: Understand known issues and mitigations for UPN changes- Previously updated : 09/27/2022--- Last updated : 12/19/2022+++ # Plan and troubleshoot User Principal Name changes in Azure Active Directory
-A User Principal Name (UPN) is an attribute that is an internet communication standard for user accounts. A UPN consists of a UPN prefix (the user account name) and a UPN suffix (a DNS domain name). The prefix joins the suffix using the "\@" symbol. For example, someone@example.com. A UPN must be unique among all security principal objects within a directory forest.
-
-**This article assumes you're using UPN as the user identifier. It addresses planning for UPN changes, and recovering from issues that may result from UPN changes.**
+The User Principal Name (UPN) attribute is an internet communication standard for user accounts. A UPN consists of a prefix (user account name) and a suffix (DNS domain name). The prefix joins the suffix using the "\@" symbol. For example, someone@example.com. Ensure the UPN is unique among security principal objects in a directory forest.
> [!NOTE]
-> For developers, we recommend that you use the user objectID as the immutable identifier, rather than UPN or email addresses as their values can change.
--
-## Learn about UPNs and UPN changes
-Sign-in pages often prompt users to enter their email address when the required value is actually their UPN. Therefore, you should be sure to change users' UPN anytime their primary email address changes.
-
-Users' primary email addresses might change for many reasons:
+>This article assumes the UPN is the user identifier. It addresses UPN-change planning, and recovering from issues that might result from changes.
+>For developers, we recommend you use the user objectID as the immutable identifier, rather than UPN or email addresses.
-* company rebranding
+## UPN and their changes
-* employees moving to different company divisions
+Sign-in pages often prompt users to enter an email address, when the value is their UPN. Therefore, change a user's UPN when their primary email address changes. A user's primary email address might change for the following reasons:
-* mergers and acquisitions
+* Rebranding
+* Employee moves to another division
+* Mergers and acquisitions
+* Employee name change
-* employee name changes
+### UPN change types
-### Types of UPN changes
+Change the prefix, suffix, or both.
-You can change a UPN by changing the prefix, suffix, or both.
+* **Change the prefix**:
+ * BSimon@contoso.com becomes BJohnson@contoso.com
+ * Bsimon@contoso.com becomes Britta.Simon@contoso.com
+* **Change the suffix**:
+ * Britta.Simon@contoso.com becomes Britta.Simon@contosolabs.com, or
+ * Britta.Simon@corp.contoso.com becomes Britta.Simon@labs.contoso.com
-* **Changing the prefix**.
+We recommend you change a user's UPN when their primary email address changes. During initial synchronization from Active Directory to Azure AD, ensure user emails are identical to their UPNs.
- * For example, if a person's name changed, you might change their account name:
-BSimon@contoso.com to BJohnson@contoso.com
-
- * You might also change the corporate standard for prefixes:
-Bsimon@contoso.com to Britta.Simon@contoso.com
+### UPNs in Active Directory
-* **Changing the suffix**. <br>
+In Active Directory, the default UPN suffix is the domain DNS name where you created the user account. In most cases, you register this domain name as the enterprise domain. If you create the user account in the contoso.com domain, the default UPN is: username@contoso.com. However, you can add more UPN suffixes by using Active Directory domains and trusts. Learn more: [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md).
- For example, if a person changed divisions, you might change their domain:
+For example, if you add labs.contoso.com and change the user UPNs and email to reflect that, the result is: username@labs.contoso.com.
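As a sketch of that example, assuming the on-premises ActiveDirectory PowerShell module and the example names used in this article (contoso.com, labs.contoso.com, and the BSimon account), adding the suffix and applying it to a single user might look like this:

```powershell
# Sketch only: add an alternative UPN suffix to the forest, then update one user's UPN.
# Domain and account names are example values from this article.
Import-Module ActiveDirectory

Set-ADForest -Identity "contoso.com" -UPNSuffixes @{Add="labs.contoso.com"}
Set-ADUser -Identity "BSimon" -UserPrincipalName "BSimon@labs.contoso.com"
```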
- * Britta.Simon@contoso.com to Britta.Simon@contosolabs.com <br>
- Or<br>
- * Britta.Simon@corp.contoso.com to Britta.Simon@labs.contoso.com
+>[!IMPORTANT]
+> If you change the suffix in Active Directory, add and verify a matching custom domain name in Azure AD.
+> [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md)
-We recommend to change users' UPN every time their primary email address is updated.
+ ![Screenshot of the Add customer domain option, under Custom domain names.](./media/howto-troubleshoot-upn-changes/custom-domains.png)
-During the initial synchronization from Active Directory to Azure AD, ensure the users' emails are identical to their UPNs.
+### UPNs in Azure Active Directory
-### UPNs in Active Directory
+Users sign in to Azure AD with their userPrincipalName attribute value.
-In Active Directory, the default UPN suffix is the DNS name of the domain where you created the user account. In most cases, this is the domain name that you register as the enterprise domain on the internet. If you create the user account in the contoso.com domain, the default UPN is
+When you use Azure AD with on-premises Active Directory, user accounts are synchronized by using the Azure AD Connect service. The Azure AD Connect wizard uses the userPrincipalName attribute from the on-premises Active Directory as the UPN in Azure AD. You can change it to a different attribute in a custom installation.
-username@contoso.com
+>[!NOTE]
+> Define a process for when you update the User Principal Name (UPN) of a single user, or for your entire organization.
- However, you can [add more UPN suffixes](../fundamentals/add-custom-domain.md) by using Active Directory domains and trusts.
+When you synchronize user accounts from Active Directory to Azure AD, ensure the UPNs in Active Directory map to verified domains in Azure AD.
-For example, you may want to add labs.contoso.com and have the users' UPNs and email reflect that. They would then become
+ ![Screenshot of Active Director UPN suffixes and related domains.](./media/howto-troubleshoot-upn-changes/verified-domains.png)
-username@labs.contoso.com.
+If the userPrincipalName attribute value doesn't correspond to a verified domain in Azure AD, synchronization replaces the suffix with .onmicrosoft.com.
->[!IMPORTANT]
-> If you are [changing the suffix in Active Directory](../fundamentals/add-custom-domain.md), you must ensure that a matching custom domain name has been [added and verified on Azure AD](../fundamentals/add-custom-domain.md).
+### Bulk UPN change rollout
-![A screenshot of verified domains](./media/howto-troubleshoot-upn-changes/custom-domains.png)
+Use our best practices to test bulk UPN changes. Have a tested roll-back plan for reverting UPNs if issues can't be resolved. After your pilot is running, target small user sets, with organizational roles, and sets of apps or devices. This process helps you understand the user experience. Include this information in your communications to stakeholders and users.
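A minimal sketch of such a pilot, assuming the on-premises ActiveDirectory PowerShell module and the example domains used in this article (contoso.com changing to contosolabs.com), saves the existing UPNs first so the change can be rolled back:

```powershell
# Sketch only: pilot a bulk UPN suffix change for a small user set and keep a rollback file.
Import-Module ActiveDirectory

$pilotUsers = Get-ADUser -Filter 'Department -eq "Marketing"'

# Save the current UPNs so the change can be reverted if issues appear
$pilotUsers | Select-Object SamAccountName, UserPrincipalName |
    Export-Csv -Path .\upn-rollback.csv -NoTypeInformation

foreach ($user in $pilotUsers) {
    $newUpn = $user.UserPrincipalName -replace '@contoso\.com$', '@contosolabs.com'
    Set-ADUser -Identity $user -UserPrincipalName $newUpn
}
```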
-### UPNs in Azure Active Directory
+Learn more: [Azure Active Directory deployment plans](../fundamentals/active-directory-deployment-plans.md)
-Users sign in to Azure AD with the value in their userPrincipalName attribute.
+Create a procedure to change UPNs for individual users. We recommend a procedure that includes documentation about known issues and workarounds.
-When you use Azure AD in conjunction with your on-premises Active Directory, user accounts are synchronized by using the Azure AD Connect service. By default the Azure AD Connect wizard uses the userPrincipalName attribute from the on-premises Active Directory as the UPN in Azure AD. You can change it to a different attribute in a custom installation.
+Read the following sections for known issues and workarounds during UPN change.
-It's important that you have a defined process when you update a User Principal Name (UPN) of a single user, or for your entire organization.
+## Apps known issues and workarounds
-See the Known issues and workarounds in this document.
+Software as a service (SaaS) and line of business (LoB) applications often rely on UPNs to find users and store user profile information, including roles. Applications potentially affected by UPN changes use just-in-time (JIT) provisioning to create a user profile when users initially sign in to the app.
-When you're synchronizing user accounts from Active Directory to Azure AD, ensure that the UPNs in Active Directory map to verified domains in Azure AD.
+Learn more:
-![Screenshot that shows examples of UPNs mapped to verified Azure A D domains.](./media/howto-troubleshoot-upn-changes/verified-domains.png)
+* [What is SaaS?](https://azure.microsoft.com/overview/what-is-saas/)
+* [What is app provisioning in Azure Active Directory?](../app-provisioning/user-provisioning.md)
-If the value of the userPrincipalName attribute doesn't correspond to a verified domain in Azure AD, the synchronization process replaces the suffix with a default .onmicrosoft.com value.
+### Known issues
+Changing user UPN can break the relationship between the Azure AD user and the user profile on the application. If the application uses JIT provisioning, it might create a new user profile. Then, the application administrator makes manual changes to fix the relationship.
-### Roll-out bulk UPN changes
+### Workarounds
-Follow the [best practices for a pilot](../fundamentals/active-directory-deployment-plans.md) for bulk UPN changes. Also have a tested rollback plan for reverting UPNs if you find issues that can't be quickly resolved. Once your pilot is running, you can start targeting small sets of users with various organizational roles and their specific sets of apps or devices.
+Use automated app provisioning in Azure AD to create, maintain, and remove user identities in supported cloud applications. Configure automated user provisioning on your applications to update UPNs on the applications. Test the applications to validate they aren't affected by UPN changes. If you're a developer, consider adding SCIM support to your application to enable automatic user provisioning.
-Going through this first subset of users will give you a good idea of what users should expect as part of the change. Include this information on your user communications.
+Learn more:
-Create a defined procedure for changing UPNs on individual users as part of normal operations. We recommend having a tested procedure that includes documentation about known issues and workarounds.
+* [What is app provisioning in Azure Active Directory?](../app-provisioning/user-provisioning.md)
+* [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-The following sections detail potential known issues and workarounds when UPNs are changed.
+## Managed devices known issues and workarounds
-## Apps known issues and workarounds
+If you bring your devices to Azure AD, you maximize user productivity with single sign-on (SSO) across cloud and on-premises resources.
-[Software as a service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) and Line of Business (LoB) applications often rely on UPNs to find users and store user profile information, including roles. Applications that use [Just in Time provisioning](../app-provisioning/user-provisioning.md) to create a user profile when users sign in to the app for the first time can be affected by UPN changes.
+Learn more: [What is a device identity?](../devices/overview.md)
-**Known issue**<br>
-Changing a user's UPN could break the relationship between the Azure AD user and the user profile created on the application. If the application uses [Just in Time provisioning](../app-provisioning/user-provisioning.md), it might create a brand-new user profile. This will require the application administrator to make manual changes to fix this relationship.
+### Azure AD joined devices
-**Workaround**<br>
-[Azure AD Automated User Provisioning](../app-provisioning/user-provisioning.md) lets you automatically create, maintain, and remove your user identities in supported cloud applications. Configuring automated user provisioning on your applications automatically updates UPNs on the applications. Test the applications as part of the progressive rollout to validate that they are not impacted by UPN changes.
-If you are a developer, consider [adding SCIM support to your application](../app-provisioning/use-scim-to-provision-users-and-groups.md) to enable automatic user provisioning from Azure Active Directory.
+Azure AD joined devices are joined to Azure AD. Users sign in to the device using their organization identity.
-## Managed devices known issues and workarounds
+Learn more: [Azure AD joined devices](../devices/concept-azure-ad-join.md)
-By [bringing your devices to Azure AD](../devices/overview.md), you maximize your users' productivity through single sign-on (SSO) across your cloud and on-premises resources.
+### Known issues and resolution
-### Azure AD joined devices
+Users might experience single sign-on issues with applications that depend on Azure AD for authentication. This issue was fixed in the Windows 10 May-2020 update (2004).
-[Azure AD joined](../devices/concept-azure-ad-join.md) devices are joined directly to Azure AD and allow users to sign in to the device using their organization's identity.
+### Workaround
-**Known issues** <br>
-Users may experience single sign-on issues with applications that depend on Azure AD for authentication.
+Allow enough time for the UPN change to sync to Azure AD. After you verify the new UPN appears in the Azure portal, ask the user to select the "Other user" tile to sign in with their new UPN. You can verify using PowerShell. See [Get-AzureADUser](/powershell/module/azuread/get-azureaduser?view=azureadps-2.0&preserve-view=true). After users sign in with a new UPN, references to the old UPN might appear on the **Access work or school** Windows setting.
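For example, a quick check with the AzureAD PowerShell module might look like the following sketch, where BSimon@contosolabs.com is a placeholder for the user's new UPN:

```powershell
# Sketch only: confirm the new UPN has synchronized to Azure AD before asking the user to sign in.
Connect-AzureAD
Get-AzureADUser -ObjectId "BSimon@contosolabs.com" |
    Select-Object DisplayName, UserPrincipalName, ObjectId
```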
-**Resolution** <br>
-The issues mentioned on this section have been fixed on the Windows 10 May 2020 update (2004).
+ ![Screenshot of User-1 and Other-user domains, on the sign-in screen.](./media/howto-troubleshoot-upn-changes/other-user.png)
-**Workaround** <br>
-Allow enough time for the UPN change to sync to Azure AD. Once you verify that the new UPN is reflected on the Azure AD Portal, ask the user to select the "Other user" tile to sign in with their new UPN. You can also verify through [PowerShell](/powershell/module/azuread/get-azureaduser). After signing in with their new UPN, references to the old UPN might still appear on the "Access work or school" Windows setting.
+### Hybrid Azure AD joined devices
-![Screenshot of verified domains](./media/howto-troubleshoot-upn-changes/other-user.png)
+Hybrid Azure AD joined devices are joined to Active Directory and Azure AD. You can implement Hybrid Azure AD join if your environment has an on-premises Active Directory footprint.
+
+Learn more: [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)
+### Known issues and resolution
-### Hybrid Azure AD joined devices
+Windows 10 Hybrid Azure AD joined devices are likely to experience unexpected restarts and access issues. If users sign in to Windows before the new UPN synchronizes to Azure AD, or they continue using a Windows session, they might experience single sign-on (SSO) issues with apps that use Azure AD for authentication. This situation occurs if Conditional Access is configured to enforce the use of hybrid joined devices to access resources.
-[Hybrid Azure AD joined](../devices/concept-azure-ad-join-hybrid.md) devices are joined to Active Directory and Azure AD. You can implement Hybrid Azure AD join if your environment has an on-premises Active Directory footprint and you also want to benefit from the capabilities provided by Azure AD.
+In addition, the following message can appear, which forces a restart after one minute:
-**Known issues**
+Your PC will automatically restart in one minute. Windows ran into a problem and needs to restart. You should close this message now and save your work.
-Windows 10 Hybrid Azure AD joined devices are likely to experience unexpected restarts and access issues.
+This issue was fixed in the Windows 10 May-2020 update (2004).
-If users sign in to Windows before the new UPN has been synchronized to Azure AD, or continue to use an existing Windows session, they may experience single sign-on issues with applications that use Azure AD for authentication if Conditional Access has been configured to enforce the use of Hybrid Joined devices to access resources.
+### Workaround
-Additionally, the following message will appear, forcing a restart after one minute.
+1. Unjoin the device from Azure AD and restart.
+2. After the restart, the device automatically joins Azure AD again.
+3. The user signs in with the new UPN by selecting the **Other user** tile.
-"Your PC will automatically restart in one minute. Windows ran into a problem and needs to restart. You should close this message now and save your work".
+To unjoin a device from Azure AD, run the following command at a command prompt: `dsregcmd /leave`
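For example, the full sequence on the affected device might look like this sketch. dsregcmd ships with Windows; `dsregcmd /status` is included here only to confirm the join state after the automatic re-join, and running from an elevated prompt is assumed:

```powershell
# Sketch only: run on the affected device.
dsregcmd /leave      # unjoin the device from Azure AD, then restart the device

# After the restart and automatic re-join, confirm the device state
dsregcmd /status
```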
-**Resolution** <br>
-The issues mentioned on this section have been fixed on the Windows 10 May 2020 update (2004).
+>[!NOTE]
+>The user re-enrolls for Windows Hello for Business, if it's in use.
-**Workaround**
+>[!TIP]
+>Windows 7 and 8.1 devices are not affected by this issue.
-The device must be unjoined from Azure AD and restarted. After restart, the device will automatically join Azure AD again and the user must sign in using the new UPN by selecting the "Other user" tile.
-To unjoin a device from Azure AD, run the following command at a command prompt:
+## Mobile Application Management app protection policies
-**dsregcmd /leave**
+### Known issues
-The user will need to [re-enroll](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-whfb-provision) for Windows Hello for Business if it's being used.
-Windows 7 and 8.1 devices are not affected by this issue after UPN changes.
+Your organization might use Mobile Application Management (MAM) to protect corporate data in apps on user devices. MAM app protection policies aren't resilient during UPN changes, which can break the connection between MAM enrollments and active users in MAM integrated applications. This scenario could leave data in an unprotected state.
+Learn more:
-## Mobile Application Management (MAM) app protection policies known issues and workarounds
+* [App protection policies overview](/mem/intune/apps/app-protection-policy)
+* [Frequently asked questions about MAM and app protection](/mem/intune/apps/mam-faq)
-**Known Issues**
+### Workaround
-Your organization may use [MAM app protection policies](/mem/intune/apps/app-protection-policy) to protect corporate data in apps on end users' devices.
-MAM app protection policies are currently not resiliant to UPN changes. UPN changes can break the connection between existing MAM enrollments and active users in MAM integrated applications, resulting in undefined behavior. This could leave data in an unprotected state.
+IT admins can wipe data from affected devices after UPN changes. This forces users to reauthenticate and reenroll with new UPNs.
-**Work Around**
-
-IT admins should [issue a selective wipe](/mem/intune/apps/apps-selective-wipe) to impacted users following UPN changes. This will force impacted end users to reauthenticate and reenroll with their new UPNs.
+Learn more: [How to wipe only corporate data from Intune-managed apps](/mem/intune/apps/apps-selective-wipe)
## Microsoft Authenticator known issues and workarounds
-Your organization might require the use of the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to sign in and access organizational applications and data. Although a username might appear in the app, the account isn't set up to function as a verification method until the user completes the registration process.
-
-The [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) has four main functions:
+Your organization might require the Microsoft Authenticator app to sign in and access applications and data. Although a username might appear in the app, the account isn't a verification method until the user completes registration.
-* Multi-factor authentication via a push notification or verification code
+Learn more: [How to use the Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc)
-* Act as an Authentication Broker on iOS and Android devices to provide single sign-on for applications that use [Brokered authentication](../develop/msal-android-single-sign-on.md)
+Microsoft Authenticator app has four main functions:
-* Device registration (also known as Workplace Join) to Azure AD, which is a requirement for other features like Intune App Protection and Device Enrolment/Management,
+* **Multi-factor authentication** with push notification or verification code
+* **Authentication broker** on iOS and Android devices to provide SSO for applications that use brokered authentication
+ * [Enable cross-app SSO on Android using MSAL](../develop/msal-android-single-sign-on.md)
+* **Device registration**, or workplace join, to Azure AD, which is a requirement for Intune App Protection and Device Enrollment/Management
+* **Phone sign in**, which requires MFA and device registration
-* Phone sign in, which requires MFA and device registration.
+### Multi-factor authentication with Android devices
-### Multi-Factor Authentication with Android devices
+Use the Microsoft Authenticator app for out-of-band verification. Instead of an automated phone call, or SMS, to the user during sign-in, MFA pushes a notification to the Microsoft Authenticator app on the user device. The user selects **Approve**, or the user enters a PIN or biometric and selects **Authenticate**.
-The Microsoft Authenticator app offers an out-of-band verification option. Instead of placing an automated phone call or SMS to the user during sign-in, [Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) pushes a notification to the Microsoft Authenticator app on the user's smartphone or tablet. The user simply taps Approve (or enters a PIN or biometric and taps "Authenticate") in the app to complete their sign-in.
+Learn more: [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
**Known issues**
-When you change a user's UPN, the old UPN still displays on the user account and a notification might not be received. [Verification codes](https://support.microsoft.com/account-billing/common-problems-with-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd) continue to work.
+When you change a user's UPN, the old UPN still appears on the user account and a notification might not be received. Use verification codes.
+
+Learn more: [Common questions about the Microsoft Authenticator app](/account-billing/common-problems-with-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd)
**Workaround**
-If a notification is received, instruct the user to dismiss the notification, open the Authenticator app, tap the "Check for notifications" option and approve the MFA prompt. After this, the UPN displayed on the account will be updated. Note the updated UPN might be displayed as a new account, this is due to other Authenticator functionality being used. For more information refer to the additional known issues in this article.
+If a notification appears, instruct the user to dismiss it, open the Authenticator app, select **Check for notifications**, and approve the MFA prompt. The UPN on the account updates. Note the updated UPN might appear as a new account. This change is due to other Authenticator functionality. For more information, see the known issues in this article.
### Brokered authentication
-On Android and iOS brokers like Microsoft Authenticator enable:
+On Android and iOS, brokers like Microsoft Authenticator enable:
+
+* **SSO** - Users don't sign in to each application
+* **Device identification** - The broker accesses the device certificate created on the device when it was workplace-joined
+* **Application identification verification** - When an application calls the broker, it passes its redirect URL, and the broker verifies it
-* Single sign-on (SSO) - Your users won't need to sign in to each application.
+In addition, applications can participate in other features:
-* Device identification - The broker accesses the device certificate created on the device when it was workplace joined.
+* [Azure AD Conditional Access documentation](../conditional-access/index.yml)
+* [Use Microsoft Authenticator or Intune Company Portal on Xamarin applications](../develop/msal-net-use-brokers-with-xamarin-apps.md).
-* Application identification verification - When an application calls the broker, it passes its redirect URL, and the broker verifies it.
+### Known issues
-Additionally, it allows applications to participate in more advanced features such as [Conditional Access](../conditional-access/index.yml), and supports [Microsoft Intune scenarios](../develop/msal-net-use-brokers-with-xamarin-apps.md).
+Due to a mismatch between the login_hint passed by the application and the UPN stored on the broker, the user experiences more interactive authentication prompts on new applications that use broker-assisted sign-in.
-**Known issues**<br>
-User is presented with more interactive authentication prompts on new applications that use broker-assisted sign-in due to a mismatch between the login_hint passed by the application and the UPN stored on the broker.
+### Workaround
-**Workaround** <br> The user needs to manually remove the account from Microsoft Authenticator and start a new sign-in from a broker-assisted application. The account will be automatically added after the initial authentication.
+The user manually removes the account from Microsoft Authenticator and starts a new sign-in from a broker-assisted application. The account is added after initial authentication.
### Device registration
-The Microsoft Authenticator app is responsible for registering the device to Azure AD. Device registration allows the device to authenticate to Azure AD and is a requirement for the following scenarios:
+The Microsoft Authenticator app registers the device in Azure AD, which allows the device to authenticate to Azure AD. This registration is a requirement for:
-* Intune App Protection
+* Intune app protection
+* Intune device enrollment
+* Phone sign-in
-* Intune Device Enrollment
+### Known issues
-* Phone Sign In
+If you change the UPN, a new account with the new UPN appears on the Microsoft Authenticator app. The account with the old UPN remains listed. Also, the old UPN appears on the Device Registration section in app settings. There's no change in functionality of Device Registration or dependent scenarios.
-**Known issues**<br>
-When you change the UPN, a new account with the new UPN appears listed on the Microsoft Authenticator app, while the account with the old UPN is still listed. Additionally, the old UPN displays on the Device Registration section on the app settings. There is no change in the normal functionality of Device Registration or the dependant scenarios.
+### Workaround
-**Workaround** <br>
-To remove all references to the old UPN on the Microsoft Authenticator app, instruct the user to manually remove both the old and new accounts from Microsoft Authenticator, re-register for MFA and rejoin the device.
+To remove references to the old UPN on the Microsoft Authenticator app, the user removes the old and new accounts from Microsoft Authenticator, re-registers for MFA, and rejoins the device.
### Phone sign-in
-Phone sign-in allows users to sign in to Azure AD without a password. To enable phone sign-in, the user needs to register for MFA using the Authenticator app and then enable phone sign-in directly on Authenticator. As part of the configuration, the device registers with Azure AD.
+Phone sign-in allows users to sign in to Azure AD without a password. To enable this feature, the user registers for MFA using the Authenticator app and then enables phone sign-in on Authenticator. The device registers with Azure AD.
+
+### Known issues
+
+Users can't use phone sign-in because they don't receive a notification. If the user selects **Check for Notifications**, an error appears.
-**Known issues** <br>
-Users are not able to use Phone sign-in because they do not receive any notification. If the user taps on Check for Notifications, they get an error.
+### Workaround
-**Workaround**<br>
-The user needs to select the drop-down menu on the account enabled for Phone sign-in and select Disable phone sign-in. If desired, Phone sign-in can be enabled again.
+The user selects the drop-down menu on the account enabled for phone sign-in. Next, the user selects **Disable phone sign-in**. Phone sign-in can be re-enabled.
-## Security Key (FIDO2) known issues and workarounds
+## Security key (FIDO2) known issues and workarounds
-**Known issues** <br>
-When multiple users are registered on the same key, the sign in screen shows an account selection page where the old UPN is displayed. Sign-ins using Security Keys are not affected by UPN changes.
+### Known issues
-**Workaround**<br>
-To remove references to old UPNs, users must [reset the security key and re-register](../authentication/howto-authentication-passwordless-security-key.md#known-issues).
+When multiple users are registered on the same key, the sign-in screen shows account selection where the old UPN appears. Sign-in with security keys isn't affected by UPN changes.
+
+### Workaround
+
+To remove references to old UPNs, users reset the security key and re-register.
+
+Learn more: [Enable passwordless security key sign-in, Known issue, UPN changes](../authentication/howto-authentication-passwordless-security-key.md#known-issues)
## OneDrive known issues and workarounds

OneDrive users are known to experience issues after UPN changes.
-For more information, see
-[How UPN changes affect the OneDrive URL and OneDrive features](/onedrive/upn-changes).
+
+Learn more: [How UPN changes affect the OneDrive URL and OneDrive features](/sharepoint/upn-changes)
## Teams Meeting Notes known issues and workarounds
-Teams Meeting Notes is a feature that allows users to take notes during their Teams meeting. This support document describes the feature in detail: [Take meeting notes in Teams](https://support.microsoft.com/office/take-meeting-notes-in-teams-3eadf032-0ef8-4d60-9e21-0691d317d103).
+Use Teams Meeting Notes to take and share notes.
-**Known issues** <br>
-When a user's UPN changes, the meeting notes created under the old UPN are no longer accessible by that user or any other user via Microsoft Teams or the Meeting Notes URL.
+Learn more: [Take meeting notes in Teams](/office/take-meeting-notes-in-teams-3eadf032-0ef8-4d60-9e21-0691d317d103).
-**Workaround**<br>
-After the UPN change, users can recover the meeting notes they lost access to by downloading them from OneDrive (navigate to My Files -> Microsoft Teams Data -> Wiki). New meeting notes created after the UPN change are not affected and should behave as normal.
+### Known issues
+When a user's UPN changes, meeting notes created under the old UPN are not accessible with Microsoft Teams or the Meeting Notes URL.
+### Workaround
+After the UPN change, users can recover meeting notes by downloading them from OneDrive:
+
+1. Go to **My Files**.
+2. Select **Microsoft Teams Data**.
+3. Select **Wiki**.
+
+New meeting notes created after the UPN change aren't affected.
## Next steps
-See these resources:
* [Azure AD Connect: Design concepts](./plan-connect-design-concepts.md)
* [Azure AD UserPrincipalName population](./plan-connect-userprincipalname.md)
* [Microsoft identity platform ID tokens](../develop/id-tokens.md)
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 10/04/2022 Last updated : 01/06/2023
Before organizations enable remediation policies, they may want to [investigate]
1. Under **Configure user risk levels needed for policy to be enforced**, select **High**. ([This guidance is based on Microsoft recommendations and may be different for each organization](#choosing-acceptable-risk-levels)) 1. Select **Done**. 1. Under **Access controls** > **Grant**.
- 1. Select **Grant access**, **Require password change**.
+ 1. Select **Grant access**, **Require multifactor authentication** and **Require password change**.
1. Select **Select**. 1. Under **Session**. 1. Select **Sign-in frequency**.
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/overview-identity-protection.md
Conditional Access administrators can create policies that factor in user or sig
| Capability | Details | Azure AD Free / Microsoft 365 Apps | Azure AD Premium P1 | Azure AD Premium P2 |
| --- | --- | --- | --- | --- |
-| Risk policies | User risk policy (via Identity Protection) | No | No | Yes |
-| Risk policies | Sign-in risk policy (via Identity Protection or Conditional Access) | No | No | Yes |
+| Risk policies | Sign-in and user risk policies (via Identity Protection or Conditional Access) | No | No | Yes |
| Security reports | Overview | No | No | Yes |
| Security reports | Risky users | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Full access |
| Security reports | Risky sign-ins | Limited Information. No risk detail or risk level is shown. | Limited Information. No risk detail or risk level is shown. | Full access |
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
In this scenario, Azure Active Directory (Azure AD) signs the user in. But the application displays an error message and doesn't let the user finish the sign-in flow. The problem is that the app didn't accept the response that Azure AD issued.
-There are several possible reasons why the app didn't accept the response from Azure AD. If there is an error message or code displayed, use the following resources to diagnose the error:
+There are several possible reasons why the app didn't accept the response from Azure AD. If there's an error message or code displayed, use the following resources to diagnose the error:
* [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md)
To change the User Identifier value, follow these steps:
### Change the NameID format
-If the application expects another format for the **NameID** (User Identifier) attribute, see [Editing nameID](../develop/active-directory-saml-claims-customization.md#editing-nameid) to change the NameID format.
+If the application expects another format for the **NameID** (User Identifier) attribute, see the [Edit nameID](../develop/active-directory-saml-claims-customization.md#edit-nameid) section to change the NameID format.
Azure AD selects the format for the **NameID** attribute (User Identifier) based on the value that's selected or the format that's requested by the app in the SAML AuthRequest. For more information, see the "NameIDPolicy" section of [Single sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md#nameidpolicy).
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Title: 'Migrate application authentication to Azure Active Directory'
-description: This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD.
+description: Describes in detail the benefits and what you need to do to migrate your application authentication to Azure Active Directory (Azure AD).
Previously updated : 02/05/2021 Last updated : 01/06/2023
# Migrate application authentication to Azure Active Directory
-## About this paper
+This article describes the benefits and how to plan for migrating your application authentication to Azure AD. It's intended for Azure administrators and identity professionals.
-This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD. It is designed for Azure administrators and identity professionals.
-
-Breaking the process into four phases, each with detailed planning and exit criteria, it is designed to help you plan your migration strategy and understand how Azure AD authentication supports your organizational goals.
+The process is broken into four phases, each with detailed planning and exit criteria, and designed to help you plan your migration strategy and understand how Azure AD authentication supports your organizational goals.
## Introduction

Today, your organization requires a slew of applications (apps) for users to get work done. You likely continue to add, develop, or retire apps every day. Users access these applications from a vast range of corporate and personal devices, and locations. They open apps in many ways, including:

-- through a company homepage or portal
-- by bookmarking on their browsers
-- via a vendor's URL for software as a service (SaaS) apps
-- links pushed directly to user's desktops or mobile devices via a mobile device/application management (MDM/MAM) solution
+- Through a company homepage or portal
+- By bookmarking on their browsers
+- Through a vendor's URL for software as a service (SaaS) apps
+- Links pushed directly to user's desktops or mobile devices via a mobile device/application management (MDM/MAM) solution
Your applications are likely using the following types of authentication:

- On-premises federation solutions (such as Active Directory Federation Services (ADFS) and Ping)
- Active Directory (such as Kerberos Auth and Windows-Integrated Auth)
- Other cloud-based identity and access management (IAM) solutions (such as Okta or Oracle)
- On-premises web infrastructure (such as IIS and Apache)
- Cloud-hosted infrastructure (such as Azure and AWS)
-**To ensure that the users can easily and securely access applications, your goal is to have a single set of access controls and policies across your on-premises and cloud environments.**
+To ensure that the users can easily and securely access applications, your goal is to have a single set of access controls and policies across your on-premises and cloud environments.
[Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your people, partners, and customers a single identity to access the applications they want and collaborate from any platform and device.
-![A diagram of Azure Active Directory connectivity](media/migrating-application-authentication-to-azure-active-directory-1.jpg)
+![A diagram of Azure AD connectivity.](media/migrating-application-authentication-to-azure-active-directory-1.jpg)
-Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD enables you get the benefits these capabilities provide.
+Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD gets you the benefits that these capabilities provide.
You can find more migration resources at [https://aka.ms/migrateapps](./migration-resources.md)
Moving app authentication to Azure AD will help you manage risk and cost, increa
Safeguarding your apps requires that you have a full view of all the risk factors. Migrating your apps to Azure AD consolidates your security solutions. With it you can:

- Improve secure user access to applications and associated corporate data using [Conditional Access policies](../conditional-access/overview.md), [Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md), and real-time risk-based [Identity Protection](../identity-protection/overview-identity-protection.md) technologies.
- Protect privileged users' access to your environment with [Just-In-Time](../../azure-resource-manager/managed-applications/request-just-in-time-access.md) admin access.
- Use the [multi-tenant, geo-distributed, high availability design of Azure AD](https://cloudblogs.microsoft.com/enterprisemobility/2014/09/02/azure-ad-under-the-hood-of-our-geo-redundant-highly-available-distributed-cloud-directory/) for your most critical business needs.
- Protect your legacy applications with one of our [secure hybrid access partner integrations](https://aka.ms/secure-hybrid-access) that you may have already deployed.

### Manage cost
-Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via Microsoft 365 licenses, there is no reason to pay the added cost of another IAM solution.
+Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via Microsoft 365 licenses, there's no reason to pay the added cost of another IAM solution.
-**With Azure AD, you can reduce infrastructure costs by:**
+With Azure AD, you can reduce infrastructure costs by:
- Providing secure remote access to on-premises apps using [Azure AD Application Proxy](../app-proxy/application-proxy.md).
-- Decoupling apps from the on-prem credential approach in your tenant by [setting up Azure AD as the trusted universal identity provider](../hybrid/plan-connect-user-signin.md#choosing-the-user-sign-in-method-for-your-organization).
+- Decoupling apps from the on-premises credential approach in your tenant by [setting up Azure AD as the trusted universal identity provider](../hybrid/plan-connect-user-signin.md#choosing-the-user-sign-in-method-for-your-organization).
### Increase productivity

Economics and security benefits drive organizations to adopt Azure AD, but full adoption and compliance are more likely if users benefit too. With Azure AD, you can:

-- Improve end-user [Single Sign-On (SSO)](./what-is-single-sign-on.md) experience through seamless and secure access to any application, from any device and any location.
+- Improve end-user [single sign-on (SSO)](./what-is-single-sign-on.md) experience through seamless and secure access to any application, from any device and any location.
- Use self-service IAM capabilities, such as [Self-Service Password Resets](../authentication/concept-sspr-howitworks.md) and [Self-Service Group Management](../enterprise-users/groups-self-service-management.md).
- Reduce administrative overhead by managing only a single identity for each user across cloud and on-premises environments:
  - [Automate provisioning](../app-provisioning/user-provisioning.md) of user accounts (in the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps)) based on Azure AD identities
  - Access all your apps from the MyApps panel in the [Azure portal](https://portal.azure.com/)
  - Enable developers to secure access to their apps and improve the end-user experience by using the [Microsoft Identity Platform](../develop/v2-overview.md) with the Microsoft Authentication Library (MSAL).
- Empower your partners with access to cloud resources using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Cloud resources remove the overhead of configuring point-to-point federation with your partners.

### Address compliance and governance
-Ensure compliance with regulatory requirements by enforcing corporate access policies and monitoring user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that use [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
+To comply with regulatory requirements, enforce corporate access policies and monitor user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that use [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
## Plan your migration phases and project strategy
-When technology projects fail, it is often due to mismatched expectations, the right stakeholders not being involved, or a lack of communication. Ensure your success by planning the project itself.
+When technology projects fail, it's often due to mismatched expectations, the right stakeholders not being involved, or a lack of communication. Ensure your success by planning the project itself.
### The phases of migration
The following table includes the key roles and their contributions:
| Role | Contributions |
| - | - |
| **Project Manager** | Project coach accountable for guiding the project, including:<br /> - gain executive support<br /> - bring in stakeholders<br /> - manage schedules, documentation, and communications |
-| **Identity Architect / Azure AD App Administrator** | They are responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the pre-production and production environments |
+| **Identity Architect / Azure AD App Administrator** | They're responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the pre-production and production environments |
| **On premises AD operations team** | The organization that manages the different on-premises identity sources such as AD forests, LDAP directories, HR systems etc.<br /> - perform any remediation tasks needed before synchronizing<br /> - Provide the service accounts required for synchronization<br /> - provide access to configure federation to Azure AD |
| **IT Support Manager** | A representative from the IT support organization who can provide input on the supportability of this change from a helpdesk perspective. |
| **Security Owner** | A representative from the security team that can ensure that the plan will meet the security requirements of your organization. |
The following table includes the key roles and their contributions:
### Plan communications
-Effective business engagement and communication is the key to success. It is important to give stakeholders and end-users an avenue to get information and keep informed of schedule updates. Educate everyone about the value of the migration, what the expected timelines are, and how to plan for any temporary business disruption. Use multiple avenues such as briefing sessions, emails, one-to-one meetings, banners, and townhalls.
+Effective business engagement and communication are the keys to success. It's important to give stakeholders and end-users an avenue to get information and keep informed of schedule updates. Educate everyone about the value of the migration, what the expected timelines are, and how to plan for any temporary business disruption. Use multiple avenues such as briefing sessions, emails, one-to-one meetings, banners, and townhalls.
Based on the communication strategy that you have chosen for the app, you may want to remind users of the pending downtime. You should also verify that there are no recent changes or business impacts that would require you to postpone the deployment.
-In the following table you will find the minimum suggested communication to keep your stakeholders informed:
+In the following table you'll find the minimum suggested communication to keep your stakeholders informed:
-**Plan phases and project strategy**:
+#### Plan phases and project strategy
| Communication | Audience |
| - | - |
The migration states you might consider using are as follows:
| **Configuration in Progress** | Develop the changes necessary to manage authentication against Azure AD |
| **Test Configuration Successful** | Evaluate the changes and authenticate the app against the test Azure AD tenant in the test environment |
| **Production Configuration Successful** | Change the configurations to work against the production AD tenant and assess the app authentication in the test environment |
-| **Complete / Sign Off** | Deploy the changes for the app to the production environment and execute the against the production Azure AD tenant |
+| **Complete / Sign Off** | Deploy the changes for the app to the production environment and execute against the production Azure AD tenant |
This will ensure app owners know the app migration and testing schedule, when their apps are up for migration, and the results from other apps that have already been migrated. You might also consider providing links to your bug tracker database so that owners can file and view issues for apps that are being migrated.
The following are our customer and partner's success stories, and suggested be
### Find your apps
-The first decision point in an application migration is which apps to migrate, which if any should remain, and which apps to deprecate. There is always an opportunity to deprecate the apps that you will not use in your organization. There are several ways to find apps in your organization. **While discovering apps, ensure you are including in-development and planned apps. Use Azure AD for authentication in all future apps.**
+The first decision point in an application migration is which apps to migrate, which if any should remain, and which apps to deprecate. There is always an opportunity to deprecate the apps that you will not use in your organization. There are several ways to find apps in your organization. While discovering apps, ensure you are including in-development and planned apps. Use Azure AD for authentication in all future apps.
-**Using Active Directory Federation Services (AD FS) To gather a correct app inventory:**
+Using Active Directory Federation Services (AD FS) to gather a correct app inventory:
- **Use Azure AD Connect Health.** If you have an Azure AD Premium license, we recommend deploying [Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) to analyze the app usage in your on-premises environment. You can use the [ADFS application report](./migrate-adfs-application-activity.md) (preview) to discover ADFS applications that can be migrated and evaluate the readiness of the application to be migrated. After completing your migration, deploy [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery) that allows you to continuously monitor Shadow IT in your organization once you're in the cloud.
For other identity providers (such as Okta or Ping), you can use their tools to
In the cloud environment, you need rich visibility, control over data travel, and sophisticated analytics to find and combat cyber threats across all your cloud services. You can gather your cloud app inventory using the following tools:

- **Cloud Access Security Broker (CASB)** – A [CASB](/cloud-app-security/) typically works alongside your firewall to provide visibility into your employees' cloud application usage and helps you protect your corporate data from cybersecurity threats. The CASB report can help you determine the most used apps in your organization, and the early targets to migrate to Azure AD.
- **Cloud Discovery** - By configuring [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery), you gain visibility into the cloud app usage, and can discover unsanctioned or Shadow IT apps.
- **APIs** - For apps connected to cloud infrastructure, you can use the APIs and tools on those systems to begin to take an inventory of hosted apps. In the Azure environment:
  - Use the [Get-AzureWebsite](/powershell/module/servicemanagement/azure.service/get-azurewebsite) cmdlet to get information about Azure websites.
  - Use the [Get-AzureRMWebApp](/powershell/module/azurerm.websites/get-azurermwebapp) cmdlet to get information about your Azure Web Apps.
  - You can find all the apps running on Microsoft IIS from the Windows command line using [AppCmd.exe](/iis/get-started/getting-started-with-iis/getting-started-with-appcmdexe#working-with-sites-applications-virtual-directories-and-application-pools).
  - Use [Applications](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#application-entity) and [Service Principals](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#serviceprincipal-entity) to get information on an app and app instance in a directory in Azure AD (see the sketch after this list).
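A minimal, hedged sketch of that API-based inventory using Microsoft Graph (a reasonable modern stand-in for the Applications and Service Principals entities linked above); it assumes you already hold a token with the `Application.Read.All` permission, and `graph_token` is a placeholder.

```python
# Minimal sketch: list app registrations and service principals with Microsoft Graph.
# Assumes graph_token holds a token with Application.Read.All (hypothetical placeholder).
import requests

graph_token = "<access-token>"
headers = {"Authorization": f"Bearer {graph_token}"}

def list_all(url):
    """Follow @odata.nextLink paging and return every object in a Graph collection."""
    items = []
    while url:
        page = requests.get(url, headers=headers).json()
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return items

apps = list_all("https://graph.microsoft.com/v1.0/applications?$select=displayName,appId")
service_principals = list_all("https://graph.microsoft.com/v1.0/servicePrincipals?$select=displayName,appId")
print(f"{len(apps)} app registrations, {len(service_principals)} service principals")
```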
### Using manual processes
Once you have taken the automated approaches described above, you will have a good handle on your applications. However, you might consider doing the following to ensure you have good coverage across all user access areas:

- Contact the various business owners in your organization to find the applications in use in your organization.
- Run an HTTP inspection tool on your proxy server, or analyze proxy logs, to see where traffic is commonly routed.
- Review weblogs from popular company portal sites to see what links users access the most.
- Reach out to executives or other key business members to ensure that you have covered the business-critical apps.

### Type of apps to migrate
Once you have taken the automated approaches described above, you will have a go
Once you find your apps, you will identify these types of apps in your organization:

- Apps that use modern authentication protocols already
- Apps that use legacy authentication protocols that you choose to modernize
- Apps that use legacy authentication protocols that you choose NOT to modernize
- New Line of Business (LoB) apps

### Apps that use modern authentication already
In addition to the choices in the [Azure AD app gallery,](https://azuremarketpla
For legacy apps that you want to modernize, moving to Azure AD for core authentication and authorization unlocks all the power and data-richness that the [Microsoft Graph](https://developer.microsoft.com/graph/gallery/?filterBy=Samples,SDKs) and [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence?rtc=1) have to offer.
-We recommend **updating the authentication stack code** for these applications from the legacy protocol (such as Windows-Integrated Authentication, Kerberos Constrained Delegation, HTTP Headers-based authentication) to a modern protocol (such as SAML or OpenID Connect).
+We recommend updating the authentication stack code for these applications from the legacy protocol (such as Windows-Integrated Authentication, Kerberos Constrained Delegation, HTTP Headers-based authentication) to a modern protocol (such as SAML or OpenID Connect).
### Legacy apps that you choose NOT to modernize

For certain apps using legacy authentication protocols, sometimes modernizing their authentication is not the right thing to do for business reasons. These include the following types of apps:

- Apps kept on-premises for compliance or control reasons.
- Apps connected to an on-premises identity or federation provider that you do not want to change.
- Apps developed using on-premises authentication standards that you have no plans to move.

Azure AD can bring great benefits to these legacy apps, as you can enable modern Azure AD security and governance features like [Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md), [Conditional Access](../conditional-access/overview.md), [Identity Protection](../identity-protection/index.yml), [Delegated Application Access](./access-panel-manage-self-service-access.md), and [Access Reviews](../governance/manage-user-access-with-access-reviews.md#create-and-perform-an-access-review) against these apps without touching the app at all!
You usually develop LoB apps for your organization's in-house use. If you have
Apps without clear owners and clear maintenance and monitoring present a security risk for your organization. Consider deprecating applications when:

-- their **functionality is highly redundant** with other systems • there is **no business owner**
-- there is clearly **no usage**.
+- Their **functionality is highly redundant** with other systems
+- There is **no business owner**
+- There is clearly **no usage**
We recommend that you **do not deprecate high impact, business-critical applications**. In those cases, work with business owners to determine the right strategy.
We recommend that you **do not deprecate high impact, business-critical applicat
You are successful in this phase with:

- A good understanding of the systems in scope for your migration (that you can retire once you have moved to Azure AD)
- A list of apps that includes:
  - What systems those apps connect to
First, start by gathering key details about your applications. The [Application
Information that is important to making your migration decision includes:

- **App name** – what is this app known as to the business?
- **App type** – is it a third-party SaaS app? A custom line-of-business web app? An API?
- **Business criticality** – is its criticality high? Low? Or somewhere in between?
- **User access volume** – does everyone access this app or just a few people?
- **Planned lifespan** – how long will this app be around? Less than six months? More than two years?
- **Current identity provider** – what is the primary IdP for this app? Or does it rely on local storage?
- **Method of authentication** – does the app authenticate using open standards?
- **Whether you plan to update the app code** – is the app under planned or active development?
- **Whether you plan to keep the app on-premises** – do you want to keep the app in your datacenter long term?
- **Whether the app depends on other apps or APIs** – does the app currently call into other apps or APIs?
- **Whether the app is in the Azure AD gallery** – is the app currently already integrated with the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps)?

Other data that will help you later, but that you do not need to make an immediate migration decision, includes:

- **App URL** – where do users go to access the app?
- **App description** – what is a brief description of what the app does?
- **App owner** – who in the business is the main POC for the app?
- **General comments or notes** – any other general information about the app or business ownership

Once you have classified your application and documented the details (an illustrative record sketch follows), be sure to gain business owner buy-in to your planned migration strategy.
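One illustrative way to capture these attributes per application is a simple structured record. Every field name below is hypothetical and can be renamed to match your own tracking spreadsheet or database.

```python
# Hypothetical inventory record for one application; fields mirror the attributes listed above.
from dataclasses import dataclass

@dataclass
class AppInventoryRecord:
    app_name: str
    app_type: str                  # e.g. "SaaS", "LoB web app", "API"
    business_criticality: str      # "high" / "medium" / "low"
    user_access_volume: str        # "all users", "one team", ...
    planned_lifespan: str          # "< 6 months", "> 2 years", ...
    current_identity_provider: str
    authentication_method: str     # "SAML", "OIDC", "header-based", ...
    code_update_planned: bool
    stays_on_premises: bool
    depends_on_other_apps: bool
    in_azure_ad_gallery: bool
    app_url: str = ""
    app_owner: str = ""
    notes: str = ""

example = AppInventoryRecord(                 # example values are placeholders
    app_name="Contoso Expenses",
    app_type="LoB web app",
    business_criticality="high",
    user_access_volume="all users",
    planned_lifespan="> 2 years",
    current_identity_provider="AD FS",
    authentication_method="SAML",
    code_update_planned=False,
    stays_on_premises=True,
    depends_on_other_apps=True,
    in_azure_ad_gallery=False,
)
```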
Once you have classified your application and documented the details, then be su
The app(s) you select for the pilot should represent the key identity and security requirements of your organization, and you must have clear buy-in from the application owners. Pilots typically run in a separate test environment. See [best practices for pilots](../fundamentals/active-directory-deployment-plans.md#best-practices-for-a-pilot) on the deployment plans page.
-**Don't forget about your external partners.** Make sure that they participate in migration schedules and testing. Finally, ensure they have a way to access your helpdesk if there were breaking issues.
+Don't forget about your external partners. Make sure that they participate in migration schedules and testing. Finally, ensure they have a way to access your helpdesk if there are breaking issues.
### Plan for limitations
-While some apps are easy to migrate, others may take longer due to multiple servers or instances. For example, SharePoint migration may take longer due to custom sign in pages.
+While some apps are easy to migrate, others may take longer due to multiple servers or instances. For example, SharePoint migration may take longer due to custom sign-in pages.
Many SaaS app vendors charge for changing the SSO connection. Check with them and plan for this.
Most organizations have specific requirements about identities and data protecti
You can use this information to protect access to all services integrated with Azure AD. These recommendations are aligned with Microsoft Secure Score and the [identity score in Azure AD](../fundamentals/identity-secure-score.md). The score helps you to:

- Objectively measure your identity security posture
- Plan identity security improvements
- Review the success of your improvements

This will also help you implement the [five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md). Use the guidance as a starting point for your organization and adjust the policies to meet your organization's specific requirements.
You are successful in this phase when you:
- Have prioritized apps based on business criticality, usage volume, and lifespan
- Have selected apps that represent your requirements for a pilot
- Business-owner buy-in to your prioritization and strategy
- Understand your security posture needs and how to implement them

## Phase 3: Plan migration and testing
Once you have gained business buy-in, the next step is to start migrating these
Use the tools and guidance below to follow the precise steps needed to migrate your applications to Azure AD:

- **General migration guidance** – Use the whitepaper, tools, email templates, and applications questionnaire in the [Azure AD apps migration toolkit](./migration-resources.md) to discover, classify, and migrate your apps.
- **SaaS applications** – See our list of [hundreds of SaaS app tutorials](../saas-apps/tutorial-list.md) and the complete [Azure AD SSO deployment plan](https://aka.ms/ssodeploymentplan) to walk through the end-to-end process.
- **Applications running on-premises** – Learn all [about the Azure AD Application Proxy](../app-proxy/application-proxy.md) and use the complete [Azure AD Application Proxy deployment plan](https://aka.ms/AppProxyDPDownload) to get going quickly.
- **Apps you're developing** – Read our step-by-step [integration](../develop/quickstart-register-app.md) and [registration](../develop/quickstart-register-app.md) guidance.

After migration, you may choose to send communication informing the users of the successful deployment and remind them of any new steps that they need to take.
You can test each app by logging in with a test user and make sure all functiona
Once you have migrated the apps, go to the [Azure portal](https://aad.portal.azure.com/) to test if the migration was a success. Follow the instructions below:

- Select **Enterprise Applications &gt; All applications** and find your app from the list.
- Select **Manage &gt; Users and groups** to assign at least one user or group to the app (a Microsoft Graph sketch of this assignment follows these steps).
- Select **Manage &gt; Conditional Access**. Review your list of policies and ensure that you are not blocking access to the application with a [conditional access policy](../conditional-access/overview.md).

Depending on how you configure your app, verify that SSO works properly.
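If you validate many apps, the same user assignment can be scripted against Microsoft Graph instead of the portal. This is an assumption-laden sketch, not the article's procedure: it uses the default access role (the all-zeros `appRoleId`), the IDs are placeholders, and it assumes a token with the `AppRoleAssignment.ReadWrite.All` permission.

```python
# Illustrative sketch: assign a test user to a migrated enterprise app (service principal)
# through Microsoft Graph, mirroring the "Users and groups" portal step above.
import requests

graph_token = "<access-token>"                       # placeholder
service_principal_id = "<object-id-of-the-enterprise-app>"
test_user_id = "<object-id-of-the-test-user>"

assignment = {
    "principalId": test_user_id,
    "resourceId": service_principal_id,
    # The all-zeros GUID is the default access role when the app defines no specific app roles.
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

response = requests.post(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{service_principal_id}/appRoleAssignedTo",
    headers={"Authorization": f"Bearer {graph_token}"},
    json=assignment,
)
print(response.status_code)
```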
If you run into problems, check out our [apps troubleshooting guide](../app-prov
If your migration fails, the best strategy is to roll back and test. Here are the steps that you can take to mitigate migration issues:

- **Take screenshots** of the existing configuration of your app. You can look back if you must reconfigure the app once again.
- You might also consider **providing links to the legacy authentication**, if there are issues with cloud authentication.
- Before you complete your migration, **do not change your existing configuration** with the earlier identity provider.
- Begin by migrating **the apps that support multiple IdPs**. If something goes wrong, you can always change to the preferred IdP's configuration.
- Ensure that your app experience has a **Feedback button** or pointers to your **helpdesk** issues.

### Exit criteria
If your migration fails, the best strategy is to roll back and test. Here are th
You are successful in this phase when you have:

- Determined how each app will be migrated
- Reviewed the migration tools
- Planned your testing including test environments and groups
- Planned rollback

## Phase 4: Plan management and insights
You are successful in this phase when you have:
Once apps are migrated, you must ensure that:

- Users can securely access and manage
- You can gain the appropriate insights into usage and app health

We recommend taking the following actions as appropriate to your organization.
We recommend taking the following actions as appropriate to your organization.
Once you have migrated the apps, you can enrich your user's experience in many ways
-**Make apps discoverable**
-
-**Point your user** to the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension)portal experience. Here, they can access all cloud-based apps, apps you make available by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md), and apps using [Application Proxy](../app-proxy/application-proxy.md) provided they have permissions to access those apps.
+- Make apps discoverable
+- Point your user to the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension) portal experience. Here, they can access all cloud-based apps, apps you make available by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md), and apps using [Application Proxy](../app-proxy/application-proxy.md) provided they have permissions to access those apps.
You can guide your users on how to discover their apps:

- Use the [Existing Single Sign-on](./view-applications-portal.md) feature to **link your users to any app**
- Enable [Self-Service Application Access](./manage-self-service-access.md) to an app and **let users add apps that you curate**
-- [Hide applications from end-users](./hide-application-from-user-portal.md) (default Microsoft apps or other apps) to **make the apps they do need more discoverable**
+- [Hide applications from end-users](./hide-application-from-user-portal.md) (default Microsoft apps or other apps) to make the apps they do need more discoverable
### Make apps accessible
-**Let users access apps from their mobile devices**. Users can access the MyApps portal with Intune-managed browser on their [iOS](./hide-application-from-user-portal.md) 7.0 or later or [Android](./hide-application-from-user-portal.md) devices.
+#### Let users access apps from their mobile devices
-Users can download an **Intune-managed browser**:
+Users can access the MyApps portal with Intune-managed browser on their [iOS](./hide-application-from-user-portal.md) 7.0 or later or [Android](./hide-application-from-user-portal.md) devices.
-- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune)
+Users can download an Intune-managed browser:
+- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune)
- **For Apple devices**, from the [Apple App Store](https://apps.apple.com/us/app/intune-company-portal/id719171358) or they can download the [My Apps mobile app for iOS ](https://appadvice.com/app/my-apps-azure-active-directory/824048653)
-**Let users open their apps from a browser extension.**
+#### Let users open their apps from a browser extension
Users can [download the MyApps Secure Sign-in Extension](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) in [Chrome,](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) or [Microsoft Edge](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) and can launch apps right from their browser bar to:

-- **Search for their apps and have their most-recently-used apps appear**
-- **Automatically convert internal URLs** that you have configured in [Application Proxy](../app-proxy/application-proxy.md) to the appropriate external URLs. Your users can now work with the links they are familiar with no matter where they are.
+- Search for their apps and have their most-recently-used apps appear
+- Automatically convert internal URLs that you have configured in [Application Proxy](../app-proxy/application-proxy.md) to the appropriate external URLs. Your users can now work with the links they are familiar with no matter where they are.
-**Let users open their apps from Office.com.**
+#### Let users open their apps from Office.com
Users can go to [Office.com](https://www.office.com/) to **search for their apps and have their most-recently-used apps appear** for them right from where they do work.
Users can go to [Office.com](https://www.office.com/) to **search for their apps
Azure AD provides a centralized access location to manage your migrated apps. Go to the [Azure portal](https://portal.azure.com/) and enable the following capabilities:

- **Secure user access to apps.** Enable [Conditional Access policies](../conditional-access/overview.md) or [Identity Protection](../identity-protection/overview-identity-protection.md) to secure user access to applications based on device state, location, and more.
- **Automatic provisioning.** Set up [automatic provisioning of users](../app-provisioning/user-provisioning.md) with various third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change.
- **Delegate user access management.** As appropriate, enable self-service application access to your apps and *assign a business approver to approve access to those apps*. Use [Self-Service Group Management](../enterprise-users/groups-self-service-management.md) for groups assigned to collections of apps.
- **Delegate admin access.** Use **Directory Role** to assign an admin role (such as Application administrator, Cloud Application administrator, or Application developer) to your user.
Azure AD provides a centralized access location to manage your migrated apps. Go
You can also use the [Azure portal](https://portal.azure.com/) to audit all your apps from a centralized location:

- **Audit your app** using **Enterprise Applications, Audit**, or access the same information from the [Azure AD Reporting API](../reports-monitoring/concept-reporting-api.md) to integrate into your favorite tools.
- **View the permissions for an app** using **Enterprise Applications, Permissions** for apps using OAuth / OpenID Connect.
- **Get sign-in insights** using **Enterprise Applications, Sign-Ins**. Access the same information from the [Azure AD Reporting API](../reports-monitoring/concept-reporting-api.md).
- **Visualize your app's usage** from the [Azure AD Power BI content pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md)

### Exit criteria
You can also use the [Azure portal](https://portal.azure.com/) to audit all your
You are successful in this phase when you:

- Provide secure app access to your users
- Manage to audit and gain insights of the migrated apps

### Do even more with deployment plans
Many [deployment plans](../fundamentals/active-directory-deployment-plans.md) ar
Visit the following support links to create or track a support ticket and monitor health.

-- **Azure Support:** You can call [Microsoft Support](https://azure.microsoft.com/support) and open a ticket for any Azure-
-Identity deployment issue depending on your Enterprise Agreement with Microsoft.
-
+- **Azure Support:** You can call [Microsoft Support](https://azure.microsoft.com/support) and open a ticket for any Azure Identity deployment issue depending on your Enterprise Agreement with Microsoft.
- **FastTrack**: If you have purchased Enterprise Mobility and Security (EMS) or Azure AD Premium licenses, you are eligible to receive deployment assistance from the [FastTrack program](/enterprise-mobility-security/solutions/enterprise-mobility-fasttrack-program).
- **Engage the Product Engineering team:** If you are working on a major customer deployment with millions of users, you are entitled to support from the Microsoft account team or your Cloud Solutions Architect. Based on the project's deployment complexity, you can work directly with the [Azure Identity Product Engineering team](https://aad.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/solutionProviders).
- **Azure AD Identity blog:** Subscribe to the [Azure AD Identity blog](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/bg-p/Identity) to stay up to date with all the latest product announcements, deep dives, and roadmap information provided directly by the Identity engineering team.
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Administrators should be in control of application use by providing the right in
- Block [consent phishing emails with Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies#impersonation-settings-in-anti-phishing-policies-in-microsoft-defender-for-office-365) by protecting against phishing campaigns where an attacker is impersonating a known user in the organization.
- Configure Microsoft Defender for Cloud Apps policies to help manage abnormal application activity in the organization. For example, [activity policies](/cloud-app-security/user-activity-policies), [anomaly detection](/cloud-app-security/anomaly-detection-policy), and [OAuth app policies](/cloud-app-security/app-permission-policy).
- Investigate and hunt for consent phishing attacks by following the guidance on [advanced hunting with Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview).
-- Allow access to trusted applications and protect against those applications that aren't:
- - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps administrators and users understand the authenticity of application developers through a Microsoft supported vetting process.
- - [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to specific trusted applications, such as applications developed by the organization or from verified publishers and only for low risk permissions you select.
+- Allow access to trusted applications that meet certain criteria and that protect against those applications that don't:
+ - [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to applications that meet certain criteria, such as applications developed by your organization or from verified publishers and only for low risk permissions you select.
+ - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps administrators and users understand the authenticity of application developers through a Microsoft supported vetting process. Even if an application does have a verified publisher, it is still important to review the consent prompt to understand and evaluate the request. For example, reviewing the permissions being requested to ensure they align with the scenario the app is requesting them to enable, additional app and publisher details on the consent prompt, etc.
- Create proactive [application governance](/microsoft-365/compliance/app-governance-manage-app-governance) policies to monitor third-party application behavior on the Microsoft 365 platform to address common suspicious application behaviors.

## Next steps
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Previously updated : 09/13/2022 Last updated : 12/16/2022
# Secure hybrid access with Azure Active Directory partner integrations
-Azure Active Directory (Azure AD) supports modern authentication protocols that help keep applications secure in a highly connected, cloud-based world. However, many business applications were created to work in a protected corporate network, and some of these applications use legacy authentication methods. As companies look to build a Zero Trust strategy and support hybrid and cloud-first work environments, they need solutions that connect apps to Azure AD and provide modern authentication solutions for legacy applications.
+Azure Active Directory (Azure AD) supports modern authentication protocols that help keep applications secure. However, many business applications work in a protected corporate network, and some use legacy authentication methods. As companies build Zero Trust strategies and support hybrid and cloud environments, there are solutions that connect apps to Azure AD and provide authentication for legacy applications.
-Azure AD natively supports modern protocols like SAML, WS-Fed, and OIDC. App Proxy in Azure AD supports Kerberos and header-based authentication. Other protocols, like SSH, NTLM, LDAP, and cookies, aren't yet supported. But ISVs can create solutions to connect these applications with Azure AD to support customers on their journey to Zero Trust.
+Learn more: [Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)
-ISVs have the opportunity to help customers discover and migrate software as a service (SaaS) applications into Azure AD. They can also connect apps that use legacy authentication methods with Azure AD. This will help customers consolidate onto a single platform (Azure AD) to simplify their app management and enable them to implement Zero Trust principles. Supporting apps that use legacy authentication makes users more secure. This solution can be a great stopgap until the customers modernize their apps to support modern authentication protocols.
+Azure AD natively supports modern protocols:
-## Solution overview
-
-The solution that you build can include the following parts:
-
-1. **App discovery**. Often, customers aren't aware of all the applications they're using. So as a first step, you can build application discovery capabilities into your solution and surface discovered applications in the user interface. This enables the customer to prioritize how they want to approach integrating their applications with Azure AD.
-2. **App migration**. Next, you can create an in-product workflow where the customer can directly integrate apps with Azure AD without having to go to the Azure AD portal. If you don't implement discovery capabilities in your solution, you can start your solution here, integrating the applications that customers do know about with Azure AD.
-3. **Legacy authentication support**. You can connect apps by using legacy authentication methods to Azure AD so that they get the benefits of single sign-on (SSO) and other features.
-4. **Conditional Access**. As an additional feature, you can enable customers to apply Azure AD [Conditional Access](../conditional-access/overview.md) policies to the applications from within your solution without having to go the Azure AD portal.
-
-The rest of this guide explains the technical considerations and our recommendations for implementing a solution.
+* Security Assertion Markup Language (SAML)
+* Web Service Federation (WS-Fed)
+* OpenID Connect (OIDC)
-## Publishing your application to Azure Marketplace
+Azure Active Directory Application Proxy, or Azure AD App Proxy, supports Kerberos and header-based authentication. Other protocols, like Secure Shell (SSH), Microsoft Windows NT LAN Manager (NTLM), Lightweight Directory Access Protocol (LDAP), and cookies, aren't supported. But independent software vendors (ISVs) can create solutions to connect these applications with Azure AD.
-You can pre-integrate your application with Azure AD to support SSO and automated provisioning by following the process to [publish it in Azure Marketplace](../manage-apps/v2-howto-app-gallery-listing.md). Azure Marketplace is a trusted source of applications for IT admins. Applications listed there have been validated to be compatible with Azure AD. They support SSO, automate user provisioning, and can easily integrate into customer tenants with automated app registration.
+ISVs can help customers discover and migrate software as a service (SaaS) applications into Azure AD. They can connect apps that use legacy authentication methods with Azure AD. Customers can consolidate onto Azure AD to simplify their app management and implement Zero Trust principles.
-In addition, we recommend that you become a [verified publisher](../develop/publisher-verification-overview.md) so that customers know you're the trusted publisher of the app.
-
-## Enabling single sign-on for IT admins
-
-[Choose either OIDC or SAML](/azure/active-directory/manage-apps/sso-options#choosing-a-single-sign-on-method/) to enable SSO for IT administrators to your solution. The best option is to use OIDC.
-
-Microsoft Graph uses [OIDC/OAuth](../develop/v2-protocols-oidc.md). If your solution uses OIDC with Azure AD for IT administrator SSO, your customers will have a seamless end-to-end experience. They'll use OIDC to sign in to your solution, and the same JSON Web Token (JWT) that Azure AD issued can then be used to interact with Microsoft Graph.
+## Solution overview
-If your solution instead uses [SAML](/azure/active-directory/manage-apps/configure-saml-single-sign-on/) for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can still use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD so it can get a JWT from Azure AD to properly interact with Microsoft Graph. You can use one of the following approaches:
+The solution that you build can include the following parts:
-- **Recommended SAML approach**: Create a new registration in Azure Marketplace, which is [an OIDC app](../saas-apps/openidoauth-tutorial.md). This provides the most seamless experience for your customers. They'll add both the SAML and OIDC apps to their tenant. If your application isn't in the Azure AD gallery today, you can start with a non-gallery [multi-tenant application](../develop/howto-convert-app-to-be-multi-tenant.md).
+* **App discovery** - Often, customers aren't aware of every application in use
+ * Application discovery finds applications, facilitating app integrating with Azure AD
+* **App migration** - Create a workflow to integrate apps with Azure AD without using the Azure AD portal
+ * Integrate apps that customers use today
+* **Legacy authentication support** - Connect apps with legacy authentication methods and single sign-on (SSO)
+* **Conditional Access** - Enable customers to apply Azure AD policies to apps in your solution without using the Azure AD portal
-- **Alternate SAML approach**: Your customers can manually [create an OIDC application registration](../saas-apps/openidoauth-tutorial.md) in their Azure AD tenant and ensure that they set the right URIs, endpoints, and permissions specified later in this article.
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
-You'll want to use the [client_credentials grant type](../develop/v2-oauth2-client-creds-grant-flow.md#get-a-token). It will require that your solution allows each customer to enter a client ID and secret into your user interface, and that you store this information. Get a JWT from Azure AD, and then use it to interact with Microsoft Graph.
+See the following sections for technical considerations and recommendations.
-If you choose this route, you should have ready-made documentation for your customer about how to create this application registration within their Azure AD tenant. This information includes the endpoints, URIs, and required permissions.
+## Publishing applications to Azure Marketplace
-> [!NOTE]
-> Before any applications can be used for either IT administrator or user SSO, the customer's IT administrator will need to [consent to the application in their tenant](./grant-admin-consent.md).
+Azure Marketplace is a trusted source of applications for IT admins. Applications are compatible with Azure AD and support SSO, automate user provisioning, and integrate into customer tenants with automated app registration.
-## Authentication flows
+You can pre-integrate your application with Azure AD to support SSO and automated provisioning. See, [Submit a request to publish your application in Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
-The solution includes three key authentication flows that support the following scenarios:
+We recommend you become a verified publisher, so customers know you're the trusted publisher. See, [Publisher verification](../develop/publisher-verification-overview.md).
-- The customer's IT administrator signs in with SSO to administer your solution.
+## Enable single sign-on for IT admins
-- The customer's IT administrator uses your solution to integrate applications with Azure AD via Microsoft Graph.
+There are several ways to enable SSO for IT administrators to your solution. See, [Plan a single sign-on deployment, SSO options](/azure/active-directory/manage-apps/plan-sso-deployment#single-sign-on-options).
-- Users sign in to legacy applications secured by your solution and Azure AD.
+Microsoft Graph uses OIDC/OAuth. Customers use OIDC to sign in to your solution. Use the JSON Web Token (JWT) Azure AD issues to interact with Microsoft Graph. See, [OpenID Connect on the Microsoft identity platform](../develop/v2-protocols-oidc.md).
-### Your customer's IT administrator does single sign-on to your solution
+If your solution uses SAML for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD, so it can get a JWT from Azure AD to interact with Microsoft Graph. See, [How the Microsoft identity platform uses the SAML protocol](/azure/active-directory/develop/active-directory-saml-protocol-reference).
-Your solution can use either SAML or OIDC for SSO when the customer's IT administrator signs in. Either way, we recommend that the IT administrator can sign in to your solution by using their Azure AD credentials. It enables a seamless experience and allows them to use the existing security controls that they already have in place. Your solution should be integrated with Azure AD for SSO through either SAML or OIDC.
+You can use one of the following SAML approaches:
-Here's a diagram and summary of this user authentication flow:
+* **Recommended SAML approach**: Create a new registration in Azure Marketplace, which is an OIDC app. Customers add the SAML and OIDC apps to their tenant. If your application isn't in the Azure AD gallery, you can start with a non-gallery multi-tenant app.
+ * [Configure an OpenID Connect OAuth application from Azure AD app gallery](../saas-apps/openidoauth-tutorial.md)
+ * [Making your application multi-tenant](../develop/howto-convert-app-to-be-multi-tenant.md)
+* **Alternate SAML approach**: Customers can create an OIDC application registration in their Azure AD tenant and set the URIs, endpoints, and permissions
-![Diagram that shows an I T administrator being redirected by the solution to Azure AD to sign in, and then being redirected by Azure AD back to the solution in a user authentication flow.](./media/secure-hybrid-access-integrations/admin-flow.png)
+Use the client credentials grant type, which requires the solution to allow customers to enter a client ID and secret. The solution also requires you to store this information. Get a JWT from Azure AD, and then use it to interact with Microsoft Graph. See, [Get a token](../develop/v2-oauth2-client-creds-grant-flow.md#get-a-token). We recommend you prepare customer documentation about how to create an application registration in their Azure AD tenant. Include endpoints, URIs, and permissions.
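A minimal sketch of that client credentials flow, assuming the MSAL library for Python; the tenant ID, client ID, and secret are the values your customer enters and stores in your solution, shown here as placeholders.

```python
# Minimal sketch: acquire a Microsoft Graph token with the client credentials grant (MSAL for Python).
import msal
import requests

tenant_id = "<customer-tenant-id>"          # placeholder values supplied by the customer
client_id = "<app-registration-client-id>"
client_secret = "<client-secret>"

app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)

# ".default" requests the application permissions already consented to on the registration.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    # Example Graph call with the JWT: list a few service principals in the customer tenant.
    response = requests.get(
        "https://graph.microsoft.com/v1.0/servicePrincipals?$top=5",
        headers={"Authorization": f"Bearer {result['access_token']}"},
    )
    print(response.json())
else:
    print(result.get("error"), result.get("error_description"))
```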
-1. The IT administrator wants to sign in to your solution with their Azure AD credentials.
+> [!NOTE]
+> Before applications are used for IT administrator or user SSO, the customer IT administrator must consent to the application in their tenant. See, [Grant tenant-wide admin consent to an application](./grant-admin-consent.md).
-2. Your solution redirects the IT administrator to Azure AD with either a SAML or an OIDC sign-in request.
+## Authentication flows
-3. Azure AD authenticates the IT administrator and then sends them back to your solution with either a SAML token or JWT in tow to be authorized within your solution.
+The solution authentication flows support the following scenarios:
-### The IT administrator integrates applications with Azure AD by using your solution
+- The customer IT administrator signs in with SSO to administer your solution
+- The customer IT administrator uses your solution to integrate applications with Azure AD by using Microsoft Graph
+- Users sign in to legacy applications secured by your solution and Azure AD
-The second leg of the IT administrator journey is to integrate applications with Azure AD by using your solution. To do this, your solution will use Microsoft Graph to create application registrations and Azure AD Conditional Access policies.
+### Your customer IT administrator does single sign-on to your solution
-Here's a diagram and summary of this user authentication flow:
+Your solution can use SAML or OIDC for SSO when the customer IT administrator signs in. We recommend the IT administrator signs in to your solution with their Azure AD credentials, which enables use of current security controls. Integrate your solution with Azure AD for SSO through SAML or OIDC.
-![Diagram of redirects and other interactions between the I T administrator, Azure Active Directory, your solution, and Microsoft Graph in a user authentication flow.](./media/secure-hybrid-access-integrations/registration-flow.png)
+The following diagram illustrates the user authentication flow:
+ ![Diagram of an administrator redirected to Azure AD to sign in, then redirected to the solution.](./media/secure-hybrid-access-integrations/admin-flow.png)
-1. The IT administrator wants to sign in to your solution with their Azure AD credentials.
+1. The IT administrator signs in to your solution with their Azure AD credentials
+2. The solution redirects the IT administrator to Azure AD with a SAML or an OIDC sign-in request
+3. Azure AD authenticates the IT administrator and redirects them to your solution, with a SAML token or JWT to be authorized in your solution
-2. Your solution redirects the IT administrator to Azure AD with either a SAML or an OIDC sign-in request.
+### IT administrators integrate applications with Azure AD
-3. Azure AD authenticates the IT administrator and then sends them back to your solution with either a SAML token or JWT for authorization within your solution.
+IT administrators integrate applications with Azure AD by using your solution, which employs Microsoft Graph to create application registrations and Azure AD Conditional Access policies.
-4. When the IT administrator wants to integrate one of their applications with Azure AD, rather than having to go to the Azure AD portal, your solution calls Microsoft Graph with their existing JWT to register those applications or apply Azure AD Conditional Access policies to them.
+The following diagram illustrates the user authentication flow:
-### Users sign in to the applications secured by your solution and Azure AD
+ ![Diagram of interactions between the IT administrator, Azure AD, your solution, and Microsoft Graph.](./media/secure-hybrid-access-integrations/registration-flow.png)
-When users need to sign in to individual applications secured with your solution and Azure AD, they use either OIDC or SAML. If the applications need to interact with Microsoft Graph or any Azure AD-protected API, we recommend that you configure them to use OICD. This configuration will ensure that the JWT that the applications get from Azure AD to authenticate them into the applications can also be applied for interacting with Microsoft Graph. If there's no need for the individual applications to interact with Microsoft Graph or any Azure AD protected API, then SAML will suffice.
-Here's a diagram and summary of this user authentication flow:
+1. The IT administrator signs in to your solution with their Azure AD credentials
+2. The solution redirects the IT administrator to Azure AD with a SAML or an OIDC sign-in request
+3. Azure AD authenticates the IT administrator and redirects them to your solution with a SAML token or JWT for authorization
+4. When the IT administrator integrates an application with Azure AD, the solution calls Microsoft Graph with their JWT to register applications, or apply Azure AD Conditional Access policies
-![Diagram of redirects and other interactions between the user, Azure Active Directory, your solution, and the application in a user authentication flow.](./media/secure-hybrid-access-integrations/end-user-flow.png)
+### Users sign in to the applications
-1. The user wants to sign in to an application secured by your solution and Azure AD.
-2. Your solution redirects the user to Azure AD with either a SAML or an OIDC sign-in request.
-3. Azure AD authenticates the user and then sends them back to your solution with either a SAML token or JWT for authorization within your solution.
-4. After authorization, your solution allows the original request to the application to go through by using the preferred protocol of the application.
+When users sign in to applications, they use OIDC or SAML. If the applications need to interact with Microsoft Graph or an Azure AD-protected API, we recommend you configure them to use OIDC. This configuration ensures the JWT is applied to interact with Microsoft Graph. If there's no need for applications to interact with Microsoft Graph, or Azure AD-protected APIs, then use SAML.
-## Summary of Microsoft Graph APIs
+The following diagram shows user authentication flow:
-Your solution needs to use the following APIs. Azure AD allows you to configure either delegated permissions or application permissions. For this solution, you need only delegated permissions.
+ ![Diagram of interactions between the user, Azure AD, your solution, and the app.](./media/secure-hybrid-access-integrations/end-user-flow.png)
-- [Application Templates API](/graph/application-saml-sso-configure-api#retrieve-the-gallery-application-template-identifier/): If you're interested in searching Azure Marketplace, you can use this API to find a matching application template. **Permission required**: Application.Read.All.
+1. The user signs in to an application
+2. The solution redirects the user to Azure AD with a SAML or an OIDC sign-in request
+3. Azure AD authenticates the user and redirects them to your solution with a SAML token or JWT for authorization
+4. The solution allows the request by using the application protocol
-- [Application Registration API](/graph/api/application-post-applications): You use this API to create either OIDC or SAML application registrations so that users can sign in to the applications that the customers have secured with your solution. Doing this enables these applications to also be secured with Azure AD. **Permissions required**: Application.Read.All, Application.ReadWrite.All.
+## Microsoft Graph API
-- [Service Principal API](/graph/api/serviceprincipal-update): After you register the app, you need to update the service principal object to set some SSO properties. **Permissions required**: Application.ReadWrite.All, Directory.AccessAsUser.All, AppRoleAssignment.ReadWrite.All (for assignment).
+We recommend use of the following APIs. Use Azure AD to configure delegated permissions or application permissions. For this solution, use delegated permissions.
-- [Conditional Access API](/graph/api/resources/conditionalaccesspolicy): If you want to also apply Azure AD Conditional Access policies to these user applications, you can use this API. **Permissions required**: Policy.Read.All, Policy.ReadWrite.ConditionalAccess, and Application.Read.All.
+* **Application templates API** - In Azure Marketplace, use this API to find a matching application template
+ * Permissions required: Application.Read.All
+* **Application registration API** - Create OIDC or SAML application registrations for users to sign in to applications secured with your solution
+ * Permissions required: Application.Read.All, Application.ReadWrite.All
+* **Service principal API** - After you register the app, update the service principal object to set SSO properties
+ * Permissions required: Application.ReadWrite.All, Directory.AccessAsUser.All, AppRoleAssignment.ReadWrite.All (for assignment)
+* **Conditional Access API** - Apply Azure AD Conditional Access policies to user applications
+ * Permissions required: Policy.Read.All, Policy.ReadWrite.ConditionalAccess, and Application.Read.All
-## Example Graph API scenarios
+Learn more: [Use the Microsoft Graph API](/graph/use-the-api?context=graph%2Fapi%2F1.0&view=graph-rest-1.0&preserve-view=true)
-This section provides a reference example for using Microsoft Graph APIs to implement application registrations, connect legacy applications, and enable Conditional Access policies via your solution. This section also gives guidance on automating admin consent, getting the token-signing certificate, and assigning users and groups. This functionality might be useful in your solution.
+## Microsoft Graph API scenarios
-### Use the Graph API to register apps with Azure AD
+Use the following information to implement application registrations, connect legacy applications, and enable Conditional Access policies. Learn to automate admin consent, get the token-signing certificate, and assign users and groups.
-#### Add apps that are in Azure Marketplace
+### Use Microsoft Graph API to register apps with Azure AD
-Some of the applications that your customer is using will already be available in [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps). You can create a solution that programmatically adds these applications to the customer's tenant. The following code is an example of using the Microsoft Graph API to search Azure Marketplace for a matching template and then registering the application in the customer's Azure AD tenant.
+#### Add apps in Azure Marketplace
-Search Azure Marketplace for a matching application. When you're using the Application Templates API, the display name is case-sensitive.
+Some applications your customers use are in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps). You can create a solution that adds applications to the customer tenant. Use the following example with Microsoft Graph API to search Azure Marketplace for a template.
+> [!NOTE]
+> In Application Templates API, the display name is case-sensitive.
+
```http Authorization: Required with a valid Bearer token Method: Get
Method: Get
https://graph.microsoft.com/v1.0/applicationTemplates?$filter=displayname eq "Salesforce.com" ```
-If a match is found from the preceding API call, capture the ID and then make the following API call while providing a user-friendly display name for the application in the JSON body:
+If you find a match from the API call, capture the ID. Make the following API call and provide a display name for the application in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applicationTemplates/cd3ed3de-93ee-400b-8b19-b6
} ```
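For reference, a sketch of this call, assuming the standard applicationTemplates instantiate endpoint; the display name shown is an example only:

```https
Authorization: Required with a valid Bearer token
Method: POST
https://graph.microsoft.com/v1.0/applicationTemplates/{Application Template ID}/instantiate

{
  "displayName": "Salesforce for Contoso"
}
```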
-When you make the preceding API call, you'll also generate a service principal object, which might take a few seconds. Be sure to capture the application ID and the service principal ID. You'll use them in the next API calls.
+After you make the API call, you generate a service principal object. Capture the application ID and the service principal ID to use in the next API calls.
-Next, patch the service principal object with the SAML protocol and the appropriate login URL:
+Patch the service principal object with the SAML protocol and a login URL:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/servicePrincipals/3161ab85-8f57-4ae0-82d3-7a1f7
} ```
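For clarity, a sketch of the service principal patch; the login URL value is an example only:

```https
Authorization: Required with a valid Bearer token
Method: PATCH
https://graph.microsoft.com/v1.0/servicePrincipals/{Service Principal Object ID}

{
  "preferredSingleSignOnMode": "saml",
  "loginUrl": "https://example.my.salesforce.com"
}
```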
-Finally, patch the application object with the appropriate redirect URIs and the identifier URIs:
+Patch the application object with redirect URIs and the identifier URIs:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applications/54c4806b-b260-4a12-873c-9671169837
} ```
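A sketch of the application patch with example URI values; replace them with the application's actual redirect and identifier URIs:

```https
Authorization: Required with a valid Bearer token
Method: PATCH
https://graph.microsoft.com/v1.0/applications/{Application Object ID}

{
  "web": {
    "redirectUris": [ "https://example.my.salesforce.com" ]
  },
  "identifierUris": [ "https://example.my.salesforce.com" ]
}
```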
-#### Add apps that are not in Azure Marketplace
+#### Add apps not in Azure Marketplace
-If you can't find a match in Azure Marketplace or you just want to integrate a custom application, you can register a custom application in Azure AD by using this template ID: **8adf8e6e-67b2-4cf2-a259-e3dc5476c621**. Then, make the following API call while providing a user-friendly display name of the application in the JSON body:
+If there's no match in Azure Marketplace, or to integrate a custom application, register a custom application in Azure AD with the template ID: 8adf8e6e-67b2-4cf2-a259-e3dc5476c621. Then, make the following API call and provide an application display name in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applicationTemplates/8adf8e6e-67b2-4cf2-a259-e3
} ```
-When you make the preceding API call, you'll also generate a service principal object, which might take a few seconds. Be sure to capture the application ID and the service principal ID. You'll use them in the next API calls.
+After you make the API call, you generate a service principal object. Capture the application ID and the service principal ID to use in the next API calls.
-Next, patch the service principal object with the SAML protocol and the appropriate login URL:
+Patch the service principal object with the SAML protocol and a login URL:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/servicePrincipals/3161ab85-8f57-4ae0-82d3-7a1f7
} ```
-Finally, patch the application object with the appropriate redirect URIs and the identifier URIs:
+Patch the application object with redirect URIs and identifier URIs:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applications/54c4806b-b260-4a12-873c-9671169837
} ```
-#### Cut over to Azure AD single sign-on
+#### Use Azure AD single sign-on
-After you have the SaaS applications registered inside Azure AD, the applications still need to be cut over to start using Azure AD as their identity provider. There are two ways to do this:
+After the SaaS applications are registered in Azure AD, the applications need to start using Azure AD as the identity provider (IdP):
-- If the applications support one-click SSO, Azure AD can cut over the applications for the customer. The customer just needs to go into the Azure AD portal and perform the one-click SSO with the administrative credentials for the supported SaaS applications. For more information, see [One-click app configuration of single sign-on](./one-click-sso-tutorial.md).-- If the applications don't support one-click SSO, the customer needs to manually cut over the applications to start using Azure AD. For more information, see [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md).
+- **Applications support one-click SSO** - Azure AD enables the applications. In the Azure portal, the customer performs one-click SSO with the administrative credentials for the supported SaaS applications.
+ - Learn more: [One-click app configuration of single sign-on](./one-click-sso-tutorial.md)
+- **Applications don't support one-click SSO** - The customer enables the applications to use Azure AD.
+ - [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
-### Connect apps by using legacy authentication methods to Azure AD
+### Connect apps to Azure AD with legacy authentication
-This is where your solution can sit in between Azure AD and the application and enable the customer to get the benefits of SSO and other Azure Active Directory features, even for applications that are not supported. To do so, your application will call Azure AD to authenticate the user and apply Azure AD Conditional Access policies before the user can access these applications with legacy protocols.
-
-You can enable customers to do this integration directly from your console so that the discovery and integration is a seamless end-to-end experience. This will involve your platform creating either a SAML or an OIDC application registration between your platform and Azure AD.
+Your solution can enable the customer to use SSO and Azure Active Directory features, even for applications that aren't natively supported. To allow access with legacy protocols, your application calls Azure AD to authenticate the user and applies Azure AD Conditional Access policies. Enable this integration from your console. Create a SAML or an OIDC application registration between your solution and Azure AD.
#### Create a SAML application registration
-To create a SAML application registration, use this custom application template ID for a custom application: **8adf8e6e-67b2-4cf2-a259-e3dc5476c621**. Then make the following API call while providing a user-friendly display name in the JSON body:
+Use the following custom application template ID: 8adf8e6e-67b2-4cf2-a259-e3dc5476c621. Then, make the following API call and provide a display name in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applicationTemplates/8adf8e6e-67b2-4cf2-a259-e3
} ```
-When you make the preceding API call, you'll also generate a service principal object, which might take a few seconds. Be sure to capture the application ID and the service principal ID. You'll use them in the next API calls.
+After you make the API call, you generate a service principal object. Capture the application ID and the service principal ID to use in the next API calls.
-Next, patch the service principal object with the SAML protocol and the appropriate login URL:
+Patch the service principal object with the SAML protocol and a login URL:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/servicePrincipals/3161ab85-8f57-4ae0-82d3-7a1f7
} ```
-Finally, patch the application object with the appropriate redirect URIs and the identifier URIs:
+Patch the application object with redirect URIs and identifier URIs:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applications/54c4806b-b260-4a12-873c-9671169837
#### Create an OIDC application registration
-To create an OIDC application registration, use this template ID for a custom application: **8adf8e6e-67b2-4cf2-a259-e3dc5476c621**. Then make the following API call while providing a user-friendly display name in the JSON body:
+Use the following template ID for a custom application: 8adf8e6e-67b2-4cf2-a259-e3dc5476c621. Make the following API call and provide a display name in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applicationTemplates/8adf8e6e-67b2-4cf2-a259-e3
} ```
-From the API call, capture the application ID and the service principal ID. You'll use them in the next API calls.
+From the API call, capture the application ID and the service principal ID to use in the next API calls.
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/applications/{Application Object ID}
``` > [!NOTE]
-> The API permissions listed in within the `resourceAccess` node will grant the application the *openid*, *User.Read*, and *offline_access* permissions, which should be enough to get the user signed in to your solution. For more information about permissions, see the [Microsoft Graph permissions reference](/graph/permissions-reference/).
+> The API permissions in the `resourceAccess` node grant the application the openid, User.Read, and offline_access permissions, which enable sign-in. See, [Overview of Microsoft Graph permissions](/graph/permissions-overview).
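A sketch of what the `requiredResourceAccess` section of the application object can look like for those permissions. The resource app ID identifies Microsoft Graph; the three permission IDs shown are the well-known delegated permission IDs for openid, User.Read, and offline_access, in that order, and should be verified against the permissions reference before use:

```https
Authorization: Required with a valid Bearer token
Method: PATCH
https://graph.microsoft.com/v1.0/applications/{Application Object ID}

{
  "requiredResourceAccess": [
    {
      "resourceAppId": "00000003-0000-0000-c000-000000000000",
      "resourceAccess": [
        { "id": "37f7f235-527c-4136-accd-4a02d197296e", "type": "Scope" },
        { "id": "e1fe6dd8-ba31-4d61-89e7-88639da4683d", "type": "Scope" },
        { "id": "7427e0e9-2fba-42fe-b0c0-848c9e6a8182", "type": "Scope" }
      ]
    }
  ]
}
```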
### Apply Conditional Access policies
-Customers and partners can also use the Microsoft Graph API to create or apply Conditional Access policies to customer applications. For partners, this can provide additional value because customers can apply these policies directly from your solution without having to go to the Azure AD portal.
-
-You have two options when applying Azure AD Conditional Access policies:
+Customers and partners can use the Microsoft Graph API to create or apply Conditional Access policies to customer applications. For partners, customers can apply these policies from your solution without using the Azure portal. There are two options to apply Azure AD Conditional Access policies:
-- Assign the application to an existing Conditional Access Policy.-- Create a new Conditional Access policy and assign the application to that new policy.
+- Assign the application to a Conditional Access policy
+- Create a new Conditional Access policy and assign the application to it
-#### Use an existing Conditional Access policy
+#### Use a Conditional Access policy
-First, run the following query to get a list of all Conditional Access policies. Get the object ID of the policy that you want to modify.
+For a list of Conditional Access policies, run the following query. Get the policy object ID to modify.
```https Authorization: Required with a valid Bearer token
Method:GET
https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies ```
-Next, patch the policy by including the application object ID to be in scope of `includeApplications` within the JSON body:
+To patch the policy, include the application object ID to be in scope of `includeApplications`, in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policyid}
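A sketch of the patch body; {Policy ID} and {Application ID} are placeholders for the policy object ID and the application identifier to add to the policy scope:

```https
Authorization: Required with a valid Bearer token
Method: PATCH
https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{Policy ID}

{
  "conditions": {
    "applications": {
      "includeApplications": [ "{Application ID}" ]
    }
  }
}
```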
#### Create a new Conditional Access policy
-Add the application object ID to be in scope of `includeApplications` within the JSON body:
+Add the application object ID to be in scope of `includeApplications`, in the JSON body:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/
} ```
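For illustration, a minimal sketch of a new policy body that requires multifactor authentication for one application; the display name, user scope, and grant control are example choices, not requirements:

```https
Authorization: Required with a valid Bearer token
Method: POST
https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies

{
  "displayName": "Require MFA for the integrated application",
  "state": "enabled",
  "conditions": {
    "users": { "includeUsers": [ "All" ] },
    "applications": { "includeApplications": [ "{Application ID}" ] },
    "clientAppTypes": [ "all" ]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [ "mfa" ]
  }
}
```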
-If you're interested in creating new Azure AD Conditional Access policies, here are some additional templates that can help get you started with using the [Conditional Access API](../conditional-access/howto-conditional-access-apis.md):
+To create new Azure AD Conditional Access policies, see [Conditional Access: Programmatic access](../conditional-access/howto-conditional-access-apis.md).
```https #Policy Template for Requiring Compliant Device
If you're interested in creating new Azure AD Conditional Access policies, here
### Automate admin consent
-If the customer is onboarding numerous applications from your platform to Azure AD, you can automate admin consent for them so they don't have to manually consent to lots of applications. You can also do this automation via Microsoft Graph. You'll need both the service principal object ID of the application that you created in previous API calls and the service principal object ID of Microsoft Graph from the customer's tenant.
+If the customer is adding applications from your solution to Azure AD, you can automate administrator consent with Microsoft Graph. You need the application service principal object ID you created in API calls, and the Microsoft Graph service principal object ID from the customer tenant.
-Get the service principal object ID of Microsoft Graph by making this API call:
+Get the Microsoft Graph service principal object ID by making the following API call:
```https Authorization: Required with a valid Bearer token
Method:GET
https://graph.microsoft.com/v1.0/serviceprincipals/?$filter=appid eq '00000003-0000-0000-c000-000000000000'&$select=id,appDisplayName ```
-When you're ready to automate admin consent, make this API call:
+To automate admin consent, make the following API call:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/oauth2PermissionGrants
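A sketch of the grant body; the scope string matches the delegated permissions discussed earlier, and the IDs are the service principal object IDs captured in the previous calls:

```https
Authorization: Required with a valid Bearer token
Method: POST
https://graph.microsoft.com/v1.0/oauth2PermissionGrants

{
  "clientId": "{Service principal object ID of your application}",
  "consentType": "AllPrincipals",
  "resourceId": "{Service principal object ID of Microsoft Graph}",
  "scope": "openid User.Read offline_access"
}
```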
### Get the token-signing certificate
-To get the public portion of the token-signing certificate for all these applications, use `GET` from the Azure AD metadata endpoint for the application:
+To get the public portion of the token-signing certificate, use `GET` from the Azure AD metadata endpoint for the application:
```https Method:GET
https://login.microsoftonline.com/{Tenant_ID}/federationmetadata/2007-06/federat
### Assign users and groups
-After you've published the application to Azure AD, you can optionally assign it to users and groups to ensure that it shows up on the [MyApplications](/azure/active-directory/user-help/my-applications-portal-workspaces/) portal. This assignment is stored on the service principal object that was generated when you created the application.
+After you publish the application to Azure AD, you can assign the app to users and groups to ensure it appears on the My Apps portal. This assignment is on the service principal object generated when you created the application. See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview).
-First, get any `AppRole` instances that the application may have associated with it. It's common for SaaS applications to have various `AppRole` instances associated with them. For custom applications, there's typically just the one default `AppRole` instance. Get the ID of the `AppRole` instance that you want to assign:
+Get `AppRole` instances the application might have associated with it. It's common for SaaS applications to have various `AppRole` instances associated with them. Typically, for custom applications, there's one default `AppRole` instance. Get the `AppRole` instance ID you want to assign:
```https Authorization: Required with a valid Bearer token
Method:GET
https://graph.microsoft.com/v1.0/servicePrincipals/3161ab85-8f57-4ae0-82d3-7a1f71680b27 ```
-Next, get the object ID of the user or group from Azure AD that you want to assign to the application. Also take the app role ID from the previous API call and submit it as part of the patch body on the service principal:
+From Azure AD, get the user or group object ID that you want to assign to the application. Take the app role ID from the previous API call and submit it with the patch body on the service principal:
```https Authorization: Required with a valid Bearer token
https://graph.microsoft.com/v1.0/servicePrincipals/3161ab85-8f57-4ae0-82d3-7a1f7
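One way to create the assignment is the `appRoleAssignedTo` endpoint, sketched below as an assumption about the call the text describes; verify against the service principal API reference before use:

```https
Authorization: Required with a valid Bearer token
Method: POST
https://graph.microsoft.com/v1.0/servicePrincipals/{Service Principal Object ID}/appRoleAssignedTo

{
  "principalId": "{User or group object ID}",
  "resourceId": "{Service Principal Object ID}",
  "appRoleId": "{App role ID from the previous call}"
}
```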
## Partnerships
-Microsoft has partnerships with these application delivery controller (ADC) providers to help protect legacy applications while using existing networking and delivery controllers.
-
-| **ADC provider** | **Link** |
-| | |
-| Akamai Enterprise Application Access | [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md) |
-| Citrix ADC | [Citrix ADC](../saas-apps/citrix-netscaler-tutorial.md) |
-| F5 BIG-IP Access Policy Manager | [F5 BIG-IP Access Policy Manager](./f5-aad-integration.md) |
-| Kemp LoadMaster | [Kemp LoadMaster](../saas-apps/kemp-tutorial.md) |
-| Pulse Secure Virtual Traffic Manager | [Pulse Secure Virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md) |
-
-The following VPN solution providers connect with Azure AD to enable modern authentication and authorization methods like SSO and multifactor authentication.
-
-| **VPN vendor** | **Link** |
-| | |
-| Cisco AnyConnect | [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md) |
-| Fortinet FortiGate | [Fortinet FortiGate](../saas-apps/fortigate-ssl-vpn-tutorial.md) |
-| F5 BIG-IP Access Policy Manager | [F5 BIG-IP Access Policy Manager](./f5-aad-password-less-vpn.md) |
-| Palo Alto Networks GlobalProtect | [Palo Alto Networks GlobalProtect](../saas-apps/paloaltoadmin-tutorial.md) |
-| Pulse Connect Secure | [Pulse Connect Secure](../saas-apps/pulse-secure-pcs-tutorial.md) |
-
-The following providers of software-defined perimeter (SDP) solutions connect with Azure AD to enable modern authentication and authorization methods like SSO and multifactor authentication.
-
-| **SDP vendor** | **Link** |
-| | |
-| Datawiza Access Broker | [Datawiza Access Broker](./datawiza-with-azure-ad.md) |
-| Perimeter 81 | [Perimeter 81](../saas-apps/perimeter-81-tutorial.md) |
-| Silverfort Authentication Platform | [Silverfort Authentication Platform](./silverfort-azure-ad-integration.md) |
-| Strata Maverics Identity Orchestrator | [Strata Maverics Identity Orchestrator](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) |
-| Zscaler Private Access | [Zscaler Private Access](../saas-apps/zscalerprivateaccess-tutorial.md) |
+To help protect legacy applications while using existing networking and delivery controllers, Microsoft has partnerships with the following application delivery controller (ADC) providers.
+
+* **Akamai Enterprise Application Access**
+ * [Tutorial: Azure AD SSO integration with Akamai](../saas-apps/akamai-tutorial.md)
+* **Citrix ADC**
+ * [Tutorial: Azure AD SSO integration with Citrix ADC SAML Connector for Azure AD (Kerberos-based authentication)](../saas-apps/citrix-netscaler-tutorial.md)
+* **F5 BIG-IP Access Policy Manager**
+ * [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md)
+* **Kemp LoadMaster**
+ * [Tutorial: Azure AD SSO integration with Kemp LoadMaster Azure AD integration](../saas-apps/kemp-tutorial.md)
+* **Pulse Secure Virtual Traffic Manager**
+ * [Tutorial: Azure AD SSO integration with Pulse Secure Virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
+
+The following VPN solution providers connect with Azure AD to enable modern authentication and authorization methods like SSO and multifactor authentication (MFA).
+
+* **Cisco AnyConnect**
+ * [Tutorial: Azure AD SSO integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
+* **Fortinet FortiGate**
+ * [Tutorial: Azure AD SSO integration with FortiGate SSL VPN](../saas-apps/fortigate-ssl-vpn-tutorial.md)
+* **F5 BIG-IP Access Policy Manager**
+ * [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](./f5-aad-password-less-vpn.md)
+* **Palo Alto Networks GlobalProtect**
+ * [Tutorial: Azure AD SSO integration with Palo Alto Networks - Admin UI](../saas-apps/paloaltoadmin-tutorial.md)
+* **Pulse Connect Secure**
+ * [Tutorial: Azure AD SSO integration with Pulse Secure PCS](../saas-apps/pulse-secure-pcs-tutorial.md)
+
+The following software-defined perimeter (SDP) solution providers connect with Azure AD for authentication and authorization methods like SSO and MFA.
+
+* **Datawiza Access Broker**
+ * [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](./datawiza-with-azure-ad.md)
+* **Perimeter 81**
+ * [Tutorial: Azure AD SSO integration with Perimeter 81](../saas-apps/perimeter-81-tutorial.md)
+* **Silverfort Authentication Platform**
+ * [Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](./silverfort-azure-ad-integration.md)
+* **Strata Maverics Identity Orchestrator**
+ * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)
+* **Zscaler Private Access**
+ * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 12/01/2022 Last updated : 01/05/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## December 2022
+
+### Updated articles
+
+- [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md)
+- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)
+- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Deploy F5 BIG-IP Virtual Edition VM in Azure](f5-bigip-deployment-guide.md)
+- [End-user experiences for applications](end-user-experiences.md)
+- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md)
## November 2022 ### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md) - [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md) - [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-azure-ad-integration.md)-
-## September 2022
-
-### New articles
--- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft](datawiza-azure-ad-sso-oracle-peoplesoft.md)-- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)-
-### Updated articles
--- [Manage app consent policies](manage-app-consent-policies.md)-- [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Previously updated : 10/20/2022 Last updated : 1/9/2023
The need for access to privileged Azure resource and Azure AD roles by employees
To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-Access Reviews for **Service Principals** requires an Entra Workload Identities Premium plan.
+Access Reviews for **Service Principals** requires an Entra Workload Identities Premium plan in addition to an Azure AD Premium P2 license.
- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
Previously updated : 12/10/2021 Last updated : 1/9/2023
In case the role expires, you can **extend** or **renew** these assignments.
## Plan the project
-When technology projects fail, itΓÇÖs typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that youΓÇÖre engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md#include-the-right-stakeholders) and that stakeholder roles in the project are well understood.
+When technology projects fail, itΓÇÖs typically because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that youΓÇÖre engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that stakeholder roles in the project are well understood.
### Plan a pilot
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Previously updated : 11/04/2022 Last updated : 01/05/2023
To access the sign-ins log for a tenant, you must have one of the following role
- Global Reader - Reports Reader
+>[!NOTE]
+>To see Conditional Access data in the sign-ins log, you need to be a user in one of the following roles:
+>Company Administrator, Global Reader, Security Administrator, Security Reader, or Conditional Access Administrator.
+ The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. It will take a couple of days for the data to show up in Graph after you upgrade to a premium license with no data activities before the upgrade. **To access the Azure AD sign-ins log preview:**
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD description: Describes how to plan and execute implementation of reporting and monitoring. --++
Last updated 12/19/2022 -
-# Customer intent: As an Azure AD administrator, I want to monitor logs and report on access to increase security
+# Customer intent: For an Azure AD administrator to monitor logs and report on access
-# Plan an Azure Active Directory reporting and monitoring deployment
+# Azure Active Directory reporting and monitoring deployment dependencies
-Your Azure Active Directory (Azure AD) reporting and monitoring solution depends on your legal, security, and operational requirements and your existing environment and processes. This article presents the various design options and guides you to the right deployment strategy.
+Your Azure Active Directory (Azure AD) reporting and monitoring solution depends on your legal, security, and operational requirements, and on your environment and processes. Use the following sections to learn about design options and deployment strategy.
-### Benefits of Azure AD reporting and monitoring
+## Benefits of Azure AD reporting and monitoring
-Azure AD reporting provides a comprehensive view and logs of Azure AD activity in your environment, including sign-in events, audit events, and changes to your directory.
+Azure AD reporting provides a view and logs of Azure AD activity in your environment: sign-in events, audit events, and changes to your directory.
-The provided data enables you to:
+Use the data to:
* determine how your apps and services are used. * detect potential risks affecting the health of your environment.
For detailed feature and licensing information, see the [Azure Active Directory
To deploy Azure AD monitoring and reporting you'll need a user who is a Global Administrator or Security Administrator for the Azure AD tenant.
-Depending on the final destination of your log data, you'll need one of the following:
-
+* [Azure Monitor data platform](../../azure-monitor/data-platform.md)
+* [Azure Monitor naming and terminology changes](../../azure-monitor/terminology.md)
+* [How long does Azure AD store reporting data?](./reference-reports-data-retention.md)
* An Azure storage account that you have `ListKeys` permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). * An Azure Event Hubs namespace to integrate with third-party SIEM solutions. * An Azure Log Analytics workspace to send logs to Azure Monitor logs.
Depending on the final destination of your log data, you'll need one of the foll
Reporting and monitoring are used to meet your business requirements, gain insights into usage patterns, and increase your organization's security posture. In this project, you'll define the audiences that will consume and monitor reports, and define your Azure AD monitoring architecture.
-### Engage the right stakeholders
+## Stakeholders, communications, and documentation
When technology projects fail, they typically do so due to mismatched expectations on effect, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md). Also ensure that stakeholder roles in the project are well understood by documenting the stakeholders and their project input and responsibilities.
The following roles can read Azure AD reports:
Learn More About [Azure AD Administrative Roles](../roles/permissions-reference.md). Always apply the concept of least privileges to reduce the risk of an account compromise. Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to further secure your organization.
-### Plan communications
+### Engage stakeholders
-Communication is critical to the success of any new service. Proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues.
+Successful projects align expectations, outcomes, and responsibilities. See, [Azure Active Directory deployment plans](../fundamentals/active-directory-deployment-plans.md). Document and communicate stakeholder roles that require input and accountability.
-### Document your current infrastructure and policies
+### Communications plan
-Your current infrastructure and policies will drive your reporting and monitoring design. Ensure that you know
+Tell your users when, and how, their experience will change. Provide contact information for support.
* What, if any, SIEM tools you're using. * Your Azure infrastructure, including existing storage accounts and monitoring being used.
Your current infrastructure and policies will drive your reporting and monitorin
To better prioritize the use cases and solutions, organize the options by "required for solution to meet business needs," "nice to have to meet business needs," and "not applicable."
-|Area |Description |
-|-|-|
-|Retention| **Log retention of more than 30 days**. ΓÇÄDue to legal or business requirements it's required to store audit logs and sign in logs of Azure AD longer than 30 days. |
-|Analytics| **The logs need to be searchable**. ΓÇÄThe stored logs need to be searchable with analytic tools. |
-| Operational Insights| **Insights for various teams**. The need to give access for different users to gain operational insights such as application usage, sign in errors, self-service usage, trends, etc. |
-| Security Insights| **Insights for various teams**. The need to give access for different users to gain operational insights such as application usage, sign in errors, self service usage, trends, etc. |
-| Integration in SIEM systems | **SIEM integration**. ΓÇÄThe need to integrate and stream Azure AD sign-in logs and audit logs to existing SIEM systems. |
+### Considerations
-### Choose a monitoring solution architecture
+* **Retention** - Log retention: store audit logs and sign in logs of Azure AD longer than 30 days
+* **Analytics** - Logs are searchable with analytic tools
+* **Operational and security insights** - Provide access to application usage, sign-in errors, self-service usage, trends, etc.
+* **SIEM integration** - Integrate and stream Azure AD sign-in logs and audit logs to SIEM systems
-With Azure AD monitoring, you can route your Azure AD activity logs to a system that best meets your business needs. You can then retain them for long-term reporting and analysis to gain insights into your environment, and integrate it with SIEM tools.
+### Monitoring solution architecture
-#### Decision flow chart![An image showing what is described in subsequent sections](media/reporting-deployment-plan/deploy-reporting-flow-diagram.png)
+With Azure AD monitoring, you can route Azure AD activity logs and retain them for long-term reporting and analysis to gain environment insights, and integrate them with SIEM tools. Use the following decision flow chart to help select an architecture.
-#### Archive logs in a storage account
+ ![Decision matrix for business-need architecture.](media/reporting-deployment-plan/deploy-reporting-flow-diagram.png)
-By routing logs to an Azure storage account, you can keep them for longer than the default retention period outlined in our [retention policies](./reference-reports-data-retention.md). Use this method if you need to archive your logs, but don't need to integrate them with an SIEM system, and don't need ongoing queries and analysis. You can still do on-demand searches.
+#### Archive logs in a storage account
-Learn how to [route data to your storage account](./quickstart-azure-monitor-route-logs-to-storage-account.md).
+You can keep logs longer than the default retention period by routing them to an Azure storage account.
-#### Send logs to Azure Monitor logs
+ > [!IMPORTANT]
+ > Use this archival method if there is no need to integrate logs with a SIEM system, or no need for ongoing queries and analysis. You can use on-demand searches.
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) consolidate monitoring data from different sources. It also provides a query language and analytics engine that gives you insights into the operation of your applications and use of resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor, and alert on collected data. Use this method when you don't have an existing SIEM solution that you want to send your data to directly but do want queries and analysis. Once your data is in Azure Monitor logs, you can then send it to event hub, and from there to a SIEM if you want to.
+Learn more:
-Learn how to [send data to Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
+* [How long does Azure AD store reporting data?](./reference-reports-data-retention.md)
+* [Tutorial: Archive Azure AD logs to an Azure storage account](./quickstart-azure-monitor-route-logs-to-storage-account.md)
#### Stream logs to storage and SIEM tools
-Routing logs to an Azure event hub enables integration with third-party SIEM tools. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM, to provide richer insights into your environment.
-
+* [Integrate Azure AD logs with Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
+* [Analyze Azure AD activity logs with Azure Monitor logs](./howto-analyze-activity-logs-log-analytics.md).
* Learn how to [stream logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). * Learn how to [Archive Azure AD logs to an Azure Storage account](./quickstart-azure-monitor-route-logs-to-storage-account.md). * [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md)
Routing logs to an Azure event hub enables integration with third-party SIEM too
- Consider implementing [Azure role-based access control](../../role-based-access-control/overview.md) - [Learn more about report retention policies](./reference-reports-data-retention.md). - [Analyze Azure AD activity logs with Azure Monitor logs](./howto-analyze-activity-logs-log-analytics.md)+
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
The SLA attainment is truncated at three places after the decimal. Numbers are n
| September | 99.999% | 99.998% | | October | 99.999% | 99.999% | | November | 99.998% | 99.999% |
-| December | 99.978% | |
+| December | 99.978% | 99.999% |
### How is Azure AD SLA measured?
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
Previously updated : 11/24/2022 Last updated : 01/05/2023
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Disable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | |
-> | Enable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | |
+> | Delete device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | [Intune Administrator](permissions-reference.md#intune-administrator) |
+> | Disable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | [Intune Administrator](permissions-reference.md#intune-administrator) |
+> | Enable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | [Intune Administrator](permissions-reference.md#intune-administrator) |
> | Read basic configuration | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Read BitLocker keys | [Security Reader](permissions-reference.md#security-reader) | [Password Administrator](permissions-reference.md#password-administrator)<br/>[Security Administrator](permissions-reference.md#security-administrator) |
+> | Read BitLocker keys | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator)<br/>[Intune Administrator](permissions-reference.md#intune-administrator)<br/>[Security Administrator](permissions-reference.md#security-administrator)<br/>[Security Reader](permissions-reference.md#security-reader) |
## Enterprise applications
active-directory Amazon Business Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-business-tutorial.md
Previously updated : 11/21/2022 Last updated : 12/21/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Metadata XML** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** value and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![The Certificate download link](common/copy-metadataurl.png)
1. On the **Set up Amazon Business** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot shows New user account defaults with Microsoft S S O, Requisitioner, and Next selected.](media/amazon-business-tutorial/group.png)
-1. On the **Upload your metadata file** wizard, click **Browse** to upload the **Metadata XML** file, which you have downloaded from the Azure portal and click **Upload**.
+1. On the **Upload your metadata file** wizard, choose the **Paste XML Link** option, paste the **App Federation Metadata URL** value that you copied from the Azure portal, and click **Validate**.
![Screenshot shows Upload your metadata file, which allows you to browse to an x m l file and upload it.](media/amazon-business-tutorial/connection-data.png)
+ >[!NOTE]
+ > Alternatively, you can also upload the **Federation Metadata XML** file by clicking on the **Upload XML File** option.
+ 1. After uploading the downloaded metadata file, the fields in the **Connection data** section will populate automatically. After that click **Next**. ![Screenshot shows Connection data, where you can specify an Azure A D Identifier, Login U R L, and SAML Signing Certificate.](media/amazon-business-tutorial/connection.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot shows Attribute mapping, where you can edit your Amazon data SAML attribute names.](media/amazon-business-tutorial/attribute-mapping.png)
-1. On the **Amazon connection data** wizard, click **Next**.
+1. On the **Amazon connection data** wizard, confirm that your IdP is configured and click **Continue**.
![Screenshot shows Amazon connection data, where you can click next to continue.](media/amazon-business-tutorial/amazon-connect.png)
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Amazon Business Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Amazon Business Sign-on URL where you can initiate the login flow.
-* Go to Amazon Business Sign-on URL directly and initiate the login flow from there.
+* Go to the Amazon Business Single Sign-on URL directly and initiate the login flow from there.
#### IDP initiated: * Click on **Test this application** in Azure portal and you should be automatically signed in to the Amazon Business for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Amazon Business tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Amazon Business for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Amazon Business tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Amazon Business for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Atlassian Cloud'
+ Title: 'Tutorial: Azure Active Directory SSO integration with Atlassian Cloud'
description: Learn how to configure single sign-on between Azure Active Directory and Atlassian Cloud.
Previously updated : 11/21/2022 Last updated : 01/06/2023
-# Tutorial: Integrate Atlassian Cloud with Azure Active Directory
+# Tutorial: Azure Active Directory SSO integration with Atlassian Cloud
In this tutorial, you'll learn how to integrate Atlassian Cloud with Azure Active Directory (Azure AD). When you integrate Atlassian Cloud with Azure AD, you can:
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box. 1. Select **Atlassian Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true). ## Configure and test Azure AD SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Atlassian Cloud Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Atlassian Cloud Sign-on URL where you can initiate the login flow.
* Go to Atlassian Cloud Sign-on URL directly and initiate the login flow from there.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Atlassian Cloud for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Atlassian Cloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Atlassian Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Atlassian Cloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Atlassian Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Canvas Lms Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canvas-lms-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/06/2023 # Tutorial: Azure AD SSO integration with Canvas
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Canvas Client support team](https://community.canvaslms.com/community/help) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. You can optionally verify the URL as sketched after this step.
- ![Edit SAML Signing Certificate](common/edit-certificate.png)
-
-6. In the **SAML Signing Certificate** section, copy the **THUMBPRINT** and save it on your computer.
-
- ![Copy Thumbprint value](common/copy-thumbprint.png)
-
-7. On the **Set up Canvas** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![The Certificate download link](common/copy-metadataurl.png)
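As an optional check on the step above, you can confirm that the copied **App Federation Metadata Url** resolves and returns SAML metadata before pasting it into Canvas. This is a minimal sketch; the tenant ID and application ID in the URL are placeholders you must replace with your own values.

```bash
# Placeholder values - substitute your own directory (tenant) ID and the application ID
# shown for the Canvas enterprise application. This URL shape matches the
# App Federation Metadata Url displayed in the SAML Signing Certificate section.
METADATA_URL="https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"

# Fetch the metadata and confirm it contains an EntityDescriptor element.
curl -s "$METADATA_URL" | grep -c "EntityDescriptor"
```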
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, log in to your Canvas company site as an administrator.
-2. Go to **Courses \> Managed Accounts \> Microsoft**.
-
- ![Canvas](./media/canvas-lms-tutorial/course.png "Canvas")
-
-3. In the navigation pane on the left, select **Authentication**, and then click **Add New SAML Config**.
-
- ![Authentication](./media/canvas-lms-tutorial/tools.png "Authentication")
-
-4. On the Current Integration page, perform the following steps:
-
- ![Current Integration](./media/canvas-lms-tutorial/save.png "Current Integration")
+2. Go to **Admin > Microsoft OneNote > Authentication**.
+3. Choose **SAML** as the authentication service.
- a. In **IdP Entity ID** textbox, paste the value of **Azure Ad Identifier** which you have copied from Azure portal.
+ ![Canvas](./media/canvas-lms-tutorial/admin.png "Canvas")
- b. In **Log On URL** textbox, paste the value of **Login URL** which you have copied from Azure portal .
+4. On the **Current Provider** page, perform the following steps:
- c. In **Log Out URL** textbox, paste the value of **Logout URL** which you have copied from Azure portal.
+ ![Current Integration](./media/canvas-lms-tutorial/current-provider.png "Current Integration")
- d. In **Change Password Link** textbox, paste the value of **Change Password URL** which you have copied from Azure portal.
+ a. In the **IdP Metadata URI** textbox, paste the **App Federation Metadata URL** value, which you have copied from the Azure portal.
- e. In **Certificate Fingerprint** textbox, paste the **Thumbprint** value of certificate which you have copied from Azure portal.
-
- f. From the **Login Attribute** list, select **NameID**.
-
- g. From the **Identifier Format** list, select **emailAddress**.
-
- h. Click **Save Authentication Settings**.
+ b. Click **Save**.
### Create Canvas test user
To enable Azure AD users to log in to Canvas, they must be provisioned into Canv
1. Log in to your **Canvas** tenant.
-2. Go to **Courses \> Managed Accounts \> Microsoft**.
-
- ![Canvas](./media/canvas-lms-tutorial/course.png "Canvas")
-
-3. Click **Users**.
+2. Go to **Admin > Microsoft OneNote > People**.
- ![Screenshot shows Canvas menu with Users selected.](./media/canvas-lms-tutorial/user.png "Users")
+3. Click **+People**.
-4. Click **Add New User**.
+4. On the Add a New User dialog page, perform the following steps:
- ![Screenshot shows the Add a new User button.](./media/canvas-lms-tutorial/add-user.png "Users")
-
-5. On the Add a New User dialog page, perform the following steps:
-
- ![Add User](./media/canvas-lms-tutorial/name.png "Add User")
+ ![Add User](./media/canvas-lms-tutorial/new-user.png "Add User")
a. In the **Full Name** textbox, enter the name of the user, like **BrittaSimon**. b. In the **Email** textbox, enter the email of the user, like **brittasimon\@contoso.com**.
- c. In the **Login** textbox, enter the userΓÇÖs Azure AD email address like **brittasimon\@contoso.com**.
-
- d. Select **Email the user about this account creation**.
-
- e. Click **Add User**.
+ c. Click **Add User**.
> [!NOTE] > You can use any other Canvas user account creation tools or APIs provided by Canvas to provision Azure AD user accounts.
To enable Azure AD users to log in to Canvas, they must be provisioned into Canv
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Canvas Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Canvas Sign on URL where you can initiate the login flow.
-* Go to Canvas Sign-on URL directly and initiate the login flow from there.
+* Go to Canvas Sign on URL directly and initiate the login flow from there.
* You can use Microsoft My Apps. When you click the Canvas tile in the My Apps, you should be automatically signed in to the Canvas for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
active-directory Cch Tagetik Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cch-tagetik-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with CCH Tagetik | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory SSO integration with CCH Tagetik'
description: Learn how to configure single sign-on between Azure Active Directory and CCH Tagetik.
Previously updated : 11/21/2022 Last updated : 01/06/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with CCH Tagetik
+# Tutorial: Azure Active Directory SSO integration with CCH Tagetik
In this tutorial, you'll learn how to integrate CCH Tagetik with Azure Active Directory (Azure AD). When you integrate CCH Tagetik with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode,perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.saastagetik.com/prod/5/`
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.saastagetik.com/prod/5/`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<CUSTOMER_NAME>.saastagetik.com/prod/5/`
+ `https://<CUSTOMER_NAME>.saastagetik.com/prod/`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [CCH Tagetik Client support team](mailto:tgk-dl-supportmembers@wolterskluwer.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up CCH Tagetik** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
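The `<CUSTOMER_NAME>` placeholder in the patterns above is the only part that varies per environment. As a hedged illustration (the customer name below is hypothetical, not a real endpoint), the three values line up as follows:

```bash
# Hypothetical customer name - replace with the value supplied by the CCH Tagetik support team.
CUSTOMER_NAME="contoso"

IDENTIFIER="https://${CUSTOMER_NAME}.saastagetik.com/prod/5/"
REPLY_URL="https://${CUSTOMER_NAME}.saastagetik.com/prod/5/"
SIGN_ON_URL="https://${CUSTOMER_NAME}.saastagetik.com/prod/"

printf 'Identifier:  %s\nReply URL:   %s\nSign-on URL: %s\n' "$IDENTIFIER" "$REPLY_URL" "$SIGN_ON_URL"
```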
### Create an Azure AD test user
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to CCH Tagetik Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to CCH Tagetik Sign-on URL where you can initiate the login flow.
* Go to CCH Tagetik Sign-on URL directly and initiate the login flow from there.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the CCH Tagetik for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the CCH Tagetik tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the CCH Tagetik for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the CCH Tagetik tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the CCH Tagetik for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Facebook Work Accounts Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md
Title: 'Tutorial: Configure Facebook Work Accounts for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Facebook Work Accounts. -
-writer: Zhchia
Previously updated : 11/21/2022 Last updated : 01/06/2023 # Tutorial: Configure Facebook Work Accounts for automatic user provisioning This tutorial describes the steps you need to perform in both Facebook Work Accounts and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Facebook Work Accounts](https://work.facebook.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Capabilities supported > [!div class="checklist"] > * Create users in Facebook Work Accounts > * Remove users in Facebook Work Accounts when they do not require access anymore
The scenario outlined in this tutorial assumes that you already have the followi
* An admin account in Work Accounts with the permission to change company settings and configure integrations. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. Determine what data to [map between Azure AD and Facebook Work Accounts](../app-provisioning/customize-application-attributes.md). ## Step 2. Add Facebook Work Accounts from the Azure AD application gallery Add Facebook Work Accounts from the Azure AD application gallery to start managing provisioning to Facebook Work Accounts. If you have previously set up Facebook Work Accounts for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 4. Configure automatic user provisioning to Facebook Work Accounts This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Facebook Work Accounts based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Facebook Work Accounts**.
+1. In the applications list, select **Facebook Work Accounts**.
- ![The Facebook Work Accounts link in the Applications list](common/all-applications.png)
+1. Select the **Provisioning** tab.
-3. Select the **Provisioning** tab.
+1. Set the **Provisioning Mode** to **Automatic**.
- ![Provision tab](common/provisioning.png)
+1. Under the **Admin Credentials** section, click on **Authorize**. You will be redirected to **Facebook Work Accounts**'s authorization page. Input your Facebook Work Accounts username and click on the **Continue** button. Click **Test Connection** to ensure Azure AD can connect to Facebook Work Accounts. If the connection fails, ensure your Facebook Work Accounts account has Admin permissions and try again.
-4. Set the **Provisioning Mode** to **Automatic**.
+ :::image type="content" source="media/facebook-work-accounts-provisioning-tutorial/azure-connect.png" alt-text="Screenshot shows the Facebook Work Accounts authorization page.":::
- ![Provisioning tab](common/provisioning-automatic.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-5. Under the **Admin Credentials** section, click on **Authorize**. You will be redirected to **Facebook Work Accounts**'s authorization page. Input your Facebook Work Accounts username and click on the **Continue** button. Click **Test Connection** to ensure Azure AD can connect to Facebook Work Accounts. If the connection fails, ensure your Facebook Work Accounts account has Admin permissions and try again.
+1. Select **Save**.
- :::image type="content" source="media/facebook-work-accounts-provisioning-tutorial/azure-connect.png" alt-text="OAuth Screen":::
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Facebook Work Accounts**.
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Select **Save**.
-
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Facebook Work Accounts**.
-
-9. Review the user attributes that are synchronized from Azure AD to Facebook Work Accounts in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Facebook Work Accounts for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Facebook Work Accounts API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Facebook Work Accounts in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Facebook Work Accounts for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Facebook Work Accounts API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-11. To enable the Azure AD provisioning service for Facebook Work Accounts, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Facebook Work Accounts, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+1. Define the users and/or groups that you would like to provision to Facebook Work Accounts by choosing the desired values in **Scope** in the **Settings** section.
-12. Define the users and/or groups that you would like to provision to Facebook Work Accounts by choosing the desired values in **Scope** in the **Settings** section.
+ ![Screenshot shows the Scope dropdown in the Settings section.](common/provisioning-scope.png)
- ![Provisioning Scope](common/provisioning-scope.png)
-
-13. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+1. When you are ready to provision, click **Save**.
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ## Step 5. Monitor your deployment+ Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
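For context on the **Matching** properties and the **Supported for filtering** column above: when the provisioning service looks for an existing account to update, it issues a SCIM 2.0 filter query against the matching attribute. The sketch below is a generic SCIM example with a placeholder endpoint and token (Azure AD obtains its own credentials through the **Authorize** step and performs the equivalent call itself); it is shown only to illustrate the mechanism.

```bash
# Generic SCIM 2.0 filter request (RFC 7644); endpoint and token are placeholders.
SCIM_BASE="https://scim.example.com/v2"
TOKEN="<bearer-token>"

# Look up a user by the matching attribute (userName in this sketch).
curl -s -G "$SCIM_BASE/Users" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode 'filter=userName eq "B.Simon@contoso.com"'
```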
active-directory Jfrog Artifactory Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jfrog-artifactory-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with JFrog Artifactory | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory SSO integration with JFrog Artifactory'
description: Learn how to configure single sign-on between Azure Active Directory and JFrog Artifactory.
Previously updated : 11/21/2022 Last updated : 01/06/2023
-# Tutorial: Integrate JFrog Artifactory with Azure Active Directory
+# Tutorial: Azure Active Directory SSO integration with JFrog Artifactory
In this tutorial, you'll learn how to integrate JFrog Artifactory with Azure Active Directory (Azure AD). When you integrate JFrog Artifactory with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern:
- - For Artifactory Self-hosted: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory Self-hosted: `https://<FQDN>/artifactory/webapp/saml/loginResponse`
- For Artifactory SaaS: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern:
- - For Artifactory Self-hosted: `https://<servername>.jfrog.io/<servername>/webapp/`
+ - For Artifactory Self-hosted: `https://<FQDN>/<servername>/webapp/`
- For Artifactory SaaS: `https://<servername>.jfrog.io/ui/login` > [!NOTE]
Follow these steps to enable Azure AD SSO in the Azure portal.
c. Click **Save**.
-4. In the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, locate the **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. In the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, locate the **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](./media/jfrog-artifactory-tutorial/certificate-base.png)
+ ![Screenshot shows the Certificate download link.](./media/jfrog-artifactory-tutorial/certificate-base.png "Certificate")
-6. Configure the Artifactory (SAML Service Provider Name) with the 'Identifier' field (see step 4). In the **Set up JFrog Artifactory** section, copy the appropriate URL(s) based on your requirement.
+1. Configure the Artifactory (SAML Service Provider Name) with the 'Identifier' field (see step 4). In the **Set up JFrog Artifactory** section, copy the appropriate URL(s) based on your requirement.
- - For Artifactory Self-hosted: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory Self-hosted: `https://<FQDN>/artifactory/webapp/saml/loginResponse`
- For Artifactory SaaS: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
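Before sending the downloaded **Certificate (Base64)** to the JFrog support team, you can sanity-check it locally with OpenSSL. This is a sketch; the file name is an assumption and may differ from what the portal produced.

```bash
# Inspect the Base64 (PEM-encoded) certificate downloaded from the SAML Signing Certificate section.
# Adjust the file name to match the downloaded file.
openssl x509 -in "JFrog Artifactory.cer" -noout -subject -issuer -dates
```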
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure JFrog Artifactory SSO
-To configure single sign-on on **JFrog Artifactory** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [JFrog Artifactory support team](https://support.jfrog.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **JFrog Artifactory** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [JFrog Artifactory support team](https://support.jfrog.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create JFrog Artifactory test user
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to JFrog Artifactory Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to JFrog Artifactory Sign-on URL where you can initiate the login flow.
* Go to JFrog Artifactory Sign-on URL directly and initiate the login flow from there.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the JFrog Artifactory for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the JFrog Artifactory tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the JFrog Artifactory for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the JFrog Artifactory tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the JFrog Artifactory for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Netpresenter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md
Title: 'Tutorial: Configure Netpresenter Next for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Netpresenter Next. -
-writer: Zhchia
Previously updated : 11/21/2022 Last updated : 01/06/2023 # Tutorial: Configure Netpresenter Next for automatic user provisioning
-This tutorial describes the steps you need to perform in both Netpresenter Next and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Netpresenter Next](https://www.Netpresenter.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-
+This tutorial describes the steps you need to perform in both Netpresenter Next and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Netpresenter Next](https://www.Netpresenter.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported > [!div class="checklist"] > * Create users in Netpresenter Next > * Remove users in Netpresenter Next when they do not require access anymore
The scenario outlined in this tutorial assumes that you already have the followi
* An administrator account with Netpresenter Next. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Netpresenter Next](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Netpresenter Next](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Netpresenter Next to support provisioning with Azure AD 1. Sign in to the Netpresenter Next with an administrator account.
-2. Click on cogwheel icon to go to settings page.
-3. In the settings page, click on **System** to open the submenu and click on **Azure AD**.
-4. Click on the **Generate Token** button.
-5. Save the **SCIM Endpoint URL** and **Token** at a secure place, you'll need it in the **Step 5**.
+1. Click on cogwheel icon to go to settings page.
+1. In the settings page, click on **System** to open the submenu and click on **Azure AD**.
+1. Click on the **Generate Token** button.
+1. Save the **SCIM Endpoint URL** and **Token** in a secure place; you'll need them in **Step 5**. You can optionally verify them as sketched after this list.
- ![Token and URL](media/netpresenter/get-token-and-url.png)
+ ![Screenshot shows the Token and URL values in Netpresenter Next.](media/netpresenter/get-token-and-url.png)
-1. **Optional:** Under **Sign in options**, 'Force sign in with Microsoft' can be enabled or disabled. By enabling it, users with an Azure AD account will lose the ability to sign in with their local account.
+1. **Optional:** Under **Sign in options**, you can enable or disable 'Force sign in with Microsoft'. If enabled, users with an Azure AD account will lose the ability to sign in with their local account.
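Optionally, you can verify the **SCIM Endpoint URL** and **Token** you saved earlier in this list before continuing. The sketch below assumes the endpoint accepts a standard SCIM 2.0 `GET /Users` request with a bearer token; both values are placeholders.

```bash
# Placeholders - use the SCIM Endpoint URL and Token generated on the Azure AD settings page above.
SCIM_URL="<scim-endpoint-url>"
TOKEN="<token>"

# A standard SCIM 2.0 request; an HTTP 200 response indicates the token is accepted.
curl -s -o /dev/null -w "HTTP %{http_code}\n" \
  -H "Authorization: Bearer $TOKEN" \
  "$SCIM_URL/Users?count=1"
```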
## Step 3. Add Netpresenter Next from the Azure AD application gallery
-Add Netpresenter Next from the Azure AD application gallery to start managing provisioning to Netpresenter Next. If you have previously setup Netpresenter Next for SSO, you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Netpresenter Next from the Azure AD application gallery to start managing provisioning to Netpresenter Next. If you have previously set up Netpresenter Next for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-## Step 4. Define who will be in scope for provisioning
+## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 5. Configure automatic user provisioning to Netpresenter Next This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Netpresenter Next based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Netpresenter Next**.
+1. In the applications list, select **Netpresenter Next**.
- ![The Netpresenter Next link in the Applications list](common/all-applications.png)
+1. Select the **Provisioning** tab.
-3. Select the **Provisioning** tab.
+1. Set the **Provisioning Mode** to **Automatic**.
- ![Provision tab](common/provisioning.png)
+1. Under the **Admin Credentials** section, input your Netpresenter Next Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Netpresenter Next. If the connection fails, ensure your Netpresenter Next account has Admin permissions and try again.
-4. Set the **Provisioning Mode** to **Automatic**.
+ ![Screenshot shows the fields for tenant URL and token.](common/provisioning-testconnection-tenanturltoken.png)
- ![Provisioning tab](common/provisioning-automatic.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-5. Under the **Admin Credentials** section, input your Netpresenter Next Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Netpresenter Next. If the connection fails, ensure your Netpresenter Next account has Admin permissions and try again.
+1. Select **Save**.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Netpresenter Next**.
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Select **Save**.
-
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Netpresenter Next**.
-
-9. Review the user attributes that are synchronized from Azure AD to Netpresenter Next in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Netpresenter Next for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Netpresenter Next API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Netpresenter Next in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Netpresenter Next for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Netpresenter Next API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|Required by Netpresenter Next|
|---|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|phoneNumbers[type eq "work"].value|String||
|phoneNumbers[type eq "mobile"].value|String||
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-11. To enable the Azure AD provisioning service for Netpresenter Next, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Netpresenter Next, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+1. Define the users and/or groups that you would like to provision to Netpresenter Next by choosing the desired values in **Scope** in the **Settings** section.
-12. Define the users and/or groups that you would like to provision to Netpresenter Next by choosing the desired values in **Scope** in the **Settings** section.
+ ![Screenshot shows the Scope dropdown in the Settings section.](common/provisioning-scope.png)
- ![Provisioning Scope](common/provisioning-scope.png)
+1. When you're ready to provision, click **Save**.
-13. When you'r ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
## Step 6. Monitor your deployment+ Once you've configured provisioning, use the following resources to monitor your deployment: 1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+1. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+1. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
active-directory Officespace Software Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/officespace-software-provisioning-tutorial.md
Before configuring and enabling automatic user provisioning, you should decide w
## Set up OfficeSpace Software for provisioning
-1. Sign in to your [OfficeSpace Software Admin Console](https://support.officespacesoftware.com/hc). Navigate to **Settings > Connectors**.
+1. Sign in to your [OfficeSpace Software Admin Console](https://support.officespacesoftware.com/s/). Navigate to **Settings > Connectors**.
![OfficeSpace Software Admin Console](media/officespace-software-provisioning-tutorial/settings.png)
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Opentext Fax Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-fax-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/05/2023
In this tutorial, you'll learn how to integrate OpenText XM Fax and XM SendSecur
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* OpenText XM Fax and XM SendSecure single sign-on (SSO) enabled subscription.
-* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+* Azure AD Cloud Application Administrator or Application Administrator role.
For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+* OpenText XM Fax and XM SendSecure subscription.
+* OpenText XM Fax and XM SendSecure administrator account.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* OpenText XM Fax and XM SendSecure supports **SP** initiated SSO.
+* OpenText XM Fax and XM SendSecure supports **SP-initiated** SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
Follow these steps to enable Azure AD SSO in the Azure portal.
| **Sign-on URL** | |-|
- | `https://login.xmedius.com/` |
- | `https://login.xmedius.eu/` |
- | `https://login.xmedius.ca/` |
+ | `https://login.xmedius.com/{account}` |
+ | `https://login.xmedius.eu/{account}` |
+ | `https://login.xmedius.ca/{account}` |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
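The `{account}` segment in the **Sign-on URL** patterns shown earlier is your XM Cloud account name. As a hedged illustration with a hypothetical account, the regional URLs work out as follows:

```bash
# Hypothetical account name - replace with your own XM Cloud account.
ACCOUNT="contoso"

# Regional Sign-on URL patterns from the Basic SAML Configuration section.
echo "https://login.xmedius.com/${ACCOUNT}"
echo "https://login.xmedius.eu/${ACCOUNT}"
echo "https://login.xmedius.ca/${ACCOUNT}"
```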
### Create an Azure AD test user
-In this section, you'll create a test user in the Azure portal called B.Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon:
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. 1. In the **User** properties, follow these steps: 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. In the **User name** field, enter the user name in the following format: username@companydomain.extension. For example, `B.Simon@contoso.com`.
1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. 1. Click **Create**. ### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenText XM Fax and XM SendSecure.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenText XM Fax and XM SendSecure:
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **OpenText XM Fax and XM SendSecure**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Provide the following required information:
- a. In the **Sign In URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+ a. In the **Issuer (Identity Provider)** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ b. In the **Sign In URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
- b. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **X.509 Signing Certificate** textbox.
+ c. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **X.509 Signing Certificate** textbox.
- c. click **Save**.
+ d. click **Save**.
> [!NOTE] > Keep the fail-safe URL (`https://login.[domain]/[account]/no-sso`) provided at the bottom of the SSO configuration section; it will allow you to log in using your XM Cloud account credentials if you lock yourself out after SSO activation. ### Create OpenText XM Fax and XM SendSecure test user
-In this section, you create a user called Britta Simon at OpenText XM Fax and XM SendSecure. Work with [OpenText XM Fax and XM SendSecure support team](mailto:support@opentext.com) to add the users in the OpenText XM Fax and XM SendSecure platform. Users must be created and activated before you use single sign-on.
+Create a user called Britta Simon at OpenText XM Fax and XM SendSecure. Make sure the email is set to "B.Simon@contoso.com".
+
+> [!NOTE]
+> Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
* Click on **Test this application** in Azure portal. This will redirect to OpenText XM Fax and XM SendSecure Sign-on URL where you can initiate the login flow. * Go to OpenText XM Fax and XM SendSecure Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the OpenText XM Fax and XM SendSecure tile in the My Apps, this will redirect to OpenText XM Fax and XM SendSecure Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the OpenText XM Fax and XM SendSecure tile in the My Apps portal, this will redirect to OpenText XM Fax and XM SendSecure Sign-on URL. For more information about the My Apps portal, see [Introduction to the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Tranxfer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tranxfer-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/05/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Tranxfer SSO
-To configure single sign-on on **Tranxfer** side, you need to send the **App Federation Metadata Url** to [Tranxfer support team](mailto:soporte@tranxfer.com). The support team will use the copied URLs to configure the single sign-on on the application.
+You'll need to log in to your Tranxfer application with the company administrator account.
+
+1. Go to **Settings -> SAML** and paste **App Federation Metadata Url** to **Metadata URL** field.
+1. If you want to give specific permissions to different user groups, you can match Azure AD groups to common **Tranxfer** permissions. To do so, fill in the Azure AD group ID for each permission (a lookup sketch follows this procedure):
+
+ a. SEND permission to send files.
+
+ b. RECEIVE to receive files.
+
+ c. SEND + RECEIVE both of the above.
+
+ d. ADMIN company administration permission but not sending nor receiving files.
+
+ e. FULL all of the above.
+
+ ![Screenshot shows Tranxfer SAML settings.](media/tranxfer-tutorial/tranxfer-saml-settings.png "Tranxfer SAML Settings")
+
+1. If you want to give any user of your organization the simple Send and Receive permission, no matter which groups they have, enable the **Empty groups with permission** option.
+1. If you only want to match permissions by groups but don't want to import Azure AD groups into Tranxfer groups, enable the **Disable import groups** option.
+
+If you find any problems, please contact the [Tranxfer support team](mailto:soporte@tranxfer.com). The support team will assist you in configuring single sign-on on the application.
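To find the Azure AD group IDs referenced in the permission fields above, you can use the Azure CLI. This is a sketch; the group name is a placeholder, and depending on your CLI version the object ID may be exposed as `id` (Microsoft Graph-based CLI) or `objectId` (older versions).

```bash
# Look up the object ID of an Azure AD group by display name (placeholder name).
az ad group show --group "Tranxfer-Senders" --query id -o tsv

# On older Azure CLI versions, query objectId instead:
# az ad group show --group "Tranxfer-Senders" --query objectId -o tsv
```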
### Create Tranxfer test user
active-directory Trend Micro Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trend-micro-tutorial.md
After you configure the Azure AD service and specify Azure AD as the user authen
1. Clear the browser of all cookies and then restart the browser. 1. Point your browser to the TMWS proxy server.
-For details, see [Traffic Forwarding Using PAC Files](https://docs.trendmicro.com/en-us/enterprise/trend-micro-web-security-online-help/administration_001/pac-files/traffic-forwarding-u.aspx#GUID-A4A83827-7A29-4596-B866-01ACCEDCC36B).
+For details, see [Traffic Forwarding Using PAC Files](https://docs.trendmicro.com/en-us/enterprise/trend-micro-web-security-online-help/administration/pac-files/traffic-forwarding-u.aspx).
1. Visit any internet website. TMWS will direct you to the TMWS captive portal.
active-directory Veracode Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veracode-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/05/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)**. Select **Download** to download the certificate and save it on your computer.
- ![Screenshot of SAML Signing Certificate section, with Download link highlighted](common/certificatebase64.png)
+ ![Screenshot of SAML Signing Certificate section, with Download link highlighted.](common/certificatebase64.png)
1. Veracode expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![Screenshot of User Attributes & Claims section](common/default-attributes.png)
+ ![Screenshot of User Attributes & Claims section.](common/default-attributes.png)
1. Veracode also expects a few more attributes to be passed back in the SAML response. These attributes are also pre-populated, but you can review them per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Veracode** section, copy the appropriate URL(s) based on your requirement.
- ![Screenshot of Set up Veracode section, with configuration URLs highlighted](common/copy-configuration-urls.png)
+ ![Screenshot of Set up Veracode section, with configuration URLs highlighted.](common/copy-configuration-urls.png)
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Veracode SSO
-1. In a different web browser window, sign in to your Veracode company site as an administrator.
+Notes:
+
+* These instructions assume you are using the new [Single Sign On/Just-in-Time Provisioning feature from Veracode](https://docs.veracode.com/r/Signing_On). To activate this feature if it is not already active, please contact Veracode Support.
+* These instructions are valid for all [Veracode regions](https://docs.veracode.com/r/Region_Domains_for_Veracode_APIs).
+
+1. In a different web browser window, sign in to your Veracode company site as an administrator.
1. From the menu on the top, select **Settings** > **Admin**.
- ![Screenshot of Veracode Administration, with Settings icon and Admin highlighted](./media/veracode-tutorial/admin.png "Administration")
+ ![Screenshot of Veracode Administration, with Settings icon and Admin highlighted.](./media/veracode-tutorial/admin.png "Administration")
1. Select the **SAML** tab.
-1. In the **Organization SAML Settings** section, perform the following steps:
+1. In the **SAML Certificate** section, perform the following steps:
- ![Screenshot of Organization SAML Settings section](./media/veracode-tutorial/saml.png "Administration")
+ ![Screenshot of Organization SAML Settings section.](./media/veracode-tutorial/saml.png "Administration")
a. For **Issuer**, paste the value of the **Azure AD Identifier** that you've copied from the Azure portal. b. For **Assertion Signing Certificate**, select **Choose File** to upload your downloaded certificate from the Azure portal.
- c. For **Self Registration**, select **Enable Self Registration**.
+ c. Note the values of the three URLs (**SAML Assertion URL**, **SAML Audience URL**, **Relay state URL**).
+
+ d. Click **Save**.
+
+1. Take the values of the **SAML Assertion URL**, **SAML Audience URL** and **Relay state URL** and update them in the Azure Active Directory settings for the Veracode integration.
+
+1. Select the **JIT Provisioning** tab.
+
+ ![Screenshot of JIT Provisioning tab, with various options highlighted.](./media/veracode-tutorial/just-in-time.png "JIT Provisioning")
-1. In the **Self Registration Settings** section, perform the following steps, and then select **Save**:
+1. In the **Organization Settings** section, toggle the **Configure Default Settings for Just-in-Time user provisioning** setting to **On**.
- ![Screenshot of Self Registration Settings section, with various options highlighted](./media/veracode-tutorial/save.png "Administration")
+1. In the **Basic Settings** section, for **User Data Updates**, select **Prefer Veracode User Data**.
- a. For **New User Activation**, select **No Activation Required**.
-1. In the **Access Settings** section, under **User Roles**, select from the following roles (for more information about Veracode user roles, see the [Veracode Documentation](https://docs.veracode.com/r/c_role_permissions)):
- b. For **User Data Updates**, select **Preference Veracode User Data**.
+ ![Screenshot of JIT Provisioning User Roles, with various options highlighted.](./media/veracode-tutorial/user-roles.png "JIT Provisioning")
- c. For **SAML Attribute Details**, select the following:
- * **User Roles**
    * **Policy Administrator**
    * **Reviewer**
    * **Security Lead**
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
    * **Submitter**
    * **Creator**
    * **All Scan Types**
- * **Team Memberships**
- * **Default Team**
### Create Veracode test user
active-directory Webce Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webce-tutorial.md
-# Azure Active Directory SSO integration with WebCE
+# Tutorial: Azure Active Directory SSO integration with WebCE
In this article, you'll learn how to integrate WebCE with Azure Active Directory (Azure AD). WebCE offers self-study online continuing education and pre-license training courses for a variety of professional licenses and designations. When you integrate WebCE with Azure AD, you can:
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type a URL using one of the following patterns:
-
- | **Identifier** |
- ||
- | `https://www.webce.com/<RootPortalFolder>` |
- | `https://www.webce.com` |
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.webce.com`
    b. In the **Reply URL** textbox, type a URL using the following pattern:
    `https://www.webce.com/<RootPortalFolder>/login/saml20`

    c. In the **Sign on URL** textbox, type a URL using the following pattern:
- `https://www.webce.com/<RootPortalFolder>/login/saml20`
+ `https://www.webce.com/<RootPortalFolder>/login`
> [!Note]
> These values are not real. Update them with the actual Identifier, Reply URL, and Sign on URL. Contact [WebCE Client support team](mailto:CustomerService@WebCE.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
In this section, a user called B.Simon is created in WebCE. WebCE supports just-
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to WebCE Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to WebCE Sign-on URL where you can initiate the login flow.
-* Go to WebCE Sign on URL directly and initiate the login flow from there.
+* Go to WebCE Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the WebCE tile in the My Apps, this will redirect to WebCE Sign on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the WebCE tile in My Apps, you'll be redirected to the WebCE Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Additional resources
active-directory Zenya Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/09/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
    b. Fill the **Identifier** box with the value that's displayed behind the label **EntityID** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.

    c. Fill the **Reply-URL** box with the value that's displayed behind the label **Reply URL** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.
+
+ d. Fill the **Logout-URL** box with the value that's displayed behind the label **Logout URL** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.
1. Zenya application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
active-directory Configure Azure Active Directory For Cmmc Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-cmmc-compliance.md
The remaining articles in this series provide guidance and links to resources, o
Learn more:
-* DoD CMMC website - [Office of the Under Secretary of Defense for Acquisition & Sustainment Cybersecurity Maturity Model Certification](https://www.acq.osd.mil/cmmc/https://docsupdatetracker.net/index.html)
-* Microsoft Download Center - [Microsoft Product Placemat for CMMC Level 3 (preview)](https://www.microsoft.com/download/details.aspx?id=102536)
+* DoD CMMC website - [Office of the Under Secretary of Defense for Acquisition & Sustainment Cybersecurity Maturity Model Certification](https://dodcio.defense.gov/CMMC/)
+* Microsoft Download Center - [Microsoft Product Placemat for CMMC 2.0 (preview)](https://www.microsoft.com/download/details.aspx?id=102536)
### Next steps
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The remainder of this article provides guidance for all of the domains except Ac
## Audit & Accountability
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| AU.L2-3.3.1<br><br>AU.L2-3.3.2 | All operations are audited in the Azure AD audit logs. Each audit log entry contains a userΓÇÖs immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AU.L2-3.3.4 | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
-| AU.L2-3.3.6 | Ensure Azure AD events are included in event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
-| AU.L2-3.3.8<br><br>AU.L2-3.3.9 | Azure AD logs are retained by default for 30 days. These logs are unable to modified or deleted and are only accessible to limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs)
+| AU.L2-3.3.1<br><br>**Practice statement:** Create and retain system audit logs and records to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit logs (for example, event types to be logged) to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity are specified;<br>[b.] the content of audit records needed to support monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity is defined;<br>[c.] audit records are created (generated);<br>[d.] audit records, once created, contain the defined content;<br>[e.] retention requirements for audit records are defined; and<br>[f.] audit records are retained as defined.<br><br>AU.L2-3.3.2<br><br>**Practice statement:** Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions.<br><br>**Objectives:**<br>Determine if:<br>[a.] the content of the audit records needed to support the ability to uniquely trace users to their actions is defined; and<br>[b.] audit records, once created, contain the defined content. | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.4<br><br>**Practice statement:** Alert if an audit logging process fails.<br><br>**Objectives:**<br>Determine if:<br>[a.] personnel or roles to be alerted if an audit logging process failure is identified;<br>[b.] types of audit logging process failures for which alert will be generated are defined; and<br>[c] identified personnel or roles are alerted in the event of an audit logging process failure. | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
+| AU.L2-3.3.6<br><br>**Practice statement:** Provide audit record reduction and report generation to support on-demand analysis and reporting.<br><br>**Objectives:**<br>Determine if:<br>[a.] an audit record reduction capability that supports on-demand analysis is provided; and<br>[b.] a report generation capability that supports on-demand reporting is provided. | Ensure Azure AD events are included in event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.8<br><br>**Practice statement:** Protect audit information and audit logging tools from unauthorized access, modification, and deletion.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit information is protected from unauthorized access;<br>[b.] audit information is protected from unauthorized modification;<br>[c.] audit information is protected from unauthorized deletion;<br>[d.] audit logging tools are protected from unauthorized access;<br>[e.] audit logging tools are protected from unauthorized modification; and<br>[f.] audit logging tools are protected from unauthorized deletion.<br><br>AU.L2-3.3.9<br><br>**Practice statement:** Limit management of audit logging functionality to a subset of privileged users.<br><br>**Objectives:**<br>Determine if:<br>[a.] a subset of privileged users granted access to manage audit logging functionality is defined; and<br>[b.] management of audit logging functionality is limited to the defined subset of privileged users. | Azure AD logs are retained by default for 30 days. These logs are unable to modified or deleted and are only accessible to limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs.md)
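For illustration, if the audit and sign-in logs are routed to a Log Analytics workspace (for example, one backing Microsoft Sentinel), they can be analyzed on demand from the command line. A minimal sketch, assuming the `log-analytics` Azure CLI extension is installed and using a placeholder workspace ID:

```azurecli-interactive
# Hypothetical example: query Azure AD sign-in logs already routed to a
# Log Analytics workspace (replace the workspace GUID with your own).
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "SigninLogs | where TimeGenerated > ago(1d) | summarize count() by ResultType" \
  --output table
```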
## Configuration Management (CM)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| CM.L2-3.4.2 | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
-| CM.L2-3.4.5 | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow) |
-| CM.L2-3.4.6 | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
-| CM.L2-3.4.7 | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) |
-| CM.L2-3.4.8 <br><br>CM.L2-3.4.9 | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device) |
+| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
+| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview.md)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md) |
+| CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
+| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.]essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.d)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
+| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md) |
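As an illustration of the Conditional Access guidance above, the following hedged sketch creates a report-only policy that requires a compliant or hybrid Azure AD joined device by calling Microsoft Graph with `az rest`. The policy name and scope are placeholders, and the caller needs the `Policy.ReadWrite.ConditionalAccess` permission:

```azurecli-interactive
# Hypothetical example: create a report-only Conditional Access policy that
# requires a compliant or hybrid Azure AD joined device, via Microsoft Graph.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require compliant or hybrid joined device",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] }
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["compliantDevice", "domainJoinedDevice"]
    }
  }'
```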
## Incident Response (IR)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IR.L2-3.6.1 | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br><br>**SIEM integrations**<br>[Microsoft Sentinel : Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory)[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| IR.L2-3.6.1<br><br>**Practice statement:** Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.<br><br>**Objectives:**<br>Determine if:<br>[a.] an operational incident-handling capability is established;<br>[b.] the operational incident-handling capability includes preparation;<br>[c.] the operational incident-handling capability includes detection;<br>[d.] the operational incident-handling capability includes analysis;<br>[e.] the operational incident-handling capability includes containment;<br>[f.] the operational incident-handling capability includes recovery; and<br>[g.] the operational incident-handling capability includes user response activities. | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel : Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory.md)[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
## Maintenance (MA)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| MA.L2-3.7.5 | Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
-| MP.L2-3.8.7 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MA.L2-3.7.5<br><br>**Practice statement:** Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete.<br><br>**Objectives:**<br>Determine if:<br>[a.] multifactor authentication is used to establish nonlocal maintenance sessions via external network connections; and<br>[b.] nonlocal maintenance sessions established via external network connections are terminated when nonlocal maintenance is complete.| Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
+| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant.md)<br>[Require hybrid Azure AD joined device](/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| PS.L2-3.9.2 | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access) |
+| PS.L2-3.9.2<br><br>**Practice statement:** Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy and/or process for terminating system access and any credentials coincident with personnel actions is established;<br>[b.] system access and credentials are terminated consistent with personnel actions such as termination or transfer; and<br>[c] the system is protected during and after personnel transfer actions. | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access.md) |
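As a rough sketch of the termination guidance above, an account can be disabled and its existing sessions revoked from the command line; the user principal name below is a placeholder:

```azurecli-interactive
# Hypothetical example: as part of a termination workflow, disable the account
# and revoke all refresh tokens and existing sessions for the user.
az ad user update --id "user@contoso.com" --account-enabled false

az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/users/user@contoso.com/revokeSignInSessions"
```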
## System and Communications Protection (SC)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| SC.L2-3.13.3 | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktops.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>9-20 check split tunneling language. |
-| SC.L2-3.13.13 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality from system management functionality.<br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktops.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices.md)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
+| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md) |
+| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
## System and Information Integrity (SI)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| SI.L2-3.14.7 | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7<br><br>**Practice statement:** Identify unauthorized use of organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Microsoft Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require that an Intrusion Detection/Protection system (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
### Next steps
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
An Azure Kubernetes Service (AKS) cluster configured with API Server VNet Integration (Preview) projects the API server endpoint directly into a delegated subnet in the VNet where AKS is deployed. API Server VNet Integration enables network communication between the API server and the cluster nodes without requiring a private link or tunnel. The API server is available behind an internal load balancer VIP in the delegated subnet, which the nodes are configured to use. By using API Server VNet Integration, you can ensure network traffic between your API server and your node pools remains on the private network only.

## API server connectivity

The control plane or API server is in an Azure Kubernetes Service (AKS)-managed Azure subscription. A customer's cluster or node pool is in the customer's subscription. The server and the virtual machines that make up the cluster nodes can communicate with each other through the API server VIP and pod IPs that are projected into the delegated subnet.
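As a rough illustration of how a cluster would be created with this capability, the following sketch uses the `--enable-apiserver-vnet-integration` and `--apiserver-subnet-id` flags from the aks-preview extension; treat the flag names and the subnet layout as assumptions that may change while the feature is in preview:

```azurecli-interactive
# Hypothetical sketch: create a cluster with API Server VNet Integration.
# The delegated subnet must exist and be delegated to the AKS control plane.
az aks create -n <cluster-name> -g <resource-group> -l <region> \
  --network-plugin azure \
  --enable-apiserver-vnet-integration \
  --apiserver-subnet-id <apiserver-delegated-subnet-resource-id>
```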
API Server VNet Integration is available in all global Azure regions except the
* Azure CLI with aks-preview extension 0.5.97 or later.
* If using ARM or the REST API, the AKS API version must be 2022-04-02-preview or later.
-### Install the aks-preview CLI extension
+## Install the aks-preview Azure CLI extension
-```azurecli-interactive
-# Install the aks-preview extension
+
+To install the aks-preview extension, run the following command:
+
+```azurecli
az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
-# Update the extension to make sure you have the latest version installed
+```azurecli
az extension update --name aks-preview
```
-### Register the `EnableAPIServerVnetIntegrationPreview` preview feature
-
-To create an AKS cluster with API Server VNet Integration, you must enable the `EnableAPIServerVnetIntegrationPreview` feature flag on your subscription.
+## Register the 'EnableAPIServerVnetIntegrationPreview' feature flag
-Register the `EnableAPIServerVnetIntegrationPreview` feature flag by using the `az feature register` command, as shown in the following example:
+Register the `EnableAPIServerVnetIntegrationPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
az feature register --namespace "Microsoft.ContainerService" --name "EnableAPIServerVnetIntegrationPreview"
```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAPIServerVnetIntegrationPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "EnableAPIServerVnetIntegrationPreview"
```
-When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive
az provider register --namespace Microsoft.ContainerService
```
az aks update -n <cluster-name> \
For associated best practices, see [Best practices for network connectivity and security in AKS][operator-best-practices-network]. <!-- LINKS - internal -->
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
[private-link-service]: ../private-link/private-link-service-overview.md#limitations [private-endpoint-service]: ../private-link/private-endpoint-overview.md [virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Az
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/30/2022 Last updated : 12/27/2022
Mounting Azure Blob storage as a file system into a container or pod, enables yo
The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver, the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.

> [!NOTE]
-> Azure Blob CSI driver only supports NFS 3.0 protocol for Kubernetes versions 1.25 (preview) on AKS.
+> Azure Blob CSI driver only supports NFS 3.0 protocol for Kubernetes versions 1.25 on AKS.
To create an AKS cluster with CSI drivers support, see [CSI drivers on AKS][csi-drivers-aks]. To learn more about the differences in access between each of the Azure storage types using the NFS protocol, see [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS][compare-access-with-nfs].
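If the driver is not yet enabled, the following is a minimal sketch for turning it on for an existing cluster; the `--enable-blob-driver` flag is assumed from the aks-preview/Azure CLI tooling and may vary by CLI version:

```azurecli-interactive
# Hypothetical example: enable the Azure Blob storage CSI driver on an
# existing AKS cluster (flag availability depends on your CLI/extension version).
az aks update --name <cluster-name> --resource-group <resource-group> --enable-blob-driver
```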
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Register the `AzureOverlayPreview` feature flag by using the [az feature registe
```azurecli-interactive
az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"
```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AzureOverlayPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
The following steps create a new virtual network with a subnet for the cluster n
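The steps elided above cover creating the virtual network and the overlay cluster. As a minimal sketch (the names and address ranges below are placeholders, not values from this article):

```azurecli-interactive
# Hypothetical sketch: create a virtual network with a node subnet, then an
# AKS cluster that uses Azure CNI in overlay mode with a private pod CIDR.
az network vnet create -g <resource-group> -n <vnet-name> \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name nodesubnet --subnet-prefixes 10.240.0.0/16

az aks create -n <cluster-name> -g <resource-group> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/nodesubnet"
```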
## Next steps
-To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
+To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
+
+<!-- LINKS - internal -->
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
By making use of eBPF programs loaded into the Linux kernel and a more efficient
- Better observability of cluster traffic
- Support for larger clusters (more nodes, pods, and services)

## IP Address Management (IPAM) with Azure CNI Powered by Cilium

Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs:
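As a rough sketch of the two options, the commands below assume the preview `--enable-ebpf-dataplane` flag exposed by the aks-preview extension; the exact flag name and requirements may differ while this feature is in preview:

```azurecli-interactive
# Hypothetical sketch, option 1: overlay network (pod IPs from a private CIDR).
az aks create -n <cluster-name> -g <resource-group> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --enable-ebpf-dataplane

# Hypothetical sketch, option 2: pod IPs assigned from a dedicated pod subnet.
az aks create -n <cluster-name> -g <resource-group> \
  --network-plugin azure \
  --vnet-subnet-id <node-subnet-resource-id> \
  --pod-subnet-id <pod-subnet-resource-id> \
  --enable-ebpf-dataplane
```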
Azure CNI powered by Cilium currently has the following limitations:
* Azure CLI with aks-preview extension 0.5.109 or later. * If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
-### Install the aks-preview CLI extension
+## Install the aks-preview Azure CLI extension
-```azurecli-interactive
-# Install the aks-preview extension
+
+To install the aks-preview extension, run the following command:
+
+```azurecli
az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
-# Update the extension to make sure you have the latest version installed
+```azurecli
az extension update --name aks-preview ```
-### Register the `CiliumDataplanePreview` preview feature
-
-To create an AKS cluster with Azure CNI powered by Cilium, you must enable the `CiliumDataplanePreview` feature flag on your subscription.
+## Register the 'CiliumDataplanePreview' feature flag
-Register the `CiliumDataplanePreview` feature flag by using the `az feature register` command, as shown in the following example:
+Register the `CiliumDataplanePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive az feature register --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview" ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CiliumDataplanePreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
```
-When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
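As a sketch only: once registration completes, cluster creation in this preview is expected to pass a Cilium dataplane option through the aks-preview extension. The `--enable-cilium-dataplane` flag, the resource names, and the pod CIDR below are assumptions for illustration rather than confirmed syntax:

```azurecli
az aks create \
    --name myCiliumCluster \
    --resource-group myResourceGroup \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --enable-cilium-dataplane
```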
Learn more about networking in AKS in the following articles:
[aks-ingress-static-tls]: ingress-static-ip.md [aks-http-app-routing]: http-application-routing.md [aks-ingress-internal]: ingress-internal-ip.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
description: Learn how to create a static persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 07/21/2022 Last updated : 12/27/2022
For more information on Kubernetes volumes, see [Storage options for application
- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support]. -- [Enable the Blob storage CSI driver][enable-blob-csi-driver] (preview) on your AKS cluster.
+- [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster.
## Static provisioning parameters |Name | Description | Example | Mandatory | Default value| | | | | | |
+|volumeHandle | Specify a value the driver can use to uniquely identify the storage blob container in the cluster. | A recommended way to produce a unique value is to combine the globally unique storage account name and container name: {account-name}_{container-name}. Note: The # character is reserved for internal use and can't be used in a volume handle. | Yes ||
|volumeAttributes.resourceGroup | Specify Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.| |volumeAttributes.storageAccount | Specify existing Azure storage account name. | storageAccountName | Yes || |volumeAttributes.containerName | Specify existing container name. | container | Yes ||
The following example demonstrates how to mount a Blob storage container as a pe
csi: driver: blob.csi.azure.com readOnly: false
- # make sure this volumeid is unique in the cluster
- # `#` is not allowed in self defined volumeHandle
+ # make sure volumeid is unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
volumeHandle: unique-volumeid volumeAttributes: resourceGroup: resourceGroupName
Kubernetes needs credentials to access the Blob storage container created earlie
csi: driver: blob.csi.azure.com readOnly: false
- # make sure this volumeid is unique in the cluster
- # `#` is not allowed in self defined volumeHandle
+ # make sure volumeid is unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
volumeHandle: unique-volumeid volumeAttributes: containerName: containerName
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
The output of the commands resembles the following example:
[nfs-overview]:/windows-server/storage/nfs/nfs-overview [kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec [csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md
-[data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/master/storage/share.go
+[data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/internal/shared/shared.go
[vhd-disk-feature]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/disk <!-- LINKS - internal -->
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 12/12/2022 Last updated : 12/26/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
spec:
volumeAttributes: secretName: azure-secret # required shareName: aksshare # required
- mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30" # optional
+ mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
``` Use the `kubectl` command to create the pod.
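For example, assuming the manifest above is saved as *azure-files-pod.yaml* (a hypothetical file name):

```console
kubectl apply -f azure-files-pod.yaml
```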
spec:
csi: driver: file.csi.azure.com readOnly: false
- volumeHandle: unique-volumeid # make sure this volumeid is unique in the cluster
+ volumeHandle: unique-volumeid # make sure volumeid is unique for every identical share in the cluster
volumeAttributes: resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as agent node shareName: aksshare
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
For more information on core Kubernetes and AKS concepts, see the following arti
[support-policies]: support-policies.md [limit-egress]: limit-egress-traffic.md [k8s-ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
-[nginx-ingress]: /ingress-basic.md
+[nginx-ingress]: ingress-basic.md
[ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29. [nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md [azure-cni-aks]: /configure-azure-cni.md
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
The AKS managed `kube-proxy` DaemonSet can also be disabled entirely if that is desired to support [bring-your-own CNI][aks-byo-cni]. - ## Prerequisites * Azure CLI with aks-preview extension 0.5.105 or later. * If using ARM or the REST API, the AKS API version must be 2022-08-02-preview or later.
-### Install the aks-preview CLI extension
+## Install the aks-preview Azure CLI extension
-```azurecli-interactive
-# Install the aks-preview extension
+
+To install the aks-preview extension, run the following command:
+
+```azurecli
az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
-# Update the extension to make sure you have the latest version installed
+```azurecli
az extension update --name aks-preview ```
-### Register the `KubeProxyConfigurationPreview` preview feature
-
-To create an AKS cluster with custom `kube-proxy` configuration, you must enable the `KubeProxyConfigurationPreview` feature flag on your subscription.
+## Register the 'KubeProxyConfigurationPreview' feature flag
-Register the `KubeProxyConfigurationPreview` feature flag by using the `az feature register` command, as shown in the following example:
+Register the `KubeProxyConfigurationPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive az feature register --namespace "Microsoft.ContainerService" --name "KubeProxyConfigurationPreview" ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/KubeProxyConfigurationPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "KubeProxyConfigurationPreview"
```
-When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
Learn more about Kubernetes services at the [Kubernetes services documentation][
[aks-schema-kubeproxyconfig]: /azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-bicep#containerservicenetworkprofilekubeproxyconfig <!-- LINKS - Internal -->
-[aks-byo-cni]: use-byo-cni.md
+[aks-byo-cni]: use-byo-cni.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
AKS clusters can now be deployed in a dual-stack (using both IPv4 and IPv6 addre
This article shows you how to use dual-stack networking with an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts]. - ## Limitations > [!NOTE] > Dual-stack kubenet networking is currently not available in sovereign clouds. This note will be removed when rollout is complete.
This article shows you how to use dual-stack networking with an AKS cluster. For
* Azure CLI with the `aks-preview` extension 0.5.48 or newer. * If using Azure Resource Manager templates, schema version 2021-10-01 is required.
-### Register the `AKS-EnableDualStack` preview feature
+## Install the aks-preview Azure CLI extension
-To create an AKS dual-stack cluster, you must enable the `AKS-EnableDualStack` feature flag on your subscription.
-Register the `AKS-EnableDualStack` feature flag by using the `az feature register` command, as shown in the following example:
+To install the aks-preview extension, run the following command:
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
+```azurecli
+az extension add --name aks-preview
```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+Run the following command to update to the latest version of the extension released:
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-EnableDualStack')].{Name:name,State:properties.state}"
+```azurecli
+az extension update --name aks-preview
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+## Register the 'AKS-EnableDualStack' feature flag
+
+Register the `AKS-EnableDualStack` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
```
-### Install the aks-preview CLI extension
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"
+```
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
``` ## Overview of dual-stack networking in Kubernetes
curl -s "http://[${SERVICE_IP}]" | head -n5
[express-route]: ../expressroute/expressroute-introduction.md [network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
Custom certificate authorities (CAs) allow you to establish trust between your A
This feature is applied per nodepool, so new and existing node pools must be configured to enable this feature. - ## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). * [Azure CLI installed][azure-cli-install] (version 2.43.0 or greater). * A base64 encoded certificate string or a text file with certificate.
-### Limitations
+## Limitations
This feature isn't currently supported for Windows node pools.
-### Install the `aks-preview` extension
+## Install the aks-preview Azure CLI extension
+
-You also need the *aks-preview* Azure CLI extensions version 0.5.119 or later. Install the *aks-preview* extension by using the [az extension add][az-extension-add] command, or install any available updates by using the [az extension update][az-extension-update] command.
+To install the aks-preview extension, run the following command:
```azurecli
-# Install the aks-preview extension
az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
-# Update the extension to make sure you have the latest version installed
+```azurecli
az extension update --name aks-preview ```
-### Register the `CustomCATrustPreview` preview feature
+## Register the 'CustomCATrustPreview' feature flag
-Register the `CustomCATrustPreview` feature flag by using the [az feature register][az-feature-register] command:
+Register the `CustomCATrustPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview" ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-```azurecli
-az feature list --query "[?contains(name, 'Microsoft.ContainerService/CustomCATrustPreview')].{Name:name,State:properties.state}" -o table
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
```
-Refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-```azurecli
+```azurecli-interactive
az provider register --namespace Microsoft.ContainerService ```
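After the provider registration completes, a rough sketch of enabling the feature on a new node pool follows. The cluster and node pool names are placeholders, and `--enable-custom-ca-trust` is the aks-preview flag this preview is assumed to use:

```azurecli
az aks nodepool add \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name mycustomcanodepool \
    --enable-custom-ca-trust
```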
For more information on AKS security best practices, see [Best practices for clu
[az-aks-nodepool-update]: /cli/azure/aks#az-aks-update [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update
-[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-show]: /cli/azure/feature#az-feature-show
[az-feature-register]: /cli/azure/feature#az-feature-register [az-provider-register]: /cli/azure/provider#az-provider-register
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Any patch, including security patches, is automatically applied to the AKS clust
[node-updates-kured]: node-updates-kured.md [aks-preview-cli]: /cli/azure/aks [az-aks-create]: /cli/azure/aks#az-aks-create
-[aks-rm-template]: /azure/templates/microsoft.containerservice/2019-06-01/managedclusters
+[aks-rm-template]: /azure/templates/microsoft.containerservice/2022-09-01/managedclusters
[aks-cluster-autoscaler]: cluster-autoscaler.md [nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [aks-windows-cli]: windows-container-cli.md
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
There are two options for adding the NVIDIA device plugin:
AKS provides a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
-Register the `GPUDedicatedVHDPreview` feature:
+
+First, install the aks-preview Azure CLI extension by running the following command:
```azurecli
-az feature register --name GPUDedicatedVHDPreview --namespace Microsoft.ContainerService
+az extension add --name aks-preview
```
-It might take several minutes for the status to show as **Registered**. You can check the registration status by using the [az feature list](/cli/azure/feature#az-feature-list) command:
+Run the following command to update to the latest version of the extension released:
```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/GPUDedicatedVHDPreview')].{Name:name,State:properties.state}"
+az extension update --name aks-preview
```
-When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the [az provider register](/cli/azure/provider#az-provider-register) command:
+Then, register the `GPUDedicatedVHDPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-```azurecli
-az provider register --namespace Microsoft.ContainerService
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
```
-To install the aks-preview CLI extension, use the following Azure CLI commands:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-```azurecli
-az extension add --name aks-preview
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
```
-To update the aks-preview CLI extension, use the following Azure CLI commands:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-```azurecli
-az extension update --name aks-preview
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
``` ## Add a node pool for GPU nodes
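A minimal sketch of adding such a node pool follows. The cluster name, node pool name, and VM size are placeholders, and the `UseGPUDedicatedVHD=true` custom header is assumed to be the mechanism this preview uses to select the specialized image:

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunp \
    --node-count 1 \
    --node-vm-size Standard_NC6s_v3 \
    --aks-custom-headers UseGPUDedicatedVHD=true
```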
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md [aks-container-insights]: monitor-aks.md#container-insights [advanced-scheduler-aks]: /aks/operator-best-practices-advanced-scheduler.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
description: Use the HTTP proxy configuration feature for Azure Kubernetes Servi
Previously updated : 05/23/2022 Last updated : 01/09/2023
Some more complex solutions may require creating a chain of trust to establish s
## Limitations and other details The following scenarios are **not** supported:+ - Different proxy configurations per node pool - Updating proxy settings post cluster creation - User/Password authentication - Custom CAs for API server communication - Windows-based clusters - Node pools using Virtual Machine Availability Sets (VMAS)
+- Using `*` as a wildcard attached to a domain suffix for noProxy
By default, *httpProxy*, *httpsProxy*, and *trustedCa* have no value. ## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
-* Latest version of [Azure CLI installed](/cli/azure/install-azure-cli).
+The latest version of the Azure CLI. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-## Configuring an HTTP proxy using Azure CLI
+## Configuring an HTTP proxy using the Azure CLI
Using AKS with an HTTP proxy is done at cluster creation, using the [az aks create][az-aks-create] command and passing in configuration as a JSON file.
The schema for the config file looks like this:
} ```
-`httpProxy`: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be `http`.
-`httpsProxy`: A proxy URL to use for creating HTTPS connections outside the cluster. If this is not specified, then `httpProxy` is used for both HTTP and HTTPS connections.
-`noProxy`: A list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying.
-`trustedCa`: A string containing the `base64 encoded` alternative CA certificate content. For now we only support `PEM` format. Another thing to note is that, for compatibility with Go-based components that are part of the Kubernetes system, the certificate MUST support `Subject Alternative Names(SANs)` instead of the deprecated Common Name certs.
+* `httpProxy`: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be `http`.
+* `httpsProxy`: A proxy URL to use for creating HTTPS connections outside the cluster. If this isn't specified, then `httpProxy` is used for both HTTP and HTTPS connections.
+* `noProxy`: A list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying.
+* `trustedCa`: A string containing the `base64 encoded` alternative CA certificate content. Currently only the `PEM` format is supported.
+
+> [!IMPORTANT]
+> For compatibility with Go-based components that are part of the Kubernetes system, the certificate **must** support `Subject Alternative Names(SANs)` instead of the deprecated Common Name certs.
Example input:
-Note the CA cert should be the base64 encoded string of the PEM format cert content.
+
+> [!NOTE]
+> The CA certificate should be the base64 encoded string of the PEM format cert content.
```json {
Note the CA cert should be the base64 encoded string of the PEM format cert cont
} ```
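For example, on a Linux system with GNU coreutils, one way to produce that single-line string from a PEM file (the file name is hypothetical) is:

```console
# emit the certificate as one unwrapped base64 line
base64 -w 0 proxy-ca.pem
```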
-Create a file and provide values for *httpProxy*, *httpsProxy*, and *noProxy*. If your environment requires it, also provide a *trustedCa* value. Next, deploy a cluster, passing in your filename via the `http-proxy-config` flag.
+Create a file and provide values for *httpProxy*, *httpsProxy*, and *noProxy*. If your environment requires it, provide a value for *trustedCa*. Next, deploy a cluster, passing in your filename using the `http-proxy-config` flag.
```azurecli az aks create -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-config.json
Your cluster will initialize with the HTTP proxy configured on the nodes.
## Configuring an HTTP proxy using Azure Resource Manager (ARM) templates
-Deploying an AKS cluster with an HTTP proxy configured via ARM template is straightforward. The same schema used for CLI deployment exists in the `Microsoft.ContainerService/managedClusters` definition under properties:
+Deploying an AKS cluster with an HTTP proxy configured using an ARM template is straightforward. The same schema used for CLI deployment exists in the `Microsoft.ContainerService/managedClusters` definition under properties:
```json "properties": {
Deploying an AKS cluster with an HTTP proxy configured via ARM template is strai
} ```
-In your template, provide values for *httpProxy*, *httpsProxy*, and *noProxy*. If necessary, also provide a value for `*trustedCa*. Deploy the template, and your cluster should initialize with your HTTP proxy configured on the nodes.
+In your template, provide values for *httpProxy*, *httpsProxy*, and *noProxy*. If necessary, provide a value for *trustedCa*. Deploy the template, and your cluster should initialize with your HTTP proxy configured on the nodes.
## Handling CA rollover
-Values for *httpProxy*, *httpsProxy*, and *noProxy* cannot be changed after cluster creation. However, to support rolling CA certs, the value for *trustedCa* can be changed and applied to the cluster with the [az aks update][az-aks-update] command.
+Values for *httpProxy*, *httpsProxy*, and *noProxy* can't be changed after cluster creation. However, to support rolling CA certs, the value for *trustedCa* can be changed and applied to the cluster with the [az aks update][az-aks-update] command.
-For example, assuming a new file has been created with the base64 encoded string of the new CA cert called *aks-proxy-config-2.json*, the following action will update the cluster:
+For example, assuming a new file has been created with the base64 encoded string of the new CA cert called *aks-proxy-config-2.json*, the following action updates the cluster:
```azurecli az aks update -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-config-2.json
az aks update -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-co
## Monitoring add-on configuration
-When using the HTTP proxy with the Monitoring add-on, the following configurations are supported:
+The HTTP proxy with the Monitoring add-on supports the following configurations:
- Outbound proxy without authentication - Outbound proxy with username & password authentication - Outbound proxy with trusted cert for Log Analytics endpoint
-The following configurations are not supported:
+The following configurations aren't supported:
- - The Custom Metrics and Recommended Alerts features are not supported when using proxy with trusted cert
- - Outbound proxy is not supported with Azure Monitor Private Link Scope (AMPLS)
+ - The Custom Metrics and Recommended Alerts features aren't supported when you use a proxy with trusted certificates
+ - Outbound proxy isn't supported with Azure Monitor Private Link Scope (AMPLS)
## Next steps-- For more on the network requirements of AKS clusters, see [control egress traffic for cluster nodes in AKS][aks-egress].
+For more information regarding the network requirements of AKS clusters, see [control egress traffic for cluster nodes in AKS][aks-egress].
<!-- LINKS - internal --> [aks-egress]: ./limit-egress-traffic.md
The following configurations are not supported:
[az-provider-register]: /cli/azure/provider#az_provider_register [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az-extension-update
+[install-azure-cli]: /cli/azure/install-azure-cli
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
It's common to use pipelines to build and deploy images on Azure Kubernetes Serv
### [Azure CLI](#tab/azure-cli)
-Register the `EnableImageCleanerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+First, install the aks-preview extension by running the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+Then register the `EnableImageCleanerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive az feature register --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview" ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableImageCleanerPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
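Once the provider registration completes, a sketch of enabling the feature on an existing cluster follows. The cluster and resource group names are placeholders, and `--enable-image-cleaner` is the aks-preview flag assumed here:

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-image-cleaner
```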
And apply it to the cluster:
kubectl apply -f image-list.yml ```
-A job named `eraser-aks-xxx`will be triggerred which causes ImageCleaner to remove the desired images from all nodes.
+A job named `eraser-aks-xxx` will be triggered, which causes ImageCleaner to remove the desired images from all nodes.
## Disable ImageCleaner
The deletion logs are stored in the `image-cleaner-kind-worker` pods. You can ch
[az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-update]: /cli/azure/aks#az_aks_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-register]: /cli/azure/feature#az-feature-register
[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature
-[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-feature-show]: /cli/azure/feature#az-feature-show
[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider [arm-vms]: https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[azure-monitor]: ../azure-monitor/containers/containers.md [azure-logs]: ../azure-monitor/logs/log-analytics-overview.md [helm]: quickstart-helm.md
-[aks-best-practices]: best-practices.md
+[aks-best-practices]: best-practices.md
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
This article shows you how to deploy the Kubernetes Event-driven Autoscaling (KE
[!INCLUDE [Current version callout](./includes/ked)] - ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli). - Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
-### Register the `AKS-KedaPreview` feature flag
+## Install the aks-preview Azure CLI extension
++
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
-To use the KEDA, you must enable the `AKS-KedaPreview` feature flag on your subscription.
+Run the following command to update to the latest version of the extension released:
```azurecli
-az feature register --name AKS-KedaPreview --namespace Microsoft.ContainerService
+az extension update --name aks-preview
+```
+
+## Register the 'AKS-KedaPreview' feature flag
+
+Register the `AKS-KedaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
```
-You can check on the registration status by using the `az feature list` command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-KedaPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
[az-group-delete]: /cli/azure/group#az-group-delete [keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context [aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
<!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
This article shows you how to install the Kubernetes Event-driven Autoscaling (K
[!INCLUDE [Current version callout](./includes/ked)] - ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli). - Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
-### Install the extension `aks-preview`
-
-Install the `aks-preview` extension in the AKS cluster to make sure you have the latest version of AKS extension before installing KEDA add-on.
+## Install the aks-preview Azure CLI extension
++
+To install the aks-preview extension, run the following command:
```azurecli
-az extension add --upgrade --name aks-preview
+az extension add --name aks-preview
```
-### Register the `AKS-KedaPreview` feature flag
-
-To use the KEDA, you must enable the `AKS-KedaPreview` feature flag on your subscription.
+Run the following command to update to the latest version of the extension released:
```azurecli
-az feature register --name AKS-KedaPreview --namespace Microsoft.ContainerService
+az extension update --name aks-preview
+```
+
+## Register the 'AKS-KedaPreview' feature flag
+
+Register the `AKS-KedaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
```
-You can check on the registration status by using the `az feature list` command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-KedaPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
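With registration complete, a sketch of turning on the add-on for an existing cluster follows. The resource names are placeholders, and `--enable-keda` is the flag assumed to be exposed by the aks-preview extension:

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-keda
```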
This article showed you how to install the KEDA add-on on an AKS cluster using A
You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+<!-- LINKS - internal -->
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
[az-aks-create]: /cli/azure/aks#az-aks-create [az aks install-cli]: /cli/azure/aks#az-aks-install-cli [az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
-> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. ## Capabilities and features
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
This article will discuss how to download the OSM client library to be used to o
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
-> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an A
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
-> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. ## Prerequisites
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure
> [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
-> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM. [Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources.
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
This article focused on network connectivity and security. For more information
[contour]: https://github.com/heptio/contour [haproxy]: https://www.haproxy.org [traefik]: https://github.com/containous/traefik
-[barracuda-waf]: https://www.barracuda.com/products/webapplicationfirewall/models/5
+[barracuda-waf]: https://www.barracuda.com/products/webapplicationfirewall/models/
<!-- INTERNAL LINKS --> [aks-concepts-network]: concepts-network.md
This article focused on network connectivity and security. For more information
[advanced-networking]: configure-azure-cni.md [aks-configure-kubenet-networking]: configure-kubenet.md [concepts-node-selectors]: concepts-clusters-workloads.md#node-selectors
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
aks Out Of Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/out-of-tree.md
We recently rolled out the Container Storage Interface (CSI) drivers to be the defau
The Cloud Controller Manager is the default controller starting with Kubernetes 1.22, and it's supported by AKS. If you're running a version earlier than 1.22, follow the instructions below.
-## Prerequisites
-You must have the following resource installed:
+## Prerequisites
+You must have the following resources installed:
* The Azure CLI * Kubernetes version 1.20.x or above
-* The `aks-preview` extension version 0.5.5 or later
-### Register the `EnableCloudControllerManager` feature flag
+## Install the aks-preview Azure CLI extension
-To use the Cloud Controller Manager feature, you must enable the `EnableCloudControllerManager` feature flag on your subscription.
+
+To install the aks-preview extension, run the following command:
```azurecli
-az feature register --name EnableCloudControllerManager --namespace Microsoft.ContainerService
+az extension add --name aks-preview
```
-You can check on the registration status by using the [az feature list][az-feature-list] command:
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableCloudControllerManager')].{Name:name,State:properties.state}"
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+## Register the 'EnableCloudControllerManager' feature flag
+
+Register the `EnableCloudControllerManager` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
+az feature register --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
```
-### Install the aks-preview CLI extension
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
+az feature show --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager"
+```
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
``` ## Create a new AKS cluster with Cloud Controller Manager with version <1.22
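A rough sketch of such a create call follows, mirroring the custom-header pattern used by the upgrade command shown below; the cluster and resource group names are placeholders:

```azurecli
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --aks-custom-headers EnableCloudControllerManager=True
```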
az aks upgrade -n aks -g myResourceGroup -k <version> --aks-custom-headers Enabl
<!-- LINKS - internal --> [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-show]: /cli/azure/feature#az-feature-show
[csi-docs]: csi-storage-drivers.md <!-- LINKS - External -->
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Microsoft provides technical support for the following examples:
* Connectivity to other Azure services and applications * Ingress controllers and ingress or load balancer configurations * Network performance and latency
- * [Network policies](use-network-policies.md#differences-between-azure-npm-and-calico-network-policy-and-their-capabilities)
+ * [Network policies](use-network-policies.md#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities)
> [!NOTE] > Any cluster actions taken by Microsoft/AKS are made with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`. This role enables AKS to troubleshoot and diagnose cluster issues, but can't modify permissions nor create roles or role bindings, or other high privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access.
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application
+ Title: Kubernetes on Azure tutorial - Deploy an application
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using a custom image stored in Azure Container Registry. Previously updated : 05/24/2021- Last updated : 01/04/2023 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
# Tutorial: Run applications in Azure Kubernetes Service (AKS)
-Kubernetes provides a distributed platform for containerized applications. You build and deploy your own applications and services into a Kubernetes cluster, and let the cluster manage the availability and connectivity. In this tutorial, part four of seven, a sample application is deployed into a Kubernetes cluster. You learn how to:
+Kubernetes provides a distributed platform for containerized applications. You build and deploy your own applications and services into a Kubernetes cluster and let the cluster manage the availability and connectivity. In this tutorial, part four of seven, you deploy a sample application into a Kubernetes cluster. You learn how to:
> [!div class="checklist"]
-> * Update a Kubernetes manifest file
-> * Run an application in Kubernetes
-> * Test the application
+>
+> * Update a Kubernetes manifest file.
+> * Run an application in Kubernetes.
+> * Test the application.
-In later tutorials, this application is scaled out and updated.
+In later tutorials, you'll scale out and update your application.
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+This quickstart assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
> [!TIP]
-> AKS clusters can use GitOps for configuration management. This enables declarations of your cluster's state, which are pushed to source control, to be applied to the cluster automatically. To learn how to use GitOps to deploy an application with an AKS cluster, see the tutorial [Use GitOps with Flux v2][gitops-flux-tutorial] and follow the [prerequisites for Azure Kubernetes Service clusters][gitops-flux-tutorial-aks].
+> AKS clusters can use GitOps for configuration management. GitOps enables declarations of your cluster's state, which are pushed to source control, to be applied to the cluster automatically. To learn how to use GitOps to deploy an application with an AKS cluster, see the [prerequisites for Azure Kubernetes Service clusters][gitops-flux-tutorial-aks] in the [GitOps with Flux v2][gitops-flux-tutorial] tutorial.
## Before you begin
-In previous tutorials, an application was packaged into a container image, this image was uploaded to Azure Container Registry, and a Kubernetes cluster was created.
+In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster.
-To complete this tutorial, you need the pre-created `azure-vote-all-in-one-redis.yaml` Kubernetes manifest file. This file was downloaded with the application source code in a previous tutorial. Verify that you've cloned the repo, and that you have changed directories into the cloned repo. If you haven't done these steps, and would like to follow along, start with [Tutorial 1 ΓÇô Create container images][aks-tutorial-prepare-app].
+To complete this tutorial, you need the pre-created `azure-vote-all-in-one-redis.yaml` Kubernetes manifest file. This file was included with the application source code you downloaded in a previous tutorial. Verify that you've cloned the repo and that you've changed directories into the cloned repo. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
In these tutorials, an Azure Container Registry (ACR) instance stores the contai
### [Azure CLI](#tab/azure-cli)
-Get the ACR login server name using the [az acr list][az-acr-list] command as follows:
+Get the ACR login server name using the [az acr list][az-acr-list] command.
```azurecli az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginSe
### [Azure PowerShell](#tab/azure-powershell)
-Get the ACR login server name using the [Get-AzContainerRegistry][get-azcontainerregistry] cmdlet as follows:
+Get the ACR login server name using the [Get-AzContainerRegistry][get-azcontainerregistry] cmdlet.
```azurepowershell (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
Get the ACR login server name using the [Get-AzContainerRegistry][get-azcontaine
-The sample manifest file from the git repo cloned in the first tutorial uses the images from Microsoft Container Registry (*mcr.microsoft.com*). Make sure that you're in the cloned *azure-voting-app-redis* directory, then open the manifest file with a text editor, such as `vi`:
+The sample manifest file from the git repo you cloned in the first tutorial uses the images from Microsoft Container Registry (*mcr.microsoft.com*). Make sure you're in the cloned *azure-voting-app-redis* directory, and then open the manifest file with a text editor, such as `vi`:
```console vi azure-vote-all-in-one-redis.yaml ```
-Replace *mcr.microsoft.com* with your ACR login server name. The image name is found on line 60 of the manifest file. The following example shows the default image name:
+Replace *mcr.microsoft.com* with your ACR login server name. You can find the image name on line 60 of the manifest file. The following example shows the default image name:
```yaml containers:
containers:
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 ```
-Provide your own ACR login server name so that your manifest file looks like the following example:
+Provide your own ACR login server name so your manifest file looks similar to the following example:
```yaml containers:
Save and close the file. In `vi`, use `:wq`.
## Deploy the application
-To deploy your application, use the [kubectl apply][kubectl-apply] command. This command parses the manifest file and creates the defined Kubernetes objects. Specify the sample manifest file, as shown in the following example:
+To deploy your application, use the [`kubectl apply`][kubectl-apply] command, specifying the sample manifest file. This command parses the manifest file and creates the defined Kubernetes objects.
```console kubectl apply -f azure-vote-all-in-one-redis.yaml
service "azure-vote-front" created
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+To monitor progress, use the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
```console kubectl get service azure-vote-front --watch ```
-Initially the *EXTERNAL-IP* for the *azure-vote-front* service is shown as *pending*:
+Initially the *EXTERNAL-IP* for the *azure-vote-front* service shows as *pending*.
```output azure-vote-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s
When the *EXTERNAL-IP* address changes from *pending* to an actual public IP add
azure-vote-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s ```
-To see the application in action, open a web browser to the external IP address of your service:
+To see the application in action, open a web browser to the external IP address of your service.
:::image type="content" source="./media/container-service-kubernetes-tutorials/azure-vote.png" alt-text="Screenshot showing the container image Azure Voting App running in an AKS cluster opened in a local web browser" lightbox="./media/container-service-kubernetes-tutorials/azure-vote.png":::
-If the application didn't load, it might be due to an authorization problem with your image registry. To view the status of your containers, use the `kubectl get pods` command. If the container images can't be pulled, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md).
+If the application doesn't load, it might be an authorization problem with your image registry. To view the status of your containers, use the `kubectl get pods` command. If you can't pull the container images, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md).
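A minimal troubleshooting sketch (the pod name shown is a placeholder; substitute one from your own `kubectl get pods` output):

```console
# List the pods and check the READY and STATUS columns
kubectl get pods

# Inspect a specific pod's events, for example to spot an ImagePullBackOff error
kubectl describe pod azure-vote-front-<pod-suffix>
```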
## Next steps
-In this tutorial, a sample Azure vote application was deployed to a Kubernetes cluster in AKS. You learned how to:
+In this tutorial, you deployed a sample Azure vote application to a Kubernetes cluster in AKS. You learned how to:
> [!div class="checklist"]
-> * Update a Kubernetes manifest files
-> * Run an application in Kubernetes
-> * Test the application
+>
+> * Update a Kubernetes manifest file.
+> * Run an application in Kubernetes.
+> * Test the application.
-Advance to the next tutorial to learn how to scale a Kubernetes application and the underlying Kubernetes infrastructure.
+In the next tutorial, you'll learn how to scale a Kubernetes application and the underlying Kubernetes infrastructure.
> [!div class="nextstepaction"] > [Scale Kubernetes application and infrastructure][aks-tutorial-scale]
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv
> Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >
-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, the AKS Managed add-on is still supported at this time.
-
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022. The AKS Managed add-on is still supported.
## Before you begin
-You must have the following resource installed:
-
-* The Azure CLI, version 2.20.0 or later
-* The `aks-preview` extension version 0.5.5 or later
+You must have the Azure CLI version 2.20.0 or later installed.
-### Limitations
+## Limitations
* A maximum of 200 pod-managed identities are allowed for a cluster.
* A maximum of 200 pod-managed identity exceptions are allowed for a cluster.
* Pod-managed identities are available on Linux node pools only.
* This feature is only supported for Virtual Machine Scale Sets backed clusters.
-### Register the `EnablePodIdentityPreview`
+## Install the aks-preview Azure CLI extension
+
-Register the `EnablePodIdentityPreview` feature:
+To install the aks-preview extension, run the following command:
```azurecli
-az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
```
-### Install the `aks-preview` Azure CLI
+## Register the 'EnablePodIdentityPreview' feature flag
-You also need the *aks-preview* Azure CLI extension version 0.5.5 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+Register the `EnablePodIdentityPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
+az feature register --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"
+```
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
```
-### Operation mode options
+## Operation mode options
Azure AD pod-managed identity supports two modes of operation:

* **Standard Mode**: In this mode, the following two components are deployed to the AKS cluster:
- * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): An MIC is a Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying virtual machine scale set used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the virtual machine scale set of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
+ * [Managed Identity Controller (MIC)](https://azure.github.io/aad-pod-identity/docs/concepts/mic/): An MIC is a Kubernetes controller that watches for changes to pods, [AzureIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentity/) and [AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/concepts/azureidentitybinding/) through the Kubernetes API Server. When it detects a relevant change, the MIC adds or deletes [AzureAssignedIdentity](https://azure.github.io/aad-pod-identity/docs/concepts/azureassignedidentity/) as needed. Specifically, when a pod is scheduled, the MIC assigns the managed identity on Azure to the underlying Virtual Machine Scale Set used by the node pool during the creation phase. When all pods using the identity are deleted, it removes the identity from the Virtual Machine Scale Set of the node pool, unless the same managed identity is used by other pods. The MIC takes similar actions when AzureIdentity or AzureIdentityBinding are created or deleted.
  * [Node Managed Identity (NMI)](https://azure.github.io/aad-pod-identity/docs/concepts/nmi/): NMI is a pod that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the [Azure Instance Metadata Service](../virtual-machines/linux/instance-metadata-service.md?tabs=linux) on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure AD tenant on behalf of the application.
* **Managed Mode**: This mode offers only NMI. When installed via the AKS cluster add-on, Azure manages creation of the Kubernetes primitives (AzureIdentity and AzureIdentityBinding) and identity assignment in response to CLI commands by the user. Otherwise, if installed via Helm chart, the identity needs to be manually assigned and managed by the user. For more information, see [Pod identity in managed mode](https://azure.github.io/aad-pod-identity/docs/configure/pod_identity_in_managed_mode/). An illustrative AzureIdentity and AzureIdentityBinding manifest is sketched after this list.
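For orientation, the AzureIdentity and AzureIdentityBinding primitives referenced above look roughly like the following sketch. The names, namespace, and selector are placeholders, and the resource ID and client ID come from the managed identity you use; in managed mode via the add-on, AKS creates equivalent objects for you.

```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: my-azure-identity            # placeholder name
  namespace: my-app
spec:
  type: 0                            # 0 = user-assigned managed identity
  resourceID: <identity-resource-id> # placeholder, the managed identity's Azure resource ID
  clientID: <identity-client-id>     # placeholder, the managed identity's client ID
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: my-azure-identity-binding    # placeholder name
  namespace: my-app
spec:
  azureIdentity: my-azure-identity
  selector: my-app                   # pods labeled aadpodidbinding: my-app use this identity
```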
export IDENTITY_RESOURCE_ID="$(az identity show -g ${IDENTITY_RESOURCE_GROUP} -n
The managed identity that will be assigned to the pod needs to be granted permissions that align with the actions it will be taking.
-To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the virtual machine scale set of your AKS cluster.
+To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the Virtual Machine Scale Set of your AKS cluster.
```azurecli-interactive NODE_GROUP=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
For more information on managed identities, see [Managed identities for Azure re
[az-group-create]: /cli/azure/group#az_group_create [az-identity-create]: /cli/azure/identity#az_identity_create [az-managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
<!-- LINKS - external --> [RFC 1123]: https://tools.ietf.org/html/rfc1123
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS) Previously updated : 06/24/2022 Last updated : 01/05/2023
These Network Policy rules are defined as YAML manifests. Network policies can b
Azure provides two ways to implement Network Policy. You choose a Network Policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created:
-* Azure's own implementation, called *Azure Network Policy Manager (NPM)*.
+* Azure's own implementation, called *Azure Network Policy Manager*.
* *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera].
-Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
+Azure Network Policy Manager for Linux uses Linux *IPTables* and Azure Network Policy Manager for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
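For reference, the policies being enforced are ordinary Kubernetes `NetworkPolicy` objects. The following minimal sketch, in which the namespace, names, and labels are illustrative, allows ingress to backend pods only from frontend pods in the same namespace:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy        # illustrative name
  namespace: development      # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: webapp
              role: frontend
```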
-## Differences between Azure NPM and Calico Network Policy and their capabilities
+## Differences between Azure Network Policy Manager and Calico Network Policy and their capabilities
-| Capability | Azure NPM | Calico Network Policy |
+| Capability | Azure Network Policy Manager | Calico Network Policy |
||-|--|
| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 |
| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) |
Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host N
## Limitations:
-Azure Network Policy Manager(NPM) doesn't support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux.
-* In Windows, Azure NPM doesn't support the following:
+Azure Network Policy Manager doesn't support IPv6. Otherwise, Azure Network Policy Manager fully supports the network policy spec in Linux.
+* In Windows, Azure Network Policy Manager doesn't support the following:
  * named ports
  * SCTP protocol
  * negative match label or namespace selectors (e.g. all labels except "debug=true")
  * "except" CIDR blocks (a CIDR with exceptions)

>[!NOTE]
-> * Azure NPM pod logs will record an error if an unsupported policy is created.
+> * Azure Network Policy Manager pod logs will record an error if an unsupported policy is created.
## Scale:
-With the current limits set on Azure NPM for Linux, it can scale up to 500 Nodes and 40k Pods. You may see OOM kills beyond this scale. Please reach out to us on [aks-acn-github] if you'd like to increase your memory limit.
+With the current limits set on Azure Network Policy Manager for Linux, it can scale up to 500 Nodes and 40k Pods. You may see OOM kills beyond this scale. Please reach out to us on [aks-acn-github] if you'd like to increase your memory limit.
## Create an AKS cluster and enable Network Policy
To see network policies in action, let's create an AKS cluster that supports net
> > The network policy feature can only be enabled when the cluster is created. You can't enable network policy on an existing AKS cluster.
-To use Azure NPM, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
+To use Azure Network Policy Manager, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
The following example script:
-* Creates an AKS cluster with system-assigned identity and enables Network Policy.
- * The _Azure NPM_ option is used. To use Calico as the Network Policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
+* Creates an AKS cluster with system-assigned identity and enables Network Policy using Azure Network Policy Manager. To use Calico as the Network Policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-### Create an AKS cluster with Azure NPM enabled - Linux only
+### Create an AKS cluster with Azure Network Policy Manager enabled - Linux only
-In this section, we'll work on creating a cluster with Linux node pools and Azure NPM enabled.
+In this section, we'll work on creating a cluster with Linux node pools and Azure Network Policy Manager enabled.
To begin, you should replace the values for *$RESOURCE_GROUP_NAME* and *$CLUSTER_NAME* variables.
$CLUSTER_NAME=myAKSCluster
$LOCATION=canadaeast ```
-Create the AKS cluster and specify *azure* for the `network-plugin` and `network-policy`.
+Create the AKS cluster and specify `azure` for the `network-plugin` and `network-policy`.
Use the following command to create a cluster: ```azurecli
az aks create \
--network-policy azure ```
-### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview)
+### Create an AKS cluster with Azure Network Policy Manager enabled - Windows Server 2022 (Preview)
-In this section, we'll work on creating a cluster with Windows node pools and Azure NPM enabled.
+In this section, we'll work on creating a cluster with Windows node pools and Azure Network Policy Manager enabled.
-Please execute the following commands prior to creating a cluster:
+> [!NOTE]
+> Azure Network Policy Manager with Windows nodes is available on Windows Server 2022 only.
+>
+
+#### Install the aks-preview Azure CLI extension
++
+To install the aks-preview extension, run the following command:
```azurecli
- az extension add --name aks-preview
- az extension update --name aks-preview
- az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
- az provider register -n Microsoft.ContainerService
+az extension add --name aks-preview
```
-> [!NOTE]
-> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only
->
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+#### Register the 'WindowsNetworkPolicyPreview' feature flag
+
+Register the `WindowsNetworkPolicyPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "WindowsNetworkPolicyPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "WindowsNetworkPolicyPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+#### Create the AKS cluster
Now, you should replace the values for *$RESOURCE_GROUP_NAME*, *$CLUSTER_NAME* and *$WINDOWS_USERNAME* variables.
az aks nodepool add \
--node-count 1 ``` - ### Create an AKS cluster for Calico network policies
-Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the Network Policy. Using *calico* as the Network Policy enables Calico networking on both Linux and Windows node pools.
+Create the AKS cluster and specify `azure` for the network plugin, and `calico` for the Network Policy. Using `calico` as the Network Policy enables Calico networking on both Linux and Windows node pools.
If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password]. A minimal example command is sketched below.
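A minimal sketch of the command, reusing the variable names from the earlier examples; adjust node counts, VM sizes, and other parameters to match your environment, and include `--windows-admin-password` with a value that meets the password requirements:

```azurecli
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --location $LOCATION \
    --node-count 1 \
    --windows-admin-username $WINDOWS_USERNAME \
    --network-plugin azure \
    --network-policy calico \
    --generate-ssh-keys
```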
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[use-advanced-networking]: configure-azure-cni.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [concepts-network]: concepts-network.md
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
After pod security policy (preview) is deprecated, you must have already migrated to the Pod Security Admission controller or disabled the feature on any existing clusters that use it to perform future cluster upgrades and stay within Azure support.

## Before you begin

This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-### Install aks-preview CLI extension
+## Install the aks-preview Azure CLI extension
-To use pod security policies, you need the *aks-preview* CLI extension version 0.4.1 or higher. Install the *aks-preview* Azure CLI extension using the [az extension add][az-extension-add] command, then check for any available updates using the [az extension update][az-extension-update] command:
-```azurecli-interactive
-# Install the aks-preview extension
+To install the aks-preview extension, run the following command:
+
+```azurecli
az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
-# Update the extension to make sure you have the latest version installed
+```azurecli
az extension update --name aks-preview ```
-### Register pod security policy feature provider
+## Register the 'PodSecurityPolicyPreview' feature flag
-To create or update an AKS cluster to use pod security policies, first enable a feature flag on your subscription. To register the *PodSecurityPolicyPreview* feature flag, use the [az feature register][az-feature-register] command as shown in the following example:
+Register the `PodSecurityPolicyPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
-az feature register --name PodSecurityPolicyPreview --namespace Microsoft.ContainerService
+az feature register --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
```
-It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/PodSecurityPolicyPreview')].{Name:name,State:properties.state}"
+az feature show --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
Below is a summary of behavior changes between pod security policy and Azure Pol
| Default policies | When pod security policy is enabled in AKS, default Privileged and Unrestricted policies are applied. | No default policies are applied by enabling the Azure Policy Add-on. You must explicitly enable policies in Azure Policy. |
| Who can create and assign policies | Cluster admin creates a pod security policy resource. | Users must have a minimum role of 'owner' or 'Resource Policy Contributor' permissions on the AKS cluster resource group. - Through the API, users can assign policies at the AKS cluster resource scope; the user needs at least 'owner' or 'Resource Policy Contributor' permissions on the AKS cluster resource. - In the Azure portal, policies can be assigned at the management group, subscription, or resource group level. |
| Authorizing policies | Users and service accounts require explicit permissions to use pod security policies. | No additional assignment is required to authorize policies. Once policies are assigned in Azure, all cluster users can use these policies. |
-| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) sees the same policies. There is no special casing based on users. Policy application can be excluded at the namespace level.
+| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) see the same policies. There is no special casing based on users. Policy application can be excluded at the namespace level.
| Policy scope | Pod security policies are not namespaced. | Constraint templates used by Azure Policy are not namespaced. |
| Deny/Audit/Mutation action | Pod security policies support only deny actions. Mutation can be done with default values on create requests. Validation can be done during update requests. | Azure Policy supports both audit and deny actions. Mutation is not supported yet, but planned. |
| Pod security policy compliance | There is no visibility into the compliance of pods that existed before enabling pod security policy. Non-compliant pods created after enabling pod security policies are denied. | Non-compliant pods that existed before applying Azure policies show up in policy violations. Non-compliant pods created after enabling Azure policies are denied if policies are set with a deny effect. |
For more information about limiting pod network traffic, see [Secure traffic bet
[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [install-azure-cli]: /cli/azure/install-azure-cli [network-policies]: use-network-policies.md
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-update]: /cli/azure/aks#az_aks_update [az-extension-add]: /cli/azure/extension#az_extension_add
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
This feature can only be set at cluster creation or node pool creation time.
> Azure ultra disks require nodepools deployed in availability zones and regions that support these disks as well as only specific VM series. See the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations). ### Limitations-- See the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations)
+- Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.
- The supported size range for an ultra disk is between 100 and 1500.

## Create a new cluster that can use Ultra disks
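As an illustration only, creating a cluster that can use ultra disks might look like the following sketch. The `--enable-ultra-ssd` flag, the zones, and the VM size shown are assumptions; verify them against the regions and VM series that support ultra disks before using anything similar:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --location westus2 \
    --node-vm-size Standard_D2s_v3 \
    --zones 1 2 \
    --node-count 2 \
    --enable-ultra-ssd \
    --generate-ssh-keys
```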
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Title: Create WebAssembly System Interface(WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly(WASM) workload (preview)
-description: Learn how to create a WebAssembly System Interface(WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly(WASM) workload on Kubernetes.
+ Title: Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview)
+description: Learn how to create a WebAssembly System Interface (WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload on Kubernetes.
Last updated 10/19/2022
Last updated 10/19/2022
## Before you begin
-WASM/WASI node pools are currently in preview.
+You must have the latest version of Azure CLI installed.
+## Install the aks-preview Azure CLI extension
-You must also have the latest version of the Azure CLI and `aks-preview` extension installed.
-### Register the `WasmNodePoolPreview` preview feature
+To install the aks-preview extension, run the following command:
-To use the feature, you must also enable the `WasmNodePoolPreview` feature flag on your subscription.
+```azurecli
+az extension add --name aks-preview
+```
-Register the `WasmNodePoolPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+Run the following command to update to the latest version of the extension released:
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
+```azurecli
+az extension update --name aks-preview
```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+## Register the 'WasmNodePoolPreview' feature flag
+
+Register the `WasmNodePoolPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WasmNodePoolPreview')].{Name:name,State:properties.state}"
+az feature register --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
```
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
+az feature show --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
```
-### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.5.34 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
+az provider register --namespace Microsoft.ContainerService
```
-### Limitations
+## Limitations
* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
* You can run containers and wasm modules on the same node, but you can't run containers and wasm modules on the same pod.
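As a sketch of what adding a WASM/WASI node pool to an existing cluster looks like, the following command uses placeholder resource names and assumes the `--workload-runtime WasmWasi` parameter from the aks-preview extension:

```azurecli
az aks nodepool add \
    --resource-group myresourcegroup \
    --cluster-name myakscluster \
    --name mywasipool \
    --node-count 1 \
    --workload-runtime WasmWasi
```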
az aks nodepool delete --name mywasipool -g myresourcegroup --cluster-name myaks
[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
[dockerhub-callout]: ../container-registry/buffer-gate-public-content.md [install-azure-cli]: /cli/azure/install-azure-cli [use-multiple-node-pools]: use-multiple-node-pools.md
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Vertical Pod Autoscaler provides the following benefits:
* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* The `aks-preview` extension version 0.5.102 or later.
- * `kubectl` should be connected to the cluster you want to install VPA.

## API Object

The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported in this preview release is 0.11 and can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
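For orientation, the upstream API resource is a `VerticalPodAutoscaler` object. The following minimal sketch targets a hypothetical deployment and lets VPA apply its recommendations automatically:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa          # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical deployment to scale vertically
  updatePolicy:
    updateMode: "Auto"      # VPA evicts and recreates pods with updated resource requests
```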
-## Register the VPA provider feature
+## Install the aks-preview Azure CLI extension
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-To install the aks-vpapreview preview feature, run the following command:
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
```azurecli
-az feature register --namespace Microsoft.ContainerService --name AKS-VPAPreview
+az extension update --name aks-preview
+```
+
+## Register the 'AKS-VPAPreview' feature flag
+
+Register the `AKS-VPAPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-VPAPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
``` ## Deploy, upgrade, or disable VPA on a cluster
This article showed you how to automatically scale resource utilization, such as
[az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade [horizontal-pod-autoscaling]: concepts-scale.md#horizontal-pod-autoscaler
-[scale-applications-in-aks]: tutorial-kubernetes-scale.md
+[scale-applications-in-aks]: tutorial-kubernetes-scale.md
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 10/24/2022 Last updated : 01/06/2023 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
This article assumes you have a basic understanding of Kubernetes concepts. For
- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- You've installed the latest version of the `aks-preview` extension, version 0.5.102 or later.- - The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. -- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[az account][az-account] command.
+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command.
## Install the aks-preview Azure CLI extension
Register the `EnableWorkloadIdentityPreview` feature flag by using the [az featu
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview" ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
```azurecli-interactive az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview" ```
-When the status shows *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
```azurecli-interactive az provider register --namespace Microsoft.ContainerService
az identity federated-credential create --name myfederatedIdentity --identity-na
> [!NOTE] > It takes a few seconds for the federated identity credential to be propagated after being initially added. If a token request is made immediately after adding the federated identity credential, it might lead to failure for a couple of minutes as the cache is populated in the directory with old data. To avoid this issue, you can add a slight delay after adding the federated identity credential.
+## Disable workload identity
+
+To disable the Azure AD workload identity on the AKS cluster where it's been enabled and configured, you can run the following command:
+
+```azurecli
+az aks update --resource-group myResourceGroup --name myAKSCluster --enable-workload-identity false
+```
+ ## Next steps In this article, you deployed a Kubernetes cluster and configured it to use a workload identity in preparation for application workloads to authenticate with that credential. Now you're ready to deploy your application and configure it to use the workload identity with the latest version of the [Azure Identity][azure-identity-libraries] client library. If you can't rewrite your application to use the latest client library version, you can [set up your application pod][workload-identity-migration] to authenticate using managed identity with workload identity as a short-term migration solution.
In this article, you deployed a Kubernetes cluster and configured it to use a wo
[kubernetes-concepts]: concepts-clusters-workloads.md [az-feature-register]: /cli/azure/feature#az_feature_register [az-provider-register]: /cli/azure/provider#az-provider-register
-[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-show]: /cli/azure/feature#az-feature-show
[workload-identity-overview]: workload-identity-overview.md [create-key-vault-azure-cli]: ../key-vault/general/quick-create-cli.md [az-keyvault-list]: /cli/azure/keyvault#az-keyvault-list
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service
description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 10/20/2022 Last updated : 01/06/2023
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think
|`azure.workload.identity/tenant-id` |Represents the Azure tenant ID where the<br> Azure AD application is registered. |AZURE_TENANT_ID environment variable extracted<br> from `azure-wi-webhook-config` ConfigMap.|
|`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the<br> projected service account token. It's an optional field that you configure to prevent downtime<br> caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. |3600<br> Supported range is 3600-86400.|
+### Pod labels
+
+|Label |Description |Recommended value |Required |
+|||||
+|`azure.workload.identity/use` | Represents that the pod is to be used for workload identity. |true |Yes |
+ ### Pod annotations
+> [!NOTE]
+> For applications that use workload identity, you must now add the label `azure.workload.identity/use: "true"` to the pod labels so that AKS can move workload identity to a "fail closed" scenario before GA, which provides consistent and reliable behavior for pods that need to use workload identity. A combined example appears after the annotations table below.
+ |Annotation |Description |Default | |--||--|
+|`azure.workload.identity/use` |Represents that the service account<br> is to be used for workload identity. | |
|`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. |
|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. |
|`azure.workload.identity/inject-proxy-sidecar` |Injects a proxy init container and proxy sidecar into the pod. The proxy sidecar is used to intercept token requests to IMDS and acquire an Azure AD token on behalf of the user with federated identity credential. |true |
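Putting the label and annotations together, a minimal sketch looks like the following; the service account name, namespace, client ID, and image are placeholders, and the `azure.workload.identity/client-id` annotation is assumed to carry the client ID of the user-assigned managed identity or app registration you federate with:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa                                # placeholder name
  namespace: default
  annotations:
    azure.workload.identity/client-id: <identity-client-id> # assumed annotation, placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload
  namespace: default
  labels:
    azure.workload.identity/use: "true"                     # required label called out in the note above
spec:
  serviceAccountName: workload-identity-sa
  containers:
    - name: app
      image: <your-image>                                   # placeholder image
```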
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
To learn more about how Azure Analysis Services works with the gateway, see [Con
* During setup, when registering your gateway with Azure, the default region for your subscription is selected. You can choose a different subscription and region. If you have servers in more than one region, you must install a gateway for each region.
* The gateway cannot be installed on a domain controller.
+* The gateway cannot be installed and configured by using automation.
* Only one gateway can be installed on a single computer.
* Install the gateway on a computer that remains on and does not go to sleep.
* Do not install the gateway on a computer with a wireless only connection to your network. Performance can be diminished.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
- Title: Azure API Management access restriction policies | Microsoft Docs
-description: Reference for the access restriction policies available for use in Azure API Management. Provides policy usage, settings, and examples.
----- Previously updated : 06/03/2022---
-# API Management access restriction policies
-
-This article provides a reference for API Management access restriction policies.
--
-## <a name="AccessRestrictionPolicies"></a> Access restriction policies
--- [Check HTTP header](#CheckHTTPHeader) - Enforces existence and/or value of an HTTP header.-- [Get authorization context](#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.-- [Limit call rate by subscription](#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis.-- [Limit call rate by key](#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis.-- [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.-- [Set usage quota by subscription](#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.-- [Set usage quota by key](#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate Azure Active Directory token](#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP header, query parameter, or token value.-- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value.-- [Validate client certificate](#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.-
-> [!TIP]
-> You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with AAD authentication by applying the `validate-azure-ad-token` policy on the API level or you can apply it on the API operation level and use `claims` for more granular control.
-
-## <a name="CheckHTTPHeader"></a> Check HTTP header
-
-Use the `check-header` policy to enforce that a request has a specified HTTP header. You can optionally check to see if the header has a specific value or check for a range of allowed values. If the check fails, the policy terminates request processing and returns the HTTP status code and error message specified by the policy.
--
-### Policy statement
-
-```xml
-<check-header name="header name" failed-check-httpcode="code" failed-check-error-message="message" ignore-case="true">
- <value>Value1</value>
- <value>Value2</value>
-</check-header>
-```
-
-### Example
-
-```xml
-<check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false">
- <value>f6dc69a089844cf6b2019bae6d36fac8</value>
-</check-header>
-```
-
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| check-header | Root element. | Yes |
-| value | Allowed HTTP header value. When multiple value elements are specified, the check is considered a success if any one of the values is a match. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A |
-| failed-check-httpcode | HTTP Status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
-| header-name | The name of the HTTP header to check. | Yes | N/A |
-| ignore-case | Can be set to True or False. If set to True case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
--- **Policy sections:** inbound, outbound--- **Policy scopes:** all scopes-
-## <a name="GetAuthorizationContext"></a> Get authorization context
-
-Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
-
-The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
-
-If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
---
-### Policy statement
-
-```xml
-<get-authorization-context
- provider-id="authorization provider id"
- authorization-id="authorization id"
- context-variable-name="variable name"
- identity-type="managed | jwt"
- identity="JWT bearer token"
- ignore-error="true | false" />
-```
-
-### Examples
-
-#### Example 1: Get token back
-
-```xml
-<!-- Add to inbound policy. -->
-<get-authorization-context
- provider-id="github-01"
- authorization-id="auth-01"
- context-variable-name="auth-context"
- identity-type="managed"
- identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
- ignore-error="false" />
-<!-- Return the token -->
-<return-response>
- <set-status code="200" />
- <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
-</return-response>
-```
-
-#### Example 2: Get token back with dynamically set attributes
-
-```xml
-<!-- Add to inbound policy. -->
-<get-authorization-context
- provider-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationProviderId"))"
- authorization-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationId"))" context-variable-name="auth-context"
- ignore-error="false"
- identity-type="managed" />
-<!-- Return the token -->
-<return-response>
- <set-status code="200" />
- <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
-</return-response>
-```
-
-#### Example 3: Attach the token to the backend call
-
-```xml
-<!-- Add to inbound policy. -->
-<get-authorization-context
- provider-id="github-01"
- authorization-id="auth-01"
- context-variable-name="auth-context"
- identity-type="managed"
- ignore-error="false" />
-<!-- Attach the token to the backend call -->
-<set-header name="Authorization" exists-action="override">
- <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
-</set-header>
-```
-
-#### Example 4: Get token from incoming request and return token
-
-```xml
-<!-- Add to inbound policy. -->
-<get-authorization-context
- provider-id="github-01"
- authorization-id="auth-01"
- context-variable-name="auth-context"
- identity-type="jwt"
- identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
- ignore-error="false" />
-<!-- Return the token -->
-<return-response>
- <set-status code="200" />
- <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
-</return-response>
-```
-
-### Elements
-
-| Name | Description | Required |
-| -- | - | -- |
-| get-authorization-context | Root element. | Yes |
-
-### Attributes
-
-| Name | Description | Required | Default |
-|||||
-| provider-id | The authorization provider resource identifier. | Yes | |
-| authorization-id | The authorization resource identifier. | Yes | |
-| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | |
-| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed |
-| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | |
-| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false |
-
-### Authorization object
-
-The Authorization context variable receives an object of type `Authorization`.
-
-```c#
-class Authorization
-{
- public string AccessToken { get; }
- public IReadOnlyDictionary<string, object> Claims { get; }
-}
-```
-
-| Property Name | Description |
-| -- | -- |
-| AccessToken | Bearer access token to authorize a backend HTTP request. |
-| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
--- **Policy sections:** inbound--- **Policy scopes:** all scopes--
-## <a name="LimitCallRate"></a> Limit call rate by subscription
-
-The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
-
-To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
-
-> [!IMPORTANT]
-> * This policy can be used only once per policy document.
-> * [Policy expressions](api-management-policy-expressions.md) cannot be used in any of the policy attributes for this policy.
---
-### Policy statement
-
-```xml
-<rate-limit calls="number" renewal-period="seconds">
- <api name="API name" id="API id" calls="number" renewal-period="seconds">
- <operation name="operation name" id="operation id" calls="number" renewal-period="seconds"
- retry-after-header-name="custom header name, replaces default 'Retry-After'"
- retry-after-variable-name="policy expression variable name"
- remaining-calls-header-name="header name"
- remaining-calls-variable-name="policy expression variable name"
- total-calls-header-name="header name"/>
- </api>
-</rate-limit>
-```
-
-### Example
-
-In the following example, the per subscription rate limit is 20 calls per 90 seconds. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerSubscription`.
-
-```xml
-<policies>
- <inbound>
- <base />
- <rate-limit calls="20" renewal-period="90" remaining-calls-variable-name="remainingCallsPerSubscription"/>
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
-
-### Elements
-
-| Name | Description | Required |
-| - | -- | -- |
-| rate-limit | Root element. | Yes |
-| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
-| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used, and `name` will be ignored. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | -- | -- | - |
-| name | The name of the API for which to apply the rate limit. | Yes | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** product, api, operation
-## <a name="LimitCallRateByKey"></a> Limit call rate by key
-
-> [!IMPORTANT]
-> This feature is unavailable in the **Consumption** tier of API Management.
-
-The `rate-limit-by-key` policy prevents API usage spikes on a per key basis by limiting the call rate to a specified number per a specified time period. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the limit. When this call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
-
-To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
-
-For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
---
-### Policy statement
-
-```xml
-<rate-limit-by-key calls="number"
- renewal-period="seconds"
- increment-condition="condition"
- increment-count="number"
- counter-key="key value"
- retry-after-header-name="custom header name, replaces default 'Retry-After'"
- retry-after-variable-name="policy expression variable name"
- remaining-calls-header-name="header name"
- remaining-calls-variable-name="policy expression variable name"
- total-calls-header-name="header name"/>
-
-```
-
-### Example
-
-In the following example, the rate limit of 10 calls per 60 seconds is keyed by the caller IP address. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerIP`.
-
-```xml
-<policies>
- <inbound>
- <base />
- <rate-limit-by-key calls="10"
- renewal-period="60"
- increment-condition="@(context.Response.StatusCode == 200)"
- counter-key="@(context.Request.IpAddress)"
- remaining-calls-variable-name="remainingCallsPerIP"/>
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
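-
-As a variation, the following sketch keys the limit on the subscription ID and uses `increment-count` so that each request counts twice against the limit. It assumes every call carries a subscription key; the `x-remaining-calls` header name is an arbitrary choice for illustration.
-
-```xml
-<policies>
- <inbound>
- <base />
- <rate-limit-by-key calls="100"
- renewal-period="60"
- increment-count="2"
- counter-key="@(context.Subscription.Id)"
- remaining-calls-header-name="x-remaining-calls" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```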
-
-### Elements
-
-| Name | Description | Required |
-| -- | - | -- |
-| rate-limit-by-key | Root element. | Yes |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| - | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A |
-| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
-| increment-count | The number by which the counter is increased per request. | No | 1 |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests shouldn't exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-## <a name="RestrictCallerIPs"></a> Restrict caller IPs
-
-The `ip-filter` policy filters (allows/denies) calls from specific IP addresses and/or address ranges.
-
-> [!NOTE]
-> The policy filters the immediate caller's IP address. However, if API Management is hosted behind Application Gateway, the policy considers its IP address, not the originator of the API request. Presently, IP addresses in the `X-Forwarded-For` header are not considered.
--
-### Policy statement
-
-```xml
-<ip-filter action="allow | forbid">
- <address>address</address>
- <address-range from="address" to="address" />
-</ip-filter>
-```
-
-### Example
-
-In the following example, the policy only allows requests coming either from the single IP address or range of IP addresses specified
-
-```xml
-<ip-filter action="allow">
- <address>13.66.201.169</address>
- <address-range from="13.66.140.128" to="13.66.140.143" />
-</ip-filter>
-```
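-
-Conversely, a sketch with `action="forbid"` blocks calls from the same addresses while allowing all other callers:
-
-```xml
-<ip-filter action="forbid">
- <address>13.66.201.169</address>
- <address-range from="13.66.140.128" to="13.66.140.143" />
-</ip-filter>
-```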
-
-### Elements
-
-| Name | Description | Required |
-| -- | | -- |
-| ip-filter | Root element. | Yes |
-| address | Specifies a single IP address on which to filter. | At least one `address` or `address-range` element is required. |
-| address-range from="address" to="address" | Specifies a range of IP addresses on which to filter. | At least one `address` or `address-range` element is required. |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| address-range from="address" to="address" | A range of IP addresses to allow or deny access for. | Required when the `address-range` element is used. | N/A |
-| ip-filter action="allow &#124; forbid" | Specifies whether calls should be allowed or not for the specified IP addresses and ranges. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-> [!NOTE]
-> If you configure this policy at more than one scope, IP filtering is applied in the order of [policy evaluation](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) in your policy definition.
-
-## <a name="SetUsageQuota"></a> Set usage quota by subscription
-
-The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
-
-To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
-
-> [!IMPORTANT]
-> * This policy can be used only once per policy document.
-> * [Policy expressions](api-management-policy-expressions.md) cannot be used in any of the policy attributes for this policy.
---
-### Policy statement
-
-```xml
-<quota calls="number" bandwidth="kilobytes" renewal-period="seconds">
- <api name="API name" id="API id" calls="number">
- <operation name="operation name" id="operation id" calls="number" />
- </api>
-</quota>
-```
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- <quota calls="10000" bandwidth="40000" renewal-period="3600" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
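-
-The `api` and `operation` elements described below can scope part of the quota to a specific API. The following sketch is illustrative only; `echo-api` and `retrieve-resource` are placeholder names.
-
-```xml
-<policies>
- <inbound>
- <base />
- <quota calls="10000" bandwidth="40000" renewal-period="3600">
- <api name="echo-api" calls="5000">
- <operation name="retrieve-resource" calls="1000" />
- </api>
- </quota>
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```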
-
-### Elements
-
-| Name | Description | Required |
-| | -- | -- |
-| quota | Root element. | Yes |
-| api | Add one or more of these elements to impose call quota on APIs within the product. Product and API call quotas are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
-| operation | Add one or more of these elements to impose call quota on operations within an API. Product, API, and operation call quotas are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | | - | - |
-| name | The name of the API or operation for which the quota applies. | Yes | N/A |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** product
-## <a name="SetUsageQuotaByKey"></a> Set usage quota by key
-
-> [!IMPORTANT]
-> This feature is unavailable in the **Consumption** tier of API Management.
-
-The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it's incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
-
-For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
-
-To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
-----
-### Policy statement
-
-```xml
-<quota-by-key calls="number"
- bandwidth="kilobytes"
- renewal-period="seconds"
- increment-condition="condition"
- counter-key="key value"
- first-period-start="date-time" />
-```
-
-### Example
-
-In the following example, the quota is keyed by the caller IP address.
-
-```xml
-<policies>
- <inbound>
- <base />
- <quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
- increment-condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 400)"
- counter-key="@(context.Request.IpAddress)" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
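-
-The next sketch keys the quota on the subscription ID and uses `first-period-start` to anchor when the renewal window resets. It assumes every call carries a subscription key; the 30-day renewal period and start date are illustrative choices.
-
-```xml
-<policies>
- <inbound>
- <base />
- <quota-by-key calls="10000"
- bandwidth="40000"
- renewal-period="2592000"
- first-period-start="2023-01-01T00:00:00Z"
- counter-key="@(context.Subscription.Id)" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```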
-
-### Elements
-
-| Name | Description | Required |
-| -- | - | -- |
-| quota-by-key | Root element. | Yes |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| - | | - | - |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| counter-key | The key to use for the quota policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The boolean expression specifying if the request should be counted towards the quota (`true`) | No | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. | Yes | N/A |
-| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
-
-> [!NOTE]
-> The `counter-key` attribute value must be unique across all the APIs in the API Management if you don't want to share the total between the other APIs.
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-## <a name="ValidateAAD"></a> Validate Azure Active Directory token
-
-The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
-
-### Policy statement
-
-```xml
-<validate-azure-ad-token
- tenant-id="tenant ID or URL (for example, "contoso.onmicrosoft.com") of the Azure Active Directory service"
- header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
- query-parameter-name="name of query parameter used to pass the token (alternative, use header-name or token-value attribute to specify token)"
- token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
- failed-validation-httpcode="HTTP status code to return on failure"
- failed-validation-error-message="error message to return on failure"
- output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
- <client-application-ids>
- <application-id>Client application ID from Azure Active Directory</application-id>
- <!-- If there are multiple client application IDs, then add additional application-id elements -->
- </client-application-ids>
- <backend-application-ids>
- <application-id>Backend application ID from Azure Active Directory</application-id>
- <!-- If there are multiple backend application IDs, then add additional application-id elements -->
- </backend-application-ids>
- <audiences>
- <audience>audience string</audience>
- <!-- if there are multiple possible audiences, then add additional audience elements -->
- </audiences>
- <required-claims>
- <claim name="name of the claim as it appears in the token" match="all|any" separator="separator character in a multi-valued claim">
- <value>claim value as it is expected to appear in the token</value>
- <!-- if there is more than one allowed value, then add additional value elements -->
- </claim>
- <!-- if there are multiple possible allowed values, then add additional value elements -->
- </required-claims>
-</validate-azure-ad-token>
-```
-
-### Examples
-
-#### Simple token validation
-
-The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
-
-```xml
-<validate-azure-ad-token tenant-id="{{aad-tenant-id}}">
- <client-application-ids>
- <application-id>{{aad-client-application-id}}</application-id>
- </client-application-ids>
-</validate-azure-ad-token>
-```
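-
-#### Token validation with a custom error response
-
-The following sketch shows how the `failed-validation-httpcode` and `failed-validation-error-message` attributes described below could be used to shape the failure response. The tenant ID and client application ID are again supplied through named values.
-
-```xml
-<validate-azure-ad-token tenant-id="{{aad-tenant-id}}"
- failed-validation-httpcode="403"
- failed-validation-error-message="Access token is missing or invalid.">
- <client-application-ids>
- <application-id>{{aad-client-application-id}}</application-id>
- </client-application-ids>
-</validate-azure-ad-token>
-```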
-
-#### Validate that audience and claim are correct
-
-The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
-
-For more details on optional claims, read [Provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md).
-
-```xml
-<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
- <client-application-ids>
- <application-id>{{aad-client-application-id}}</application-id>
- </client-application-ids>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience>
- </audiences>
- <required-claims>
- <claim name="ctry" match="any">
- <value>US</value>
- </claim>
- </required-claims>
-</validate-azure-ad-token>
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | -- | -- |
-| validate-azure-ad-token | Root element. | Yes |
-| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
-| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. | No |
-| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. | Yes |
-| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| - | | -- | |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| separator | String. Specifies a separator (e.g. ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-### Limitations
-
-This policy can only be used with an Azure Active Directory tenant in the public Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
-
-## <a name="ValidateJWT"></a> Validate JWT
-
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
-
-> [!IMPORTANT]
-> The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
-> The `validate-jwt` policy supports HS256 and RS256 signing algorithms. For HS256 the key must be provided inline within the policy in the base64 encoded form. For RS256 the key may be provided either via an Open ID configuration endpoint, or by providing the ID of an uploaded certificate that contains the public key or modulus-exponent pair of the public key but in PFX format.
-> The `validate-jwt` policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512.
---
-### Policy statement
-
-```xml
-<validate-jwt
- header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
- query-parameter-name="name of query parameter used to pass the token (alternative, use header-name or token-value attribute to specify token)"
- token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
- failed-validation-httpcode="http status code to return on failure"
- failed-validation-error-message="error message to return on failure"
- require-expiration-time="true|false"
- require-scheme="scheme"
- require-signed-tokens="true|false"
- clock-skew="allowed clock skew in seconds"
- output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
- <openid-config url="full URL of the configuration endpoint, e.g. https://login.contoso.com/openid-configuration" />
- <issuer-signing-keys>
- <key>base64 encoded signing key</key>
- <!-- if there are multiple keys, then add additional key elements -->
- </issuer-signing-keys>
- <decryption-keys>
- <key>base64 encoded signing key</key>
- <!-- if there are multiple keys, then add additional key elements -->
- </decryption-keys>
- <audiences>
- <audience>audience string</audience>
- <!-- if there are multiple possible audiences, then add additional audience elements -->
- </audiences>
- <issuers>
- <issuer>issuer string</issuer>
- <!-- if there are multiple possible issuers, then add additional issuer elements -->
- </issuers>
- <required-claims>
- <claim name="name of the claim as it appears in the token" match="all|any" separator="separator character in a multi-valued claim">
- <value>claim value as it is expected to appear in the token</value>
- <!-- if there is more than one allowed values, then add additional value elements -->
- </claim>
- <!-- if there are multiple possible allowed values, then add additional value elements -->
- </required-claims>
-</validate-jwt>
-
-```
-
-### Examples
-
-#### Simple token validation
-
-```xml
-<validate-jwt header-name="Authorization" require-scheme="Bearer">
- <issuer-signing-keys>
- <key>{{jwt-signing-key}}</key> <!-- signing key specified as a named value -->
- </issuer-signing-keys>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
- </audiences>
- <issuers>
- <issuer>http://contoso.com/</issuer>
- </issuers>
-</validate-jwt>
-```
-
-#### Token validation with RSA certificate
-
-```xml
-<validate-jwt header-name="Authorization" require-scheme="Bearer">
- <issuer-signing-keys>
- <key certificate-id="my-rsa-cert" /> <!-- signing key specified as certificate ID, enclosed in double-quotes -->
- </issuer-signing-keys>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
- </audiences>
- <issuers>
- <issuer>http://contoso.com/</issuer>
- </issuers>
-</validate-jwt>
-```
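-
-#### Token validation with RSA modulus and exponent
-
-As described in the elements table below, an issuer signing key can also be supplied as a base64url-encoded RSA modulus and exponent pair. This sketch assumes the pair is stored in the named values `{{rsa-modulus}}` and `{{rsa-exponent}}`, which are placeholders for illustration.
-
-```xml
-<validate-jwt header-name="Authorization" require-scheme="Bearer">
- <issuer-signing-keys>
- <key n="{{rsa-modulus}}" e="{{rsa-exponent}}" /> <!-- RSA parameters supplied as named values -->
- </issuer-signing-keys>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
- </audiences>
- <issuers>
- <issuer>http://contoso.com/</issuer>
- </issuers>
-</validate-jwt>
-```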
-
-#### Azure Active Directory token validation
-
-> [!NOTE]
-> Use the [`validate-azure-ad-token`](#ValidateAAD) policy to validate tokens against Azure Active Directory.
-
-```xml
-<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
- <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
- <audiences>
- <audience>25eef6e4-c905-4a07-8eb4-0d08d5df8b3f</audience>
- </audiences>
- <required-claims>
- <claim name="id" match="all">
- <value>insert claim here</value>
- </claim>
- </required-claims>
-</validate-jwt>
-```
-
-#### Azure Active Directory B2C token validation
-
-```xml
-<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
- <openid-config url="https://login.microsoftonline.com/tfp/contoso.onmicrosoft.com/b2c_1_signin/v2.0/.well-known/openid-configuration" />
- <audiences>
- <audience>d313c4e4-de5f-4197-9470-e509a2f0b806</audience>
- </audiences>
- <required-claims>
- <claim name="id" match="all">
- <value>insert claim here</value>
- </claim>
- </required-claims>
-</validate-jwt>
-```
-
-#### Authorize access to operations based on token claims
-
-This example shows how to use the [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to authorize access to operations based on token claims value.
-
-```xml
-<validate-jwt header-name="Authorization" require-scheme="Bearer" output-token-variable-name="jwt">
- <issuer-signing-keys>
- <key>{{jwt-signing-key}}</key> <!-- signing key is stored in a named value -->
- </issuer-signing-keys>
- <audiences>
- <audience>@(context.Request.OriginalUrl.Host)</audience>
- </audiences>
- <issuers>
- <issuer>contoso.com</issuer>
- </issuers>
- <required-claims>
- <claim name="group" match="any">
- <value>finance</value>
- <value>logistics</value>
- </claim>
- </required-claims>
-</validate-jwt>
-<choose>
- <when condition="@(context.Request.Method == "POST" && !((Jwt)context.Variables["jwt"]).Claims["group"].Contains("finance"))">
- <return-response>
- <set-status code="403" reason="Forbidden" />
- </return-response>
- </when>
-</choose>
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | -- | -- |
-| validate-jwt | Root element. | Yes |
-| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
-| issuer-signing-keys | A list of Base64-encoded security keys used to validate signed tokens. If multiple security keys are present, then each key is tried until either all are exhausted (in which case validation fails) or one succeeds (useful for token rollover). Key elements have an optional `id` attribute used to match against `kid` claim. <br/><br/>Alternatively supply an issuer signing key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management<br/>- RSA modulus `n` and exponent `e` pair in format `<key n="<modulus>" e="<exponent>" />` to specify the RSA parameters in base64url-encoded format | No |
-| decryption-keys | A list of Base64-encoded keys used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds. Key elements have an optional `id` attribute used to match against `kid` claim.<br/><br/>Alternatively supply a decryption key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management | No |
-| issuers | A list of acceptable principals that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No |
-| openid-config | Add one or more of these elements to specify a compliant OpenID configuration endpoint from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. | No |
-| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| - | | -- | |
-| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| id | The `id` attribute on the `key` element allows you to specify the string that will be matched against `kid` claim in the token (if present) to find out the appropriate key to use for signature validation. | No | N/A |
-| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
-| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
-| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
-| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
-| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
-| url | Open ID configuration endpoint URL from where OpenID configuration metadata can be obtained. The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Azure Active Directory use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/> - (v2 multitenant) ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | Yes | N/A |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-## Validate client certificate
-
-Use the `validate-client-certificate` policy to enforce that a certificate presented by a client to an API Management instance matches specified validation rules and claims such as subject or issuer for one or more certificate identities.
-
-To be considered valid, a client certificate must match all the validation rules defined by the attributes at the top-level element and match all defined claims for at least one of the defined identities.
-
-Use this policy to check incoming certificate properties against desired properties. Also use this policy to override default validation of client certificates in these cases:
-
-* If you have uploaded custom CA certificates to validate client requests to the managed gateway
-* If you configured custom certificate authorities to validate client requests to a self-managed gateway
-
-For more information about custom CA certificates and certificate authorities, see [How to add a custom CA certificate in Azure API Management](api-management-howto-ca-certificates.md).
-
-
-### Policy statement
-
-```xml
-<validate-client-certificate
- validate-revocation="true|false"
- validate-trust="true|false"
- validate-not-before="true|false"
- validate-not-after="true|false"
- ignore-error="true|false">
- <identities>
- <identity
- thumbprint="certificate thumbprint"
- serial-number="certificate serial number"
- common-name="certificate common name"
- subject="certificate subject string"
- dns-name="certificate DNS name"
- issuer-subject="certificate issuer"
- issuer-thumbprint="certificate issuer thumbprint"
- issuer-certificate-id="certificate identifier" />
- </identities>
-</validate-client-certificate>
-```
-
-### Example
-
-The following example validates a client certificate to match the policy's default validation rules and checks whether the subject and issuer name match specified values.
-
-```xml
-<validate-client-certificate
- validate-revocation="true"
- validate-trust="true"
- validate-not-before="true"
- validate-not-after="true"
- ignore-error="false">
- <identities>
- <identity
- subject="C=US, ST=Illinois, L=Chicago, O=Contoso Corp., CN=*.contoso.com"
- issuer-subject="C=BE, O=FabrikamSign nv-sa, OU=Root CA, CN=FabrikamSign Root CA" />
- </identities>
-</validate-client-certificate>
-```
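-
-The following sketch relaxes revocation and trust-chain checking and matches on a certificate thumbprint instead of subject and issuer, which can be convenient for self-signed test certificates. The thumbprint value is a placeholder.
-
-```xml
-<validate-client-certificate
- validate-revocation="false"
- validate-trust="false"
- validate-not-before="true"
- validate-not-after="true"
- ignore-error="false">
- <identities>
- <identity thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
- </identities>
-</validate-client-certificate>
-```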
-
-### Elements
-
-| Element | Description | Required |
-| - | -- | -- |
-| validate-client-certificate | Root element. | Yes |
-| identities | Contains a list of identities with defined claims on the client certificate. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| - | --| -- | -- |
-| validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. | No | True |
-| validate-trust | Boolean. Specifies if validation should fail in case the chain can't be successfully built up to a trusted CA. | No | True |
-| validate-not-before | Boolean. Validates value against current time. | No | True |
-| validate-not-after | Boolean. Validates value against current time. | No | True |
-| ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. | No | False |
-| identity | String. Combination of certificate claim values that make certificate valid. | Yes | N/A |
-| thumbprint | Certificate thumbprint. | No | N/A |
-| serial-number | Certificate serial number. | No | N/A |
-| common-name | Certificate common name (part of Subject string). | No | N/A |
-| subject | Subject string. Must follow format of Distinguished Name. | No | N/A |
-| dns-name | Value of dnsName entry inside Subject Alternative Name claim. | No | N/A |
-| issuer-subject | Issuer's subject. Must follow format of Distinguished Name. | No | N/A |
-| issuer-thumbprint | Issuer thumbprint. | No | N/A |
-| issuer-certificate-id | Identifier of existing certificate entity representing the issuer's public key. Mutually exclusive with other issuer attributes. | No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
- Title: Azure API Management advanced policies | Microsoft Docs
-description: Reference for the advanced policies available for use in Azure API Management. Provides policy usage, settings and examples.
-- Previously updated : 04/28/2022
-# API Management advanced policies
-
-This article provides a reference for advanced API Management policies, such as those that are based on policy expressions.
--
-## <a name="AdvancedPolicies"></a> Advanced policies
-- [Control flow](api-management-advanced-policies.md#choose) - Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md).
-- [Forward request](#ForwardRequest) - Forwards the request to the backend service.
-- [Include fragment](#IncludeFragment) - Inserts a policy fragment in the policy definition.
-- [Limit concurrency](#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time.
-- [Log to event hub](#log-to-eventhub) - Sends messages in the specified format to an event hub defined by a Logger entity.
-- [Emit metrics](#emit-metrics) - Sends custom metrics to Application Insights at execution.
-- [Mock response](#mock-response) - Aborts pipeline execution and returns a mocked response directly to the caller.
-- [Retry](#Retry) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count.
-- [Return response](#ReturnResponse) - Aborts pipeline execution and returns the specified response directly to the caller.
-- [Send one way request](#SendOneWayRequest) - Sends a request to the specified URL without waiting for a response.
-- [Send request](#SendRequest) - Sends a request to the specified URL.
-- [Set HTTP proxy](#SetHttpProxy) - Allows you to route forwarded requests via an HTTP proxy.
-- [Set request method](#SetRequestMethod) - Allows you to change the HTTP method for a request.
-- [Set status code](#SetStatus) - Changes the HTTP status code to the specified value.
-- [Set variable](api-management-advanced-policies.md#set-variable) - Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access.
-- [Trace](#Trace) - Adds custom traces into the [API Inspector](./api-management-howto-api-inspector.md) output, Application Insights telemetries, and Resource Logs.
-- [Wait](#Wait) - Waits for enclosed [Send request](api-management-advanced-policies.md#SendRequest), [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey), or [Control flow](api-management-advanced-policies.md#choose) policies to complete before proceeding.
-## <a name="choose"></a> Control flow
-
-The `choose` policy applies enclosed policy statements based on the outcome of evaluation of Boolean expressions, similar to an if-then-else or a switch construct in a programming language.
--
-### <a name="ChoosePolicyStatement"></a> Policy statement
-
-```xml
-<choose>
- <when condition="Boolean expression | Boolean constant">
- <!-- one or more policy statements to be applied if the above condition is true -->
- </when>
- <when condition="Boolean expression | Boolean constant">
- <!-- one or more policy statements to be applied if the above condition is true -->
- </when>
- <otherwise>
- <!-- one or more policy statements to be applied if none of the above conditions are true -->
- </otherwise>
-</choose>
-```
-
-The control flow policy must contain at least one `<when/>` element. The `<otherwise/>` element is optional. Conditions in `<when/>` elements are evaluated in order of their appearance within the policy. Policy statements enclosed within the first `<when/>` element whose condition attribute evaluates to `true` will be applied. Policies enclosed within the `<otherwise/>` element, if present, will be applied if all of the `<when/>` element condition attributes evaluate to `false`.
-
-### Examples
-
-#### <a name="ChooseExample"></a> Example
-
-The following example demonstrates a [set-variable](api-management-advanced-policies.md#set-variable) policy and two control flow policies.
-
-The set variable policy is in the inbound section and creates an `isMobile` Boolean [context](api-management-policy-expressions.md#ContextVariables) variable that is set to true if the `User-Agent` request header contains the text `iPad` or `iPhone`.
-
-The first control flow policy is also in the inbound section, and conditionally applies one of two [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) policies depending on the value of the `isMobile` context variable.
-
-The second control flow policy is in the outbound section and conditionally applies the [Convert XML to JSON](api-management-transformation-policies.md#ConvertXMLtoJSON) policy when `isMobile` is set to `true`.
-
-```xml
-<policies>
- <inbound>
- <set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPad") || context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
- <base />
- <choose>
- <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
- <set-query-parameter name="mobile" exists-action="override">
- <value>true</value>
- </set-query-parameter>
- </when>
- <otherwise>
- <set-query-parameter name="mobile" exists-action="override">
- <value>false</value>
- </set-query-parameter>
- </otherwise>
- </choose>
- </inbound>
- <outbound>
- <base />
- <choose>
- <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
- <xml-to-json kind="direct" apply="always" consider-accept-header="false"/>
- </when>
- </choose>
- </outbound>
-</policies>
-```
-
-#### Example
-
-This example shows how to perform content filtering by removing data elements from the response received from the backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
-
-```xml
-<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
-<choose>
- <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))">
- <set-body>@{
- var response = context.Response.Body.As<JObject>();
- foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
- response.Property (key).Remove ();
- }
- return response.ToString();
- }
- </set-body>
- </when>
-</choose>
-```
-
-### Elements
-
-| Element | Description | Required |
-| | - | -- |
-| choose | Root element. | Yes |
-| when | The condition to use for the `if` or `ifelse` parts of the `choose` policy. If the `choose` policy has multiple `when` sections, they are evaluated sequentially. Once the `condition` of a when element evaluates to `true`, no further `when` conditions are evaluated. | Yes |
-| otherwise | Contains the policy snippet to be used if none of the `when` conditions evaluate to `true`. | No |
-
-### Attributes
-
-| Attribute | Description | Required |
-| | | -- |
-| condition="Boolean expression &#124; Boolean constant" | The Boolean expression or constant to be evaluated when the containing `when` policy statement is evaluated. | Yes |
-
-### <a name="ChooseUsage"></a> Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## <a name="ForwardRequest"></a> Forward request
-
-The `forward-request` policy forwards the incoming request to the backend service specified in the request [context](api-management-policy-expressions.md#ContextVariables). The backend service URL is specified in the API [settings](./import-and-publish.md) and can be changed using the [set backend service](api-management-transformation-policies.md) policy.
-
-> [!IMPORTANT]
-> * This policy is required to forward requests to an API backend. By default, API Management sets up this policy at the global scope.
-> * Removing this policy results in the request not being forwarded to the backend service. Policies in the outbound section are evaluated immediately upon the successful completion of the policies in the inbound section.
--
-### Policy statement
-
-```xml
-<forward-request timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
-```
-
-### Examples
-
-#### Example
-
-The following API level policy forwards all API requests to the backend service with a timeout interval of 60 seconds.
-
-```xml
-<!-- api level -->
-<policies>
- <inbound>
- <base/>
- </inbound>
- <backend>
- <forward-request timeout="60"/>
- </backend>
- <outbound>
- <base/>
- </outbound>
-</policies>
-
-```
-
-#### Example
-
-This operation level policy uses the `base` element to inherit the backend policy from the parent API level scope.
-
-```xml
-<!-- operation level -->
-<policies>
- <inbound>
- <base/>
- </inbound>
- <backend>
- <base/>
- </backend>
- <outbound>
- <base/>
- </outbound>
-</policies>
-
-```
-
-#### Example
-
-This operation level policy explicitly forwards all requests to the backend service with a timeout of 120 seconds and doesn't inherit the parent API level backend policy. If the backend service responds with an error status code from 400 to 599 inclusive, the [on-error](api-management-error-handling-policies.md) section will be triggered.
-
-```xml
-<!-- operation level -->
-<policies>
- <inbound>
- <base/>
- </inbound>
- <backend>
- <forward-request timeout="120" fail-on-error-status-code="true" />
- <!-- effective policy. note the absence of <base/> -->
- </backend>
- <outbound>
- <base/>
- </outbound>
-</policies>
-
-```
-
-#### Example
-
-This operation level policy does not forward requests to the backend service.
-
-```xml
-<!-- operation level -->
-<policies>
- <inbound>
- <base/>
- </inbound>
- <backend>
- <!-- no forwarding to backend -->
- </backend>
- <outbound>
- <base/>
- </outbound>
-</policies>
-
-```
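-
-#### Example
-
-This operation level policy is a sketch of how the `buffer-response` attribute described below might be set to `false` for a backend that streams results, such as one implementing server-sent events, so that each chunk is returned to the caller as soon as it arrives.
-
-```xml
-<!-- operation level -->
-<policies>
- <inbound>
- <base/>
- </inbound>
- <backend>
- <forward-request timeout="120" buffer-response="false" />
- </backend>
- <outbound>
- <base/>
- </outbound>
-</policies>
-```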
-
-### Elements
-
-| Element | Description | Required |
-| | - | -- |
-| forward-request | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | -- | -- | - |
-| timeout="integer" | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored as the underlying network infrastructure can drop idle connections after this time. | No | 300 |
-| follow-redirects="false &#124; true" | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. | No | false |
-| buffer-request-body="false &#124; true" | When set to "true", request is buffered and will be reused on [retry](api-management-advanced-policies.md#Retry). | No | false |
-| buffer-response="false &#124; true" | Affects processing of chunked responses. When set to "false", each chunk received from the backend is immediately returned to the caller. When set to "true", chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to "false" with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. | No | true |
-| fail-on-error-status-code="false &#124; true" | When set to true, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. | No | false |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** backend
-- **Policy scopes:** all scopes
-## <a name="IncludeFragment"></a> Include fragment
-
-The `include-fragment` policy inserts the contents of a previously created [policy fragment](policy-fragments.md) in the policy definition. A policy fragment is a centrally managed, reusable XML policy snippet that can be included in policy definitions in your API Management instance.
-
-The policy inserts the policy fragment as-is at the location you select in the policy definition.
-
-### Policy statement
-
-```xml
-<include-fragment fragment-id="fragment" />
-```
-
-### Example
-
-In the following example, the policy fragment named *myFragment* is added in the inbound section of a policy definition.
-
-```xml
-<inbound>
- <include-fragment fragment-id="myFragment" />
- <base />
-</inbound>
-[...]
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | - | -- |
-| include-fragment | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | -- | -- | - |
-| fragment-id | A string. Expression allowed. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## <a name="LimitConcurrency"></a> Limit concurrency
-
-The `limit-concurrency` policy prevents enclosed policies from executing by more than the specified number of requests at any time. When that number is exceeded, new requests will fail immediately with the `429` Too Many Requests status code.
--
-### <a name="LimitConcurrencyStatement"></a> Policy statement
-
-```xml
-<limit-concurrency key="expression" max-count="number">
- <!-- nested policy statements -->
-</limit-concurrency>
-```
-
-### Example
-
-The following example demonstrates how to limit the number of requests forwarded to a backend based on the value of a context variable.
-
-```xml
-<policies>
- <inbound>…</inbound>
- <backend>
- <limit-concurrency key="@((string)context.Variables["connectionId"])" max-count="3">
- <forward-request timeout="120"/>
- </limit-concurrency>
- </backend>
- <outbound>…</outbound>
-</policies>
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | - | -- |
-| limit-concurrency | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | -- | -- | - |
-| key | A string. Expression allowed. Specifies the concurrency scope. Can be shared by multiple policies. | Yes | N/A |
-| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## <a name="log-to-eventhub"></a> Log to event hub
-
-The `log-to-eventhub` policy sends messages in the specified format to an event hub defined by a Logger entity. As its name implies, the policy is used for saving selected request or response context information for online or offline analysis.
-The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
-
-> [!NOTE]
-> For a step-by-step guide on configuring an event hub and logging events, see [How to log API Management events with Azure Event Hubs](./api-management-howto-log-event-hubs.md).
---
-### Policy statement
-
-```xml
-<log-to-eventhub logger-id="id of the logger entity" partition-id="index of the partition where messages are sent" partition-key="value used for partition assignment">
- Expression returning a string to be logged
-</log-to-eventhub>
-
-```
-
-### Example
-
-Any string can be used as the value to be logged in Event Hubs. In this example, the date and time, deployment service name, request ID, IP address, and operation name for all inbound calls are logged to the event hub Logger registered with the `contoso-logger` ID.
-
-```xml
-<policies>
- <inbound>
- <log-to-eventhub logger-id ='contoso-logger'>
- @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress, context.Operation.Name) )
- </log-to-eventhub>
- </inbound>
- <outbound>
- </outbound>
-</policies>
-```
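-
-A structured message can make downstream processing easier. The following sketch logs a small JSON document and uses the `partition-key` attribute so that all messages for a given subscription land in the same partition; it assumes calls carry a subscription key and reuses the `contoso-logger` logger ID from the example above.
-
-```xml
-<log-to-eventhub logger-id="contoso-logger" partition-key="@(context.Subscription.Id)">
- @{
- return new JObject(
- new JProperty("EventTime", DateTime.UtcNow.ToString()),
- new JProperty("ServiceName", context.Deployment.ServiceName),
- new JProperty("RequestId", context.RequestId),
- new JProperty("RequestIp", context.Request.IpAddress),
- new JProperty("OperationName", context.Operation.Name)
- ).ToString();
- }
-</log-to-eventhub>
-```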
-
-### Elements
-
-| Element | Description | Required |
-| | - | -- |
-| log-to-eventhub | Root element. The value of this element is the string to log to your event hub. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required |
-| - | - | -- |
-| logger-id | The ID of the Logger registered with your API Management service. | Yes |
-| partition-id | Specifies the index of the partition where messages are sent. | Optional. This attribute may not be used if `partition-key` is used. |
-| partition-key | Specifies the value used for partition assignment when messages are sent. | Optional. This attribute may not be used if `partition-id` is used. |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## Emit metrics
-
-The `emit-metric` policy sends custom metrics in the specified format to Application Insights.
-
-> [!NOTE]
-> * Custom metrics are a [preview feature](../azure-monitor/essentials/metrics-custom-overview.md) of Azure Monitor and subject to [limitations](../azure-monitor/essentials/metrics-custom-overview.md#design-limitations-and-considerations).
-> * For more information about the API Management data added to Application Insights, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#what-data-is-added-to-application-insights).
--
-### Policy statement
-
-```xml
-<emit-metric name="name of custom metric" value="value of custom metric" namespace="metric namespace">
- <dimension name="dimension name" value="dimension value" />
-</emit-metric>
-```
-
-### Example
-
-The following example sends a custom metric to count the number of API requests along with user ID, client IP, and API ID as custom dimensions.
-
-```xml
-<policies>
- <inbound>
- <emit-metric name="Request" value="1" namespace="my-metrics">
- <dimension name="User ID" />
- <dimension name="Client IP" value="@(context.Request.IpAddress)" />
- <dimension name="API ID" />
- </emit-metric>
- </inbound>
- <outbound>
- </outbound>
-</policies>
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | | -- |
-| emit-metric | Root element. Emits the custom metric specified by its attributes. | Yes |
-| dimension | Sub element. Add one or more of these elements for each dimension included in the custom metric. | Yes |
-
-### Attributes
-
-#### emit-metric
-| Attribute | Description | Required | Type | Default value |
-| | -- | -- | | -- |
-| name | Name of custom metric. | Yes | string, expression | N/A |
-| namespace | Namespace of custom metric. | No | string, expression | API Management |
-| value | Value of custom metric. | No | int, expression | 1 |
-
-#### dimension
-| Attribute | Description | Required | Type | Default value |
-| | -- | -- | | -- |
-| name | Name of dimension. | Yes | string, expression | N/A |
-| value | Value of dimension. Can only be omitted if `name` matches one of the default dimensions. If so, value is provided as per dimension name. | No | string, expression | N/A |
-
-**Default dimension names that may be used without value:**
-
-* API ID
-* Operation ID
-* Product ID
-* User ID
-* Subscription ID
-* Location ID
-* Gateway ID
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="mock-response"></a> Mock response
-
-The `mock-response` policy, as the name implies, is used to mock APIs and operations. It aborts normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, whenever available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples nor schemas are found, responses with no content are returned.
---
-### Policy statement
-
-```xml
-<mock-response status-code="code" content-type="media type"/>
-
-```
-
-### Examples
-
-```xml
-<!-- Returns 200 OK status code. Content is based on an example or schema, if provided for this
-status code. First found content type is used. If no example or schema is found, the content is empty. -->
-<mock-response/>
-
-<!-- Returns 200 OK status code. Content is based on an example or schema, if provided for this
-status code and media type. If no example or schema found, the content is empty. -->
-<mock-response status-code='200' content-type='application/json'/>
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | - | -- |
-| mock-response | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | -- | -- | - |
-| status-code | Specifies response status code and is used to select corresponding example or schema. | No | 200 |
-| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. | No | None |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="Retry"></a> Retry
-
-The `retry` policy executes its child policies once and then retries their execution until the retry `condition` becomes `false` or retry `count` is exhausted.
---
-### Policy statement
-
-```xml
-
-<retry
- condition="boolean expression or literal"
- count="number of retry attempts"
- interval="retry interval in seconds"
- max-interval="maximum retry interval in seconds"
- delta="retry interval delta in seconds"
- first-fast-retry="boolean expression or literal">
- <!-- One or more child policies. No restrictions -->
-</retry>
-
-```
-
-### Example
-
-In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to false, all retry attempts are subject to exponentially increasing retry wait times (in this example, approximately 10 seconds, 20 seconds, 40 seconds, ...), up to a maximum wait of `max-interval`.
-
-```xml
-
-<retry
- condition="@(context.Response.StatusCode == 500)"
- count="10"
- interval="10"
- max-interval="100"
- delta="10"
- first-fast-retry="false">
- <forward-request buffer-request-body="true" />
-</retry>
-
-```
-
-### Example
-
-In the following example, sending a request to a URL other than the defined backend is retried up to three times if the connection is dropped/timed out, or the request results in a server-side error. Since `first-fast-retry` is set to true, the first retry is executed immediately upon the initial request failure. Note that `send-request` must set `ignore-error` to true in order for `response-variable-name` to be null in the event of an error.
-
-```xml
-
-<retry
- condition="@(context.Variables["response"] == null || ((IResponse)context.Variables["response"]).StatusCode >= 500)"
- count="3"
- interval="1"
- first-fast-retry="true">
- <send-request
- mode="new"
- response-variable-name="response"
- timeout="3"
- ignore-error="true">
- <set-url>https://api.contoso.com/products/5</set-url>
- <set-method>GET</set-method>
- </send-request>
-</retry>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | - | -- |
-| retry | Root element. May contain any other policies as its child elements. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| - | -- | -- | - |
-| condition | A boolean literal or [expression](api-management-policy-expressions.md) specifying if retries should be stopped (`false`) or continued (`true`). | Yes | N/A |
-| count | A positive number specifying the maximum number of retries to attempt. | Yes | N/A |
-| interval | A positive number in seconds specifying the wait interval between the retry attempts. | Yes | N/A |
-| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. | No | N/A |
-| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A |
-| first-fast-retry | If set to `true` , the first retry attempt is performed immediately. | No | `false` |
-
-#### Retry wait times
-
-* When only the `interval` is specified, **fixed** interval retries are performed.
-* When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used. The wait time between retries increases according to the following formula: `interval + (count - 1)*delta`.
-* When the `interval`, `max-interval` and `delta` are specified, an **exponential** interval retry algorithm is applied. The wait time between the retries increases exponentially according to the following formula: `interval + (2^count - 1) * random(delta * 0.8, delta * 1.2)`, up to a maximum interval set by `max-interval`.
-
- For example, when `interval` and `delta` are both set to 10 seconds, and `max-interval` is 100 seconds, the approximate wait time between retries increases as follows: 10 seconds, 20 seconds, 40 seconds, 80 seconds, with 100 seconds wait time used for remaining retries.
-
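-For comparison, the following sketch (with hypothetical values) specifies only `interval` and `delta`, so the **linear** algorithm applies: the wait time starts at 5 seconds and grows by `delta` (5 seconds) before each subsequent retry.
-
-```xml
-<retry
-    condition="@(context.Response.StatusCode == 500)"
-    count="5"
-    interval="5"
-    delta="5"
-    first-fast-retry="false">
-    <!-- Waits of roughly 5, 10, 15, ... seconds between attempts -->
-    <forward-request buffer-request-body="true" />
-</retry>
-```
-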
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes). Child policy usage restrictions will be inherited by this policy.
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="ReturnResponse"></a> Return response
-
-The `return-response` policy aborts pipeline execution and returns either a default or custom response to the caller. Default response is `200 OK` with no body. Custom response can be specified via a context variable or policy statements. When both are provided, the response contained within the context variable is modified by the policy statements before being returned to the caller.
---
-### Policy statement
-
-```xml
-<return-response response-variable-name="existing context variable">
- <set-status/>
- <set-header/>
- <set-body/>
-</return-response>
-
-```
-
-### Example
-
-```xml
-<return-response>
- <set-status code="401" reason="Unauthorized"/>
- <set-header name="WWW-Authenticate" exists-action="override">
- <value>Bearer error="invalid_token"</value>
- </set-header>
-</return-response>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| | -- | -- |
-| return-response | Root element. | Yes |
-| set-status | A [set-status](api-management-advanced-policies.md#SetStatus) policy statement. | No |
-| set-header | A [set-header](api-management-transformation-policies.md#SetHTTPheader) policy statement. | No |
-| set-body | A [set-body](api-management-transformation-policies.md#SetBody) policy statement. | No |
-
-### Attributes
-
-| Attribute | Description | Required |
-| - | | |
-| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](api-management-advanced-policies.md#SendRequest) policy and containing a `Response` object | Optional. |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="SendOneWayRequest"></a> Send one way request
-
-The `send-one-way-request` policy sends the provided request to the specified URL without waiting for a response.
---
-### Policy statement
-
-```xml
-<send-one-way-request mode="new | copy">
- <set-url>...</set-url>
- <method>...</method>
- <header name="" exists-action="override | skip | append | delete">...</header>
- <body>...</body>
- <authentication-certificate thumbprint="thumbprint" />
-</send-one-way-request>
-
-```
-
-### Example
-
-This sample policy shows an example of using the `send-one-way-request` policy to send a message to a Slack chat room if the HTTP response code is greater than or equal to 500. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
-
-```xml
-<choose>
- <when condition="@(context.Response.StatusCode >= 500)">
- <send-one-way-request mode="new">
- <set-url>https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX</set-url>
- <set-method>POST</set-method>
- <set-body>@{
- return new JObject(
- new JProperty("username","APIM Alert"),
- new JProperty("icon_emoji", ":ghost:"),
- new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
- context.Request.Method,
- context.Request.Url.Path + context.Request.Url.QueryString,
- context.Request.Url.Host,
- context.Response.StatusCode,
- context.Response.StatusReason,
- context.User.Email
- ))
- ).ToString();
- }</set-body>
- </send-one-way-request>
- </when>
-</choose>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | -- | - |
-| send-one-way-request | Root element. | Yes |
-| set-url | The URL of the request. | No if mode=copy; otherwise yes. |
-| method | The HTTP method for the request. | No if mode=copy; otherwise yes. |
-| header | Request header. Use multiple header elements for multiple request headers. | No |
-| body | The request body. | No |
-| authentication-certificate | [Certificate to use for client authentication](api-management-authentication-policies.md#ClientCertificate) | No |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| - | -- | -- | -- |
-| mode="string" | Determines whether this is a new request or a copy of the current request. In outbound mode, mode=copy does not initialize the request body. | No | New |
-| name | Specifies the name of the header to be set. | Yes | N/A |
-| exists-action | Specifies what action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing header.<br />- skip - does not replace the existing header value.<br />- append - appends the value to the existing header value.<br />- delete - removes the header from the request.<br /><br /> When set to `override`, listing multiple entries with the same name results in the header being set according to all entries (the header will be listed multiple times); only listed values will be set in the result. | No | override |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="SendRequest"></a> Send request
-
-The `send-request` policy sends the provided request to the specified URL, waiting no longer than the set timeout value.
---
-### Policy statement
-
-```xml
-<send-request mode="new|copy" response-variable-name="" timeout="60 sec" ignore-error
-="false|true">
- <set-url>...</set-url>
- <set-method>...</set-method>
- <set-header name="" exists-action="override|skip|append|delete">...</set-header>
- <set-body>...</set-body>
- <authentication-certificate thumbprint="thumbprint" />
-</send-request>
-
-```
-
-### Example
-
-This example shows one way to verify a reference token with an authorization server. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
-
-```xml
-<inbound>
- <!-- Extract Token from Authorization header parameter -->
- <set-variable name="token" value="@(context.Request.Headers.GetValueOrDefault("Authorization","scheme param").Split(' ').Last())" />
-
- <!-- Send request to Token Server to validate token (see RFC 7662) -->
- <send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
- <set-url>https://microsoft-apiappec990ad4c76641c6aea22f566efc5a4e.azurewebsites.net/introspection</set-url>
- <set-method>POST</set-method>
- <set-header name="Authorization" exists-action="override">
- <value>basic dXNlcm5hbWU6cGFzc3dvcmQ=</value>
- </set-header>
- <set-header name="Content-Type" exists-action="override">
- <value>application/x-www-form-urlencoded</value>
- </set-header>
- <set-body>@($"token={(string)context.Variables["token"]}")</set-body>
- </send-request>
-
- <choose>
- <!-- Check active property in response -->
- <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
- <!-- Return 401 Unauthorized with http-problem payload -->
- <return-response>
- <set-status code="401" reason="Unauthorized" />
- <set-header name="WWW-Authenticate" exists-action="override">
- <value>Bearer error="invalid_token"</value>
- </set-header>
- </return-response>
- </when>
- </choose>
- <base />
-</inbound>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | -- | - |
-| send-request | Root element. | Yes |
-| url | The URL of the request. | No if mode=copy; otherwise yes. |
-| method | The HTTP method for the request. | No if mode=copy; otherwise yes. |
-| header | Request header. Use multiple header elements for multiple request headers. | No |
-| body | The request body. | No |
-| authentication-certificate | [Certificate to use for client authentication](api-management-authentication-policies.md#ClientCertificate) | No |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| - | -- | -- | -- |
-| mode="string" | Determines whether this is a new request or a copy of the current request. In outbound mode, mode=copy does not initialize the request body. | No | New |
-| response-variable-name="string" | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A |
-| timeout="integer" | The timeout interval in seconds before the call to the URL fails. | No | 60 |
-| ignore-error | If true and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | false |
-| name | Specifies the name of the header to be set. | Yes | N/A |
-| exists-action | Specifies what action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing header.<br />- skip - does not replace the existing header value.<br />- append - appends the value to the existing header value.<br />- delete - removes the header from the request.<br /><br /> When set to `override`, listing multiple entries with the same name results in the header being set according to all entries (the header will be listed multiple times); only listed values will be set in the result. | No | override |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="SetHttpProxy"></a> Set HTTP proxy
-
-The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy, and only Basic and NTLM authentication are supported. To route a `send-request` call via the HTTP proxy, you must place the set HTTP proxy policy inside the `send-request` policy block.
---
-### Policy statement
-
-```xml
-<proxy url="http://hostname-or-ip:port" username="username" password="password" />
-
-```
-
-### Example
-
-Note the use of [properties](api-management-howto-properties.md) as values of the username and password to avoid storing sensitive information in the policy document.
-
-```xml
-<proxy url="http://192.168.1.1:8080" username="{{username}}" password="{{password}}" />
-
-```
-
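-As noted above, to route a `send-request` call through the proxy, place the `proxy` policy inside the `send-request` block. A minimal sketch (the target URL is hypothetical) might look like this:
-
-```xml
-<send-request mode="new" response-variable-name="proxiedResponse" timeout="20" ignore-error="true">
-    <set-url>https://internal.contoso.example/status</set-url>
-    <set-method>GET</set-method>
-    <!-- Routes this call through the HTTP proxy defined above -->
-    <proxy url="http://192.168.1.1:8080" username="{{username}}" password="{{password}}" />
-</send-request>
-```
-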
-### Elements
-
-| Element | Description | Required |
-| - | | -- |
-| proxy | Root element | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| -- | | -- | - |
-| url="string" | Proxy URL in the form of http://host:port. | Yes | N/A |
-| username="string" | Username to be used for authentication with the proxy. | No | N/A |
-| password="string" | Password to be used for authentication with the proxy. | No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
-## <a name="SetRequestMethod"></a> Set request method
-
-The `set-method` policy allows you to change the HTTP request method for a request.
---
-### Policy statement
-
-```xml
-<set-method>METHOD</set-method>
-
-```
-
-### Example
-
-This sample policy that uses the `set-method` policy shows an example of sending a message to a Slack chat room if the HTTP response code is greater than or equal to 500. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
-
-```xml
-<choose>
- <when condition="@(context.Response.StatusCode >= 500)">
- <send-one-way-request mode="new">
- <set-url>https://hooks.slack.com/services/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</set-url>
- <set-method>POST</set-method>
- <set-body>@{
- return new JObject(
- new JProperty("username","APIM Alert"),
- new JProperty("icon_emoji", ":ghost:"),
- new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
- context.Request.Method,
- context.Request.Url.Path + context.Request.Url.QueryString,
- context.Request.Url.Host,
- context.Response.StatusCode,
- context.Response.StatusReason,
- context.User.Email
- ))
- ).ToString();
- }</set-body>
- </send-one-way-request>
- </when>
-</choose>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | -- | -- |
-| set-method | Root element. The value of the element specifies the HTTP method. | Yes |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="SetStatus"></a> Set status code
-
-The `set-status` policy sets the HTTP status code to the specified value.
---
-### Policy statement
-
-```xml
-<set-status code="" reason=""/>
-
-```
-
-### Example
-
-This example shows how to return a 401 response if the authorization token is invalid. For more information, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md)
-
-```xml
-<choose>
- <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
- <return-response response-variable-name="existing response variable">
- <set-status code="401" reason="Unauthorized" />
- <set-header name="WWW-Authenticate" exists-action="override">
- <value>Bearer error="invalid_token"</value>
- </set-header>
- </return-response>
- </when>
-</choose>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | - | -- |
-| set-status | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | - | -- | - |
-| code="integer" | The HTTP status code to return. | Yes | N/A |
-| reason="string" | A description of the reason for returning the status code. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-
-## <a name="set-variable"></a> Set variable
-
-The `set-variable` policy declares a [context](api-management-policy-expressions.md#ContextVariables) variable and assigns it a value specified via an [expression](api-management-policy-expressions.md) or a string literal. If the expression contains a literal, it will be converted to a string and the type of the value will be `System.String`.
---
-### <a name="set-variablePolicyStatement"></a> Policy statement
-
-```xml
-<set-variable name="variable name" value="Expression | String literal" />
-```
-
-### <a name="set-variableExample"></a> Example
-
-The following example demonstrates a set variable policy in the inbound section. This set variable policy creates an `IsMobile` Boolean [context](api-management-policy-expressions.md#ContextVariables) variable that is set to true if the `User-Agent` request header contains the text `iPad` or `iPhone`.
-
-```xml
-<set-variable name="IsMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPad") || context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
-```
-
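-Expressions aren't limited to Booleans or strings; any of the types listed under [Allowed types](#set-variableAllowedTypes) can be returned. For example, this sketch (hypothetical variable name) stores the time the request was received as a `System.DateTime` value:
-
-```xml
-<set-variable name="requestReceivedAt" value="@(DateTime.UtcNow)" />
-```
-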
-### Elements
-
-| Element | Description | Required |
-| | - | -- |
-| set-variable | Root element. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required |
-| | | -- |
-| name | The name of the variable. | Yes |
-| value | The value of the variable. This can be an expression or a literal value. | Yes |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-
-### <a name="set-variableAllowedTypes"></a> Allowed types
-
-Expressions used in the `set-variable` policy must return one of the following basic types.
-
-- System.Boolean
-- System.SByte
-- System.Byte
-- System.UInt16
-- System.UInt32
-- System.UInt64
-- System.Int16
-- System.Int32
-- System.Int64
-- System.Decimal
-- System.Single
-- System.Double
-- System.Guid
-- System.String
-- System.Char
-- System.DateTime
-- System.TimeSpan
-- System.Byte?
-- System.UInt16?
-- System.UInt32?
-- System.UInt64?
-- System.Int16?
-- System.Int32?
-- System.Int64?
-- System.Decimal?
-- System.Single?
-- System.Double?
-- System.Guid?
-- System.String?
-- System.Char?
-- System.DateTime?
-
-## <a name="Trace"></a> Trace
-
-The `trace` policy adds a custom trace into the API Inspector output, Application Insights telemetries, and/or Resource Logs.
-
-- The policy adds a custom trace to the [API Inspector](./api-management-howto-api-inspector.md) output when tracing is triggered, i.e. `Ocp-Apim-Trace` request header is present and set to true and `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.
-- The policy creates a [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting.
-- The policy adds a property in the log entry when [Resource Logs](./api-management-howto-use-azure-monitor.md#activity-logs) is enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting.
-- The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
-
-
-### Policy statement
-
-```xml
-
-<trace source="arbitrary string literal" severity="verbose|information|error">
- <message>String literal or expressions</message>
- <metadata name="string literal or expressions" value="string literal or expressions"/>
-</trace>
-
-```
-
-### <a name="traceExample"></a> Example
-
-```xml
-<trace source="PetStore API" severity="verbose">
- <message>@((string)context.Variables["clientConnectionID"])</message>
- <metadata name="Operation Name" value="New-Order"/>
-</trace>
-```
-
-### Elements
-
-| Element | Description | Required |
-| -- | - | -- |
-| trace | Root element. | Yes |
-| message | A string or expression to be logged. | Yes |
-| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry. | No |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | - | -- | - |
-| source | String literal meaningful to the trace viewer and specifying the source of the message. | Yes | N/A |
-| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). | No | Verbose |
-| name | Name of the property. | Yes | N/A |
-| value | Value of the property. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes) .
-
-- **Policy sections:** inbound, outbound, backend, on-error
-
-- **Policy scopes:** all scopes
-
-## <a name="Wait"></a> Wait
-
-The `wait` policy executes its immediate child policies in parallel, and waits for either all or one of its immediate child policies to complete before it completes. The wait policy can have as its immediate child policies [Send request](api-management-advanced-policies.md#SendRequest), [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey), and [Control flow](api-management-advanced-policies.md#choose) policies.
---
-### Policy statement
-
-```xml
-<wait for="all|any">
- <!--Wait policy can contain send-request, cache-lookup-value,
- and choose policies as child elements -->
-</wait>
-
-```
-
-### Example
-
-In the following example, there are two `choose` policies as immediate child policies of the `wait` policy. Each of these `choose` policies executes in parallel. Each `choose` policy attempts to retrieve a cached value. If there is a cache miss, a backend service is called to provide the value. In this example the `wait` policy does not complete until all of its immediate child policies complete, because the `for` attribute is set to `all`. In this example the context variables (`execute-branch-one`, `value-one`, `execute-branch-two`, and `value-two`) are declared outside of the scope of this example policy.
-
-```xml
-<wait for="all">
- <choose>
- <when condition="@((bool)context.Variables["execute-branch-one"])">
- <cache-lookup-value key="key-one" variable-name="value-one" />
- <choose>
- <when condition="@(!context.Variables.ContainsKey("value-one"))">
- <send-request mode="new" response-variable-name="value-one">
- <set-url>https://backend-one</set-url>
- <set-method>GET</set-method>
- </send-request>
- </when>
- </choose>
- </when>
- </choose>
- <choose>
- <when condition="@((bool)context.Variables["execute-branch-two"])">
- <cache-lookup-value key="key-two" variable-name="value-two" />
- <choose>
- <when condition="@(!context.Variables.ContainsKey("value-two"))">
- <send-request mode="new" response-variable-name="value-two">
- <set-url>https://backend-two</set-url>
- <set-method>GET</set-method>
- </send-request>
- </when>
- </choose>
- </when>
- </choose>
-</wait>
-
-```
-
-### Elements
-
-| Element | Description | Required |
-| - | - | -- |
-| wait | Root element. May contain as child elements only `send-request`, `cache-lookup-value`, and `choose` policies. | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-| | - | -- | - |
-| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated. | No | all |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend
-- **Policy scopes:** all scopes
-
api-management Api Management Authentication Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-authentication-policies.md
- Title: Azure API Management authentication policies | Microsoft Docs
-description: Reference for the authentication policies available for use in Azure API Management. Provides policy usage, settings, and examples.
----- Previously updated : 03/07/2022--
-# API Management authentication policies
-
-This article provides a reference for API Management policies used for authentication with API backends.
--
-## <a name="AuthenticationPolicies"></a> Authentication policies
-
-- [Authenticate with Basic](api-management-authentication-policies.md#Basic) - Authenticate with a backend service using Basic authentication.
-
-- [Authenticate with client certificate](api-management-authentication-policies.md#ClientCertificate) - Authenticate with a backend service using client certificates.
-
-- [Authenticate with managed identity](api-management-authentication-policies.md#ManagedIdentity) - Authenticate with the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the API Management service.
-
-## <a name="Basic"></a> Authenticate with Basic
- Use the `authentication-basic` policy to authenticate with a backend service using Basic authentication. This policy effectively sets the HTTP Authorization header to the value corresponding to the credentials provided in the policy.
---
-### Policy statement
-
-```xml
-<authentication-basic username="username" password="password" />
-```
-
-### Example
-
-```xml
-<authentication-basic username="testuser" password="testpassword" />
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|authentication-basic|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|username|Specifies the username of the Basic credential.|Yes|N/A|
-|password|Specifies the password of the Basic credential.|Yes|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
-## <a name="ClientCertificate"></a> Authenticate with client certificate
- Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. The certificate needs to be [installed into API Management](./api-management-howto-mutual-certificates.md) first and is identified by its thumbprint or certificate ID (resource name).
-
-> [!CAUTION]
-> If the certificate references a certificate stored in Azure Key Vault, identify it using the certificate ID. When a key vault certificate is rotated, its thumbprint in API Management will change, and the policy will not resolve the new certificate if it is identified by thumbprint.
---
-### Policy statement
-
-```xml
-<authentication-certificate thumbprint="thumbprint" certificate-id="resource name"/>
-```
-
-### Examples
-
-In this example, the client certificate is identified by the certificate ID:
-
-```xml
-<authentication-certificate certificate-id="544fe9ddf3b8f30fb490d90f" />
-```
-
-In this example, the client certificate is identified by its thumbprint:
-
-```xml
-<authentication-certificate thumbprint="CA06F56B258B7A0D4F2B05470939478651151984" />
-```
-In this example, the client certificate is set in the policy rather than retrieved from the built-in certificate store:
-
-```xml
-<authentication-certificate body="@(context.Variables.GetValueOrDefault<byte[]>("byteCertificate"))" password="optional-certificate-password" />
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|authentication-certificate|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|thumbprint|The thumbprint for the client certificate.|Either `thumbprint` or `certificate-id` must be present.|N/A|
-|certificate-id|The certificate resource name.|Either `thumbprint` or `certificate-id` must be present.|N/A|
-|body|Client certificate as a byte array.|No|N/A|
-|password|Password for the client certificate.|Used if certificate specified in `body` is password protected.|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
-## <a name="ManagedIdentity"></a> Authenticate with managed identity
- Use the `authentication-managed-identity` policy to authenticate with a backend service using the managed identity. This policy essentially uses the managed identity to obtain an access token from Azure Active Directory for accessing the specified resource. After successfully obtaining the token, the policy will set the value of the token in the `Authorization` header using the `Bearer` scheme. API Management caches the token until it expires.
-
-Both the system-assigned identity and any of the multiple user-assigned identities can be used to request a token. If `client-id` is not provided, the system-assigned identity is assumed. If the `client-id` attribute is provided, a token is requested for that user-assigned identity from Azure Active Directory.
--
-
-### Policy statement
-
-```xml
-<authentication-managed-identity resource="resource" client-id="clientid of user-assigned identity" output-token-variable-name="token-variable" ignore-error="true|false"/>
-```
-
-### Example
-#### Use managed identity to authenticate with a backend service
-```xml
-<authentication-managed-identity resource="https://graph.microsoft.com"/>
-```
-```xml
-<authentication-managed-identity resource="https://management.azure.com/"/> <!--Azure Resource Manager-->
-```
-```xml
-<authentication-managed-identity resource="https://vault.azure.net"/> <!--Azure Key Vault-->
-```
-```xml
-<authentication-managed-identity resource="https://servicebus.azure.net/"/> <!--Azure Service Bus-->
-```
-```xml
-<authentication-managed-identity resource="https://storage.azure.com/"/> <!--Azure Blob Storage-->
-```
-```xml
-<authentication-managed-identity resource="https://database.windows.net/"/> <!--Azure SQL-->
-```
-
-```xml
-<authentication-managed-identity resource="AD_application_id"/> <!--Application (client) ID of your own Azure AD Application-->
-```
-
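-#### Use a user-assigned managed identity
-
-A token can also be requested for a user-assigned identity by supplying its client ID. The GUID below is a placeholder for the identity's application (client) ID:
-
-```xml
-<authentication-managed-identity resource="https://vault.azure.net" client-id="00000000-0000-0000-0000-000000000000"/> <!--Placeholder client ID of a user-assigned identity-->
-```
-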
-#### Use managed identity and set header manually
-
-```xml
-<authentication-managed-identity resource="AD_application_id"
- output-token-variable-name="msi-access-token" ignore-error="false" /> <!--Application (client) ID of your own Azure AD Application-->
-<set-header name="Authorization" exists-action="override">
- <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
-</set-header>
-```
-
-#### Use managed identity in send-request policy
-```xml
-<send-request mode="new" timeout="20" ignore-error="false">
- <set-url>https://example.com/</set-url>
- <set-method>GET</set-method>
- <authentication-managed-identity resource="ResourceID"/>
-</send-request>
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|authentication-managed-identity |Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|resource|String. The App ID of the target web API (secured resource) in Azure Active Directory.|Yes|N/A|
-|client-id|String. The App ID of the user-assigned identity in Azure Active Directory.|No|system-assigned identity|
-|output-token-variable-name|String. Name of the context variable that will receive token value as an object type `string`. |No|N/A|
-|ignore-error|Boolean. If set to `true`, the policy pipeline will continue to execute even if an access token is not obtained.|No|false|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-caching-policies.md
- Title: Azure API Management caching policies | Microsoft Docs
-description: Reference for the caching policies available for use in Azure API Management. Provides policy usage, settings, and examples.
----- Previously updated : 03/07/2022---
-# API Management caching policies
-
-This article provides a reference for API Management policies used for caching responses.
---
-> [!IMPORTANT]
-> Built-in cache is volatile and is shared by all units in the same region in the same API Management service.
-
-## <a name="CachingPolicies"></a> Caching policies
--- Response caching policies
- - [Get from cache](#GetFromCache) - Perform cache lookup and return a valid cached response when available.
- - [Store to cache](#StoreToCache) - Caches responses according to the specified cache control configuration.
-- Value caching policies
- - [Get value from cache](#GetFromCacheByKey) - Retrieve a cached item by key.
- - [Store value in cache](#StoreToCacheByKey) - Store an item in the cache by key.
- - [Remove value from cache](#RemoveCacheByKey) - Remove an item in the cache by key.
-
-## <a name="GetFromCache"></a> Get from cache
-Use the `cache-lookup` policy to perform cache lookup and return a valid cached response when available. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers.
-
-> [!NOTE]
-> This policy must have a corresponding [Store to cache](#StoreToCache) policy.
--
-### Policy statement
-
-```xml
-<cache-lookup vary-by-developer="true | false" vary-by-developer-groups="true | false" caching-type="prefer-external | external | internal" downstream-caching-type="none | private | public" must-revalidate="true | false" allow-private-response-caching="@(expression to evaluate)">
- <vary-by-header>Accept</vary-by-header>
- <!-- should be present in most cases -->
- <vary-by-header>Accept-Charset</vary-by-header>
- <!-- should be present in most cases -->
- <vary-by-header>Authorization</vary-by-header>
- <!-- should be present when allow-private-response-caching is "true"-->
- <vary-by-header>header name</vary-by-header>
- <!-- optional, can be repeated several times -->
- <vary-by-query-parameter>parameter name</vary-by-query-parameter>
- <!-- optional, can be repeated several times -->
-</cache-lookup>
-```
-
-> [!NOTE]
-> When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the back end.
-
-### Examples
-
-#### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal" >
- <vary-by-query-parameter>version</vary-by-query-parameter>
- </cache-lookup>
- </inbound>
- <outbound>
- <cache-store duration="seconds" />
- <base />
- </outbound>
-</policies>
-```
-
-#### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
-
-```xml
-<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
-
-<!-- Copy this snippet into the inbound section -->
-<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="public" must-revalidate="true" >
- <vary-by-header>Accept</vary-by-header>
- <vary-by-header>Accept-Charset</vary-by-header>
-</cache-lookup>
-
-<!-- Copy this snippet into the outbound section. Note that cache duration is set to the max-age value provided in the Cache-Control header received from the backend service or to the default value of 5 min if none is found -->
-<cache-store duration="@{
- var header = context.Response.Headers.GetValueOrDefault("Cache-Control","");
- var maxAge = Regex.Match(header, @"max-age=(?<maxAge>\d+)").Groups["maxAge"]?.Value;
- return (!string.IsNullOrEmpty(maxAge))?int.Parse(maxAge):300;
- }"
- />
-```
-
-For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|cache-lookup|Root element.|Yes|
-|vary-by-header|Start caching responses per value of specified header, such as Accept, Accept-Charset, Accept-Encoding, Accept-Language, Authorization, Expect, From, Host, If-Match.|No|
-|vary-by-query-parameter|Start caching responses per value of specified query parameters. Enter a single or multiple parameters. Use semicolon as a separator. If none are specified, all query parameters are used.|No|
-
-### Attributes
-
-| Name | Description | Required | Default |
-|--|-|-|-|
-| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. | No | false |
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the built-in API Management cache,<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed. | No | none |
-| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. | No | true |
-| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. | Yes | False |
-| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). | Yes | False |
-
-### Usage
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-
-## <a name="StoreToCache"></a> Store to cache
-The `cache-store` policy caches responses according to the specified cache settings. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers.
-
-> [!NOTE]
-> This policy must have a corresponding [Get from cache](api-management-caching-policies.md#GetFromCache) policy.
--
-### Policy statement
-
-```xml
-<cache-store duration="seconds" cache-response="true | false" />
-```
-
-### Examples
-
-#### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- <cache-lookup vary-by-developer="true | false" vary-by-developer-groups="true | false" downstream-caching-type="none | private | public" must-revalidate="true | false">
- <vary-by-query-parameter>parameter name</vary-by-query-parameter> <!-- optional, can be repeated several times -->
- </cache-lookup>
- </inbound>
- <outbound>
- <base />
- <cache-store duration="3600" />
- </outbound>
-</policies>
-```
-
-#### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
-
-```xml
-<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
-
-<!-- Copy this snippet into the inbound section -->
-<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="public" must-revalidate="true" >
- <vary-by-header>Accept</vary-by-header>
- <vary-by-header>Accept-Charset</vary-by-header>
-</cache-lookup>
-
-<!-- Copy this snippet into the outbound section. Note that cache duration is set to the max-age value provided in the Cache-Control header received from the backend service or to the default value of 5 min if none is found -->
-<cache-store duration="@{
- var header = context.Response.Headers.GetValueOrDefault("Cache-Control","");
- var maxAge = Regex.Match(header, @"max-age=(?<maxAge>\d+)").Groups["maxAge"]?.Value;
- return (!string.IsNullOrEmpty(maxAge))?int.Parse(maxAge):300;
- }"
- />
-```
-
-For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|cache-store|Root element.|Yes|
-
-### Attributes
-
-| Name | Description | Required | Default |
-||-|-|-|
-| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
-| cache-response | Set to true to cache the current HTTP response. If the attribute is omitted or set to false, only HTTP responses with the status code `200 OK` are cached. | No | false |
-
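-For instance, a minimal sketch that caches responses for one hour and, because `cache-response` is set to `true`, also caches responses with status codes other than `200 OK`:
-
-```xml
-<cache-store duration="3600" cache-response="true" />
-```
-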
-### Usage
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** outbound
-- **Policy scopes:** all scopes
-
-## <a name="GetFromCacheByKey"></a> Get value from cache
-Use the `cache-lookup-value` policy to perform cache lookup by key and return a cached value. The key can have an arbitrary string value and is typically provided using a policy expression.
-
-> [!NOTE]
-> This policy must have a corresponding [Store value in cache](#StoreToCacheByKey) policy.
--
-### Policy statement
-
-```xml
-<cache-lookup-value key="cache key value"
- default-value="value to use if cache lookup resulted in a miss"
- variable-name="name of a variable looked up value is assigned to"
- caching-type="prefer-external | external | internal" />
-```
-
-### Example
-For more information and examples of this policy, see [Custom caching in Azure API Management](./api-management-sample-cache-by-key.md).
-
-```xml
-<cache-lookup-value
- key="@("userprofile-" + context.Variables["enduserid"])"
- variable-name="userprofile" />
-
-```
-
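-A variant of the sketch above that supplies a `default-value` (the fallback value here is hypothetical), so the variable is populated even when the lookup results in a miss:
-
-```xml
-<cache-lookup-value
-    key="@("userprofile-" + context.Variables["enduserid"])"
-    default-value="anonymous"
-    variable-name="userprofile" />
-```
-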
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|cache-lookup-value|Root element.|Yes|
-
-### Attributes
-
-| Name | Description | Required | Default |
-||-|-|-|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the built-in API Management cache,<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. | No | `null` |
-| key | Cache key value to use in the lookup. | Yes | N/A |
-| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. | Yes | N/A |
-
-### Usage
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-
-## <a name="StoreToCacheByKey"></a> Store value in cache
-The `cache-store-value` policy performs cache storage by key. The key can have an arbitrary string value and is typically provided using a policy expression.
-
-> [!NOTE]
-> The operation of storing the value in cache performed by this policy is asynchronous. The stored value can be retrieved using [Get value from cache](#GetFromCacheByKey) policy. However, the stored value may not be immediately available for retrieval since the asynchronous operation that stores the value in cache may still be in progress.
--
-### Policy statement
-
-```xml
-<cache-store-value key="cache key value" value="value to cache" duration="seconds" caching-type="prefer-external | external | internal" />
-```
-
-### Example
-For more information and examples of this policy, see [Custom caching in Azure API Management](./api-management-sample-cache-by-key.md).
-
-```xml
-<cache-store-value
- key="@("userprofile-" + context.Variables["enduserid"])"
- value="@((string)context.Variables["userprofile"])" duration="100000" />
-
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|cache-store-value|Root element.|Yes|
-
-### Attributes
-
-| Name | Description | Required | Default |
-||-|-|-|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the built-in API Management cache,<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| duration | Value will be cached for the provided duration value, specified in seconds. | Yes | N/A |
-| key | Cache key the value will be stored under. | Yes | N/A |
-| value | The value to be cached. | Yes | N/A |
-### Usage
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-
-## <a name="RemoveCacheByKey"></a> Remove value from cache
-The `cache-remove-value` policy deletes a cached item identified by its key. The key can have an arbitrary string value and is typically provided using a policy expression.
--
-#### Policy statement
-
-```xml
-
-<cache-remove-value key="cache key value" caching-type="prefer-external | external | internal" />
-
-```
-
-#### Example
-
-```xml
-
-<cache-remove-value key="@("userprofile-" + context.Variables["enduserid"])"/>
-
-```
-
-#### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|cache-remove-value|Root element.|Yes|
-
-#### Attributes
-
-| Name | Description | Required | Default |
-||-|-|-|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the built-in API Management cache,<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| key | The key of the previously cached value to be removed from the cache. | Yes | N/A |
-
-#### Usage
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes) .
-
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-
api-management Api Management Dapr Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-dapr-policies.md
- Title: Azure API Management Dapr integration policies | Microsoft Docs
-description: Reference for Azure API Management policies for interacting with Dapr microservices extensions. Provides policy usage, settings and examples.
-- Previously updated : 03/07/2022----
-# API Management Dapr integration policies
-
-This article provides a reference for API Management policies used for integrating with Distributed Application Runtime (Dapr) microservices extensions.
--
-## About Dapr
-
-Dapr is a portable runtime for building stateless and stateful microservices-based applications with any language or framework. It codifies the common microservice patterns, like service discovery and invocation with built-in retry logic, publish-and-subscribe with at-least-once delivery semantics, or pluggable binding resources to ease composition using external services. Go to [dapr.io](https://dapr.io) for detailed information and instructions on how to get started with Dapr.
-
-> [!IMPORTANT]
-> Policies referenced in this topic work only in the [self-hosted version of the API Management gateway](self-hosted-gateway-overview.md) with Dapr support enabled.
-
-## Enable Dapr support in the self-hosted gateway
-
-To turn on Dapr support in the self-hosted gateway, add the following [Dapr annotations](https://github.com/dapr/docs/blob/master/README.md) to the [Kubernetes deployment template](how-to-deploy-self-hosted-gateway-kubernetes.md), replacing "app-name" with the desired name. A complete walkthrough of setting up and using API Management with Dapr is available [here](https://aka.ms/apim/dapr/walkthru).
-```yml
-template:
- metadata:
- labels:
- app: app-name
- annotations:
- dapr.io/enabled: "true"
- dapr.io/app-id: "app-name"
-```
-
-> [!TIP]
-> You can also deploy the [self-hosted gateway with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) and use the Dapr configuration options.
-
-## Distributed Application Runtime (Dapr) integration policies
-
-- [Send request to a service](api-management-dapr-policies.md#invoke): Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file.
-- [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
-- [Trigger output binding](api-management-dapr-policies.md#bind): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
-
-## <a name="invoke"></a> Send request to a service
-
-This policy sets the target URL for the current request to `http://localhost:3500/v1.0/invoke/{app-id}[.{ns-name}]/method/{method-name}` replacing template parameters with values specified in the policy statement.
-
-The policy assumes that Dapr runs in a sidecar container in the same pod as the gateway. Upon receiving the request, Dapr runtime performs service discovery and actual invocation, including possible protocol translation between HTTP and gRPC, retries, distributed tracing, and error handling.
--
-### Policy statement
-
-```xml
-<set-backend-service backend-id="dapr" dapr-app-id="app-id" dapr-method="method-name" dapr-namespace="ns-name" />
-```
-
-### Examples
-
-#### Example
-
-The following example demonstrates invoking the method named "back" on the microservice called "echo". The `set-backend-service` policy sets the destination URL to `http://localhost:3500/v1.0/invoke/echo.echo-app/method/back`. The `forward-request` policy dispatches the request to the Dapr runtime, which delivers it to the microservice.
-
-The `forward-request` policy is shown here for clarity. The policy is typically "inherited" from the global scope via the `base` keyword.
-
-```xml
-<policies>
- <inbound>
- <base />
- <set-backend-service backend-id="dapr" dapr-app-id="echo" dapr-method="back" dapr-namespace="echo-app" />
- </inbound>
- <backend>
- <forward-request />
- </backend>
- <outbound>
- <base />
- </outbound>
- <on-error>
- <base />
- </on-error>
-</policies>
-```
-
-### Elements
-
-| Element | Description | Required |
-||--|-|
-| set-backend-service | Root element | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-|||-||
-| backend-id | Must be set to "dapr" | Yes | N/A |
-| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-namespace | Name of the namespace the target microservice is residing in. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| No | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-
-## <a name="pubsub"></a> Send message to Pub/Sub topic
-
-This policy instructs API Management gateway to send a message to a Dapr Publish/Subscribe topic. The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/publish/{{pubsub-name}}/{{topic}}` replacing template parameters and adding content specified in the policy statement.
-
-The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime implements the Pub/Sub semantics.
--
-### Policy statement
-
-```xml
-<publish-to-dapr pubsub-name="pubsub-name" topic="topic-name" ignore-error="false|true" response-variable-name="resp-var-name" timeout="in seconds" template="Liquid" content-type="application/json">
- <!-- message content -->
-</publish-to-dapr>
-```
-
-### Examples
-
-#### Example
-
-The following example demonstrates sending the body of the current request to the "new" [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md#url-parameters) of the "orders" Pub/Sub [component](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md#url-parameters). Response received from the Dapr runtime is stored in the "dapr-response" entry of the Variables collection in the [context](api-management-policy-expressions.md#ContextVariables) object.
-
-If Dapr runtime can't locate the target topic, for example, and responds with an error, the "on-error" section is triggered. The response received from the Dapr runtime is returned to the caller verbatim. Otherwise, default `200 OK` response is returned.
-
-The "backend" section is empty and the request is not forwarded to the backend.
-
-```xml
-<policies>
- <inbound>
- <base />
- <publish-to-dapr
- pubsub-name="orders"
- topic="new"
- response-variable-name="dapr-response">
- @(context.Request.Body.As<string>())
- </publish-to-dapr>
- </inbound>
- <backend>
- </backend>
- <outbound>
- <base />
- </outbound>
- <on-error>
- <base />
- <return-response response-variable-name="dapr-response" />
- </on-error>
-</policies>
-```
-
-### Elements
-
-| Element | Description | Required |
-||--|-|
-| publish-to-dapr | Root element | Yes |
-
-### Attributes
-
-| Attribute | Description | Required | Default |
-|||-||
-| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the __topic__ attribute value must be in the form of `pubsub-name/topic-name` (see the sketch after this table). | No | None |
-| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. | Yes | N/A |
-| ignore-error | If set to `true` instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
-| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
-| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
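
As noted in the `pubsub-name` row above, the component name can instead be folded into the `topic` attribute. A minimal sketch of that form, reusing the `orders` component and `new` topic from the earlier example:

```xml
<publish-to-dapr topic="orders/new" response-variable-name="dapr-response">
    @(context.Request.Body.As<string>())
</publish-to-dapr>
```
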
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, on-error
-- **Policy scopes:** all scopes
-
-## <a name="bind"></a> Trigger output binding
-
-This policy instructs API Management gateway to trigger an outbound Dapr [binding](https://github.com/dapr/docs/blob/master/README.md). The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/bindings/{{bind-name}}` replacing template parameter and adding content specified in the policy statement.
-
-The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime is responsible for invoking the external resource represented by the binding.
--
-### Policy statement
-
-```xml
-<invoke-dapr-binding name="bind-name" operation="op-name" ignore-error="false|true" response-variable-name="resp-var-name" timeout="in seconds" template="Liquid" content-type="application/json">
- <metadata>
- <item key="item-name"><!-- item-value --></item>
- </metadata>
- <data>
- <!-- message content -->
- </data>
-</invoke-dapr-binding>
-```
-
-### Examples
-
-#### Example
-
-The following example demonstrates triggering of outbound binding named "external-systems" with operation name "create", metadata consisting of two key/value items named "source" and "client-ip", and the body coming from the original request. Response received from the Dapr runtime is captured in the "bind-response" entry of the Variables collection in the [context](api-management-policy-expressions.md#ContextVariables) object.
-
-If Dapr runtime fails for some reason and responds with an error, the "on-error" section is triggered and response received from the Dapr runtime is returned to the caller verbatim. Otherwise, default `200 OK` response is returned.
-
-The "backend" section is empty and the request is not forwarded to the backend.
-
-```xml
-<policies>
- <inbound>
- <base />
- <invoke-dapr-binding
- name="external-system"
- operation="create"
- response-variable-name="bind-response">
- <metadata>
- <item key="source">api-management</item>
- <item key="client-ip">@( context.Request.IpAddress )</item>
- </metadata>
- <data>
- @( context.Request.Body.As<string>() )
- </data>
- </invoke-dapr-binding>
- </inbound>
- <backend>
- </backend>
- <outbound>
- <base />
- </outbound>
- <on-error>
- <base />
- <return-response response-variable-name="bind-response" />
- </on-error>
-</policies>
-```
-
-### Elements
-
-| Element | Description | Required |
-||--|-|
-| invoke-dapr-binding | Root element | Yes |
-| metadata | Binding specific metadata in the form of key/value pairs. Maps to the [metadata](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No |
-| data | Content of the message. Maps to the [data](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No |
--
-### Attributes
-
-| Attribute | Description | Required | Default |
-|||-||
-| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. | Yes | N/A |
-| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No | None |
-| ignore-error | If set to `true` instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
-| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
-| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, on-error
-- **Policy scopes:** all scopes
-
-
api-management Api Management Error Handling Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-error-handling-policies.md
The `on-error` policy section can be used at any scope. API publishers can confi
The following policies can be used in the `on-error` policy section.
-- [choose](api-management-advanced-policies.md#choose)
-- [set-variable](api-management-advanced-policies.md#set-variable)
-- [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody)
-- [return-response](api-management-advanced-policies.md#ReturnResponse)
-- [set-header](api-management-transformation-policies.md#SetHTTPheader)
-- [set-method](api-management-advanced-policies.md#SetRequestMethod)
-- [set-status](api-management-advanced-policies.md#SetStatus)
-- [send-request](api-management-advanced-policies.md#SendRequest)
-- [send-one-way-request](api-management-advanced-policies.md#SendOneWayRequest)
-- [log-to-eventhub](api-management-advanced-policies.md#log-to-eventhub)
-- [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML)
-- [xml-to-json](api-management-transformation-policies.md#ConvertXMLtoJSON)
-- [limit-concurrency](api-management-advanced-policies.md#LimitConcurrency)
-- [mock-response](api-management-advanced-policies.md#mock-response)
-- [retry](api-management-advanced-policies.md#Retry)
-- [trace](api-management-advanced-policies.md#Trace)
+- [choose](choose-policy.md)
+- [set-variable](set-variable-policy.md)
+- [find-and-replace](find-and-replace-policy.md)
+- [return-response](return-response-policy.md)
+- [set-header](set-header-policy.md)
+- [set-method](set-method-policy.md)
+- [set-status](set-status-policy.md)
+- [send-request](send-request-policy.md)
+- [send-one-way-request](send-one-way-request-policy.md)
+- [log-to-eventhub](log-to-eventhub-policy.md)
+- [json-to-xml](json-to-xml-policy.md)
+- [xml-to-json](xml-to-json-policy.md)
+- [limit-concurrency](limit-concurrency-policy.md)
+- [mock-response](mock-response-policy.md)
+- [retry](retry-policy.md)
+- [trace](trace-policy.md)
## LastError
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Managed and self-hosted gateways support all available [policies](api-management
| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
| | -- | -- | - |
| [Dapr integration](api-management-dapr-policies.md) | ❌ | ❌ | ✔️ |
-| [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) | ✔️ | ❌ | ❌ |
+| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ |
| [Quota and rate limit](api-management-access-restriction-policies.md) | ✔️ | ✔️<sup>1</sup> | ✔️<sup>2</sup>
-| [Set GraphQL resolver](graphql-policies.md#set-graphql-resolver) | ✔️ | ❌ | ❌ |
+| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
<sup>1</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
<sup>2</sup> By default, rate limit counts in self-hosted gateways are per-gateway, per-node.
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
After you publish a product, developers can access the APIs. Depending on how th
> [!TIP]
> You can create or update a user's subscription to a product with custom subscription keys through a [REST API](/rest/api/apimanagement/current-ga/subscription/create-or-update) or PowerShell command.
-* **Open product** - Developers can access an open product's APIs without a subscription key. However, you can configure other mechanisms to secure client access to the APIs, including [OAuth 2.0](api-management-howto-protect-backend-with-aad.md), [client certificates](api-management-howto-mutual-certificates-for-clients.md), and [restricting caller IP addresses](./api-management-access-restriction-policies.md#RestrictCallerIPs).
+* **Open product** - Developers can access an open product's APIs without a subscription key. However, you can configure other mechanisms to secure client access to the APIs, including [OAuth 2.0](api-management-howto-protect-backend-with-aad.md), [client certificates](api-management-howto-mutual-certificates-for-clients.md), and [restricting caller IP addresses](ip-filter-policy.md).
> [!NOTE]
> Open products aren't listed in the developer portal for developers to learn about or subscribe to. They're visible only to the **Administrators** group. You'll need to use another mechanism to inform developers about APIs that can be accessed without a subscription key.
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
When making requests to API Management using `curl`, a REST client such as Postm
The response includes the **Ocp-Apim-Trace-Location** header, with a URL to the location of the trace data in Azure blob storage.
-For information about customizing trace information, see the [trace](api-management-advanced-policies.md#Trace) policy.
+For information about customizing trace information, see the [trace](trace-policy.md) policy.
## Next steps
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
Application Insights receives:
| *Request* | For every incoming request: <ul><li>*frontend request*</li><li>*frontend response*</li></ul> |
| *Dependency* | For every request forwarded to a backend service: <ul><li>*backend request*</li><li>*backend response*</li></ul> |
| *Exception* | For every failed request: <ul><li>Failed because of a closed client connection</li><li>Triggered an *on-error* section of the API policies</li><li>Has a response HTTP status code matching 4xx or 5xx</li></ul> |
-| *Trace* | If you configure a [trace](api-management-advanced-policies.md#Trace) policy. <br /> The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging. |
+| *Trace* | If you configure a [trace](trace-policy.md) policy. <br /> The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging. |
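
For reference, a minimal sketch of a `trace` policy emitting at the `information` severity (the source name, message, and metadata are illustrative):

```xml
<trace source="request-logger" severity="information">
    <message>@("Processing request " + context.RequestId)</message>
    <metadata name="region" value="@(context.Deployment.Region)" />
</trace>
```
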
### Emit custom metrics
-You can emit custom metrics by configuring the [`emit-metric`](api-management-advanced-policies.md#emit-metrics) policy.
+You can emit custom metrics by configuring the [`emit-metric`](emit-metric-policy.md) policy.
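
A minimal sketch of an `emit-metric` policy (the metric name, namespace, and dimensions are illustrative):

```xml
<emit-metric name="Request" value="1" namespace="apim-custom">
    <dimension name="API ID" />
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</emit-metric>
```
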
To make Application Insights pre-aggregated metrics available in API Management, you'll need to manually enable custom metrics in the service.
-1. Use the [`emit-metric`](api-management-advanced-policies.md#emit-metrics) policy with the [Create or Update API](/rest/api/apimanagement/current-ga/api-diagnostic/create-or-update).
+1. Use the [`emit-metric`](emit-metric-policy.md) policy with the [Create or Update API](/rest/api/apimanagement/current-ga/api-diagnostic/create-or-update).
1. Add `"metrics":true` to the payload, along with any other properties. > [!NOTE]
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
If you don't have an API Management service instance, complete the following qui
- **Approaching subscription quota limit** - The specified email recipients and users will receive email notifications when subscription usage gets close to usage quota.

> [!NOTE]
- > Notifications are triggered by the [quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) policy only. The [quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) policy doesn't generate notifications.
+ > Notifications are triggered by the [quota by subscription](quota-policy.md) policy only. The [quota by key](quota-by-key-policy.md) policy doesn't generate notifications.
1. Select a notification, and specify one or more email addresses to be notified:

    * To add the administrator email address, select **+ Add admin**.
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
You can preview the log in Event Hubs by using [Azure Stream Analytics queries](
* [Event Hubs programming guide](../event-hubs/event-hubs-programming-guide.md)
* Learn more about API Management and Event Hubs integration
  * [Logger entity reference](/rest/api/apimanagement/current-ga/logger)
- * [log-to-eventhub policy reference](./api-management-advanced-policies.md#log-to-eventhub)
+ * [log-to-eventhub policy reference](log-to-eventhub-policy.md)
  * [Monitor your APIs with Azure API Management, Event Hubs, and Moesif](api-management-log-to-eventhub-sample.md)
* Learn more about [integration with Azure Application Insights](api-management-howto-app-insights.md)
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
For a conceptual overview of API authorization, see [Authentication and authoriz
## Policy to validate client certificates
-Use the [validate-client-certificate](api-management-access-restriction-policies.md#validate-client-certificate) policy to validate one or more attributes of a client certificate used to access APIs hosted in your API Management instance.
+Use the [validate-client-certificate](validate-client-certificate-policy.md) policy to validate one or more attributes of a client certificate used to access APIs hosted in your API Management instance.
Configure the policy to validate one or more attributes including certificate issuer, subject, thumbprint, whether the certificate is validated against online revocation list, and others.
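
A minimal sketch that checks trust, validity dates, and revocation, and pins one issuer subject (all values are placeholders):

```xml
<validate-client-certificate
    validate-revocation="true"
    validate-trust="true"
    validate-not-before="true"
    validate-not-after="true"
    ignore-error="false">
    <identities>
        <identity issuer-subject="CN=Contoso Intermediate CA" />
    </identities>
</validate-client-certificate>
```
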
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Many APIs support [OAuth 2.0](https://oauth.net/2/) to secure the API and ensure that only valid users have access, and they can only access resources to which they're entitled. To use Azure API Management's interactive developer console with such APIs, the service allows you to configure an external provider for OAuth 2.0 user authorization.
-Configuring OAuth 2.0 user authorization in the test console of the developer portal provides developers with a convenient way to acquire an OAuth 2.0 access token. From the test console, the token is then passed to the backend with the API call. Token validation must be configured separately - either using a [JWT validation policy](api-management-access-restriction-policies.md#ValidateJWT), or in the backend service.
+Configuring OAuth 2.0 user authorization in the test console of the developer portal provides developers with a convenient way to acquire an OAuth 2.0 access token. From the test console, the token is then passed to the backend with the API call. Token validation must be configured separately - either using a [JWT validation policy](validate-jwt-policy.md), or in the backend service.
## Prerequisites
After saving the OAuth 2.0 server configuration, configure an API or APIs to use
In the configuration so far, API Management doesn't validate the access token. It only passes the token in the authorization header to the backend API.
-To pre-authorize requests, configure a [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy to validate the access token of each incoming request. If a request doesn't have a valid token, API Management blocks it.
+To pre-authorize requests, configure a [validate-jwt](validate-jwt-policy.md) policy to validate the access token of each incoming request. If a request doesn't have a valid token, API Management blocks it.
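
A minimal sketch of such a policy for tokens issued by Azure AD (the tenant ID and audience are placeholders):

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{aad-tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>{backend-api-application-id-uri}</audience>
    </audiences>
</validate-jwt>
```
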
[!INCLUDE [api-management-configure-validate-jwt](../../includes/api-management-configure-validate-jwt.md)]
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
Example policy definition at API scope:
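
A minimal sketch of such a definition (the `find-and-replace` values are illustrative):

```xml
<policies>
    <inbound>
        <cross-domain />
        <base />
        <find-and-replace from="xyz" to="abc" />
    </inbound>
</policies>
```
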
In the example policy definition above:

* The `cross-domain` statement would execute first.
-* The [`find-and-replace` policy](api-management-transformation-policies.md#Findandreplacestringinbody) would execute after any policies at a broader scope.
+* The [`find-and-replace` policy](find-and-replace-policy.md) would execute after any policies at a broader scope.
>[!NOTE]
> If you remove the `base` element at the API scope, only policies configured at the API scope will be applied. Neither product nor global scope policies would be applied.

### Use policy expressions to modify requests
-The following example uses [policy expressions][Policy expressions] and the [`set-header`](api-management-transformation-policies.md#SetHTTPheader) policy to add user data to the incoming request. The added header includes the user ID associated with the subscription key in the request, and the region where the gateway processing the request is hosted.
+The following example uses [policy expressions][Policy expressions] and the [`set-header`](set-header-policy.md) policy to add user data to the incoming request. The added header includes the user ID associated with the subscription key in the request, and the region where the gateway processing the request is hosted.
```xml <policies>
The following example uses [policy expressions][Policy expressions] and the [`se
[Operation]: ./mock-api-responses.md
[Advanced policies]: ./api-management-advanced-policies.md
-[Control flow]: ./api-management-advanced-policies.md#choose
-[Set variable]: ./api-management-advanced-policies.md#set-variable
+[Control flow]: choose-policy.md
+[Set variable]: set-variable-policy.md
[Policy expressions]: ./api-management-policy-expressions.md
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
Follow these steps to protect an API in API Management, using OAuth 2.0 authoriz
To access the API, users or applications will acquire and present a valid OAuth token granting access to this app with each API request.
-1. Configure the [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy in API Management to validate the OAuth token presented in each incoming API request. Valid requests can be passed to the API.
+1. Configure the [validate-jwt](validate-jwt-policy.md) policy in API Management to validate the OAuth token presented in each incoming API request. Valid requests can be passed to the API.
Details about OAuth authorization flows and how to generate the required OAuth tokens are beyond the scope of this article. Typically, a separate client app is used to acquire tokens from Azure AD that authorize access to the API. For links to more information, see the [Next steps](#next-steps).
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
You can use a system-assigned managed identity to access Azure Key Vault to stor
### Authenticate to a backend by using an API Management identity
-You can use the system-assigned identity to authenticate to a backend service through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
+You can use the system-assigned identity to authenticate to a backend service through the [authentication-managed-identity](authentication-managed-identity-policy.md) policy.
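
A minimal sketch (the `resource` value, the app ID URI or resource URL of the backend, is a placeholder):

```xml
<authentication-managed-identity resource="https://{backend-app-id-uri-or-resource}" />
```
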
### <a name="apim-as-trusted-service"></a>Connect to Azure resources behind IP firewall using system-assigned managed identity
You can use a user-assigned managed identity to access Azure Key Vault to store
### Authenticate to a backend by using a user-assigned identity
-You can use the user-assigned identity to authenticate to a backend service through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
+You can use the user-assigned identity to authenticate to a backend service through the [authentication-managed-identity](authentication-managed-identity-policy.md) policy.
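
For a user-assigned identity, the same policy additionally takes the identity's client ID (both values below are placeholders):

```xml
<authentication-managed-identity resource="https://{backend-app-id-uri-or-resource}" client-id="{user-assigned-identity-client-id}" />
```
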
## <a name="remove"></a>Remove an identity
Learn more about managed identities for Azure resources:
* [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
* [Azure Resource Manager templates](https://github.com/Azure/azure-quickstart-templates)
-* [Authenticate with a managed identity in a policy](./api-management-authentication-policies.md#ManagedIdentity)
+* [Authenticate with a managed identity in a policy](authentication-managed-identity-policy.md)
api-management Api Management Key Concepts Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts-experiment.md
When developers subscribe to a product, they're granted the primary and secondar
With [policies][API Management policies], an API publisher can change the behavior of an API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call-rate limiting to restrict the number of incoming calls from a developer. For a complete list, see [API Management policies][Policy reference].
-Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions.
+Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./choose-policy.md) and [Set variable](./set-variable-policy.md) policies are based on policy expressions.
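
For instance, a sketch that combines the two (the header check and variable name are illustrative):

```xml
<set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "").Contains("Mobile"))" />
<choose>
    <when condition="@((bool)context.Variables["isMobile"])">
        <set-header name="x-client-type" exists-action="override">
            <value>mobile</value>
        </set-header>
    </when>
</choose>
```
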
Policies can be applied at different scopes, depending on your needs: global (all APIs), a product, a specific API, or an API operation.
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
When developers subscribe to a product, they're granted the primary and secondar
With [policies][API Management policies], an API publisher can change the behavior of an API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call-rate limiting to restrict the number of incoming calls from a developer. For a complete list, see [API Management policies][Policy reference].
-Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions.
+Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./choose-policy.md) and [Set variable](./set-variable-policy.md) policies are based on policy expressions.
Policies can be applied at different scopes, depending on your needs: global (all APIs), a product, a specific API, or an API operation.
api-management Api Management Log To Eventhub Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-log-to-eventhub-sample.md
Azure API Management service provides an ideal place to capture the HTTP traffic
* Learn more about API Management and Event Hubs integration
  * [How to log events to Azure Event Hubs in Azure API Management](api-management-howto-log-event-hubs.md)
  * [Logger entity reference](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-logger-entity)
- * [log-to-eventhub policy reference](./api-management-advanced-policies.md#log-to-eventhub)
+ * [log-to-eventhub policy reference](./log-to-eventhub-policy.md)
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
Previously updated : 03/04/2022 Last updated : 12/01/2022

# API Management policy reference
More information about policies:
+ [Policy expressions](api-management-policy-expressions.md)

> [!IMPORTANT]
-> [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) have a dependency on the subscription key. A subscription key isn't required when using other policies.
-
+> [Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md) have a dependency on the subscription key. A subscription key isn't required when other policies are applied.
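
For reference, a minimal sketch of these two subscription-scoped policies (the numeric values are illustrative):

```xml
<rate-limit calls="20" renewal-period="90" />
<quota calls="10000" bandwidth="40000" renewal-period="3600" />
```
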
## Access restriction policies
-- [Check HTTP header](api-management-access-restriction-policies.md#CheckHTTPHeader) - Enforces existence and/or value of an HTTP Header.
-- [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
-- [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis.
-- [Limit call rate by key](api-management-access-restriction-policies.md#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis.
-- [Restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
-- [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
-- [Set usage quota by key](api-management-access-restriction-policies.md#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
-- [Validate Azure Active Directory Token](api-management-access-restriction-policies.md#ValidateAAD) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP Header, query parameter, or token value.
-- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header, query parameter, or token value.
-- [Validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.
+- [Check HTTP header](check-header-policy.md) - Enforces existence and/or value of an HTTP Header.
+- [Get authorization context](get-authorization-context-policy.md) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
+- [Limit call rate by subscription](rate-limit-policy.md) - Prevents API usage spikes by limiting call rate, on a per subscription basis.
+- [Limit call rate by key](rate-limit-by-key-policy.md) - Prevents API usage spikes by limiting call rate, on a per key basis.
+- [Restrict caller IPs](ip-filter-policy.md) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
+- [Set usage quota by subscription](quota-policy.md) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
+- [Set usage quota by key](quota-by-key-policy.md) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
+- [Validate Azure Active Directory token](validate-azure-ad-token-policy.md) - Enforces existence and validity of an Azure Active Directory JWT extracted from either a specified HTTP header, query parameter, or token value.
+- [Validate JWT](validate-jwt-policy.md) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header, query parameter, or token value.
+- [Validate client certificate](validate-client-certificate-policy.md) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.
## Advanced policies
-- [Control flow](api-management-advanced-policies.md#choose) - Conditionally applies policy statements based on the evaluation of Boolean expressions.
-- [Forward request](api-management-advanced-policies.md#ForwardRequest) - Forwards the request to the backend service.
-- [Limit concurrency](api-management-advanced-policies.md#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time.
-- [Log to event hub](api-management-advanced-policies.md#log-to-eventhub) - Sends messages in the specified format to a message target defined by a Logger entity.
-- [Emit metrics](api-management-advanced-policies.md#emit-metrics) - Sends custom metrics to Application Insights at execution.
-- [Mock response](api-management-advanced-policies.md#mock-response) - Aborts pipeline execution and returns a mocked response directly to the caller.
-- [Retry](api-management-advanced-policies.md#Retry) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count.
-- [Return response](api-management-advanced-policies.md#ReturnResponse) - Aborts pipeline execution and returns the specified response directly to the caller.
-- [Send one way request](api-management-advanced-policies.md#SendOneWayRequest) - Sends a request to the specified URL without waiting for a response.
-- [Send request](api-management-advanced-policies.md#SendRequest) - Sends a request to the specified URL.
-- [Set HTTP proxy](api-management-advanced-policies.md#SetHttpProxy) - Allows you to route forwarded requests via an HTTP proxy.
-- [Set variable](api-management-advanced-policies.md#set-variable) - Persist a value in a named context variable for later access.
-- [Set request method](api-management-advanced-policies.md#SetRequestMethod) - Allows you to change the HTTP method for a request.
-- [Set status code](api-management-advanced-policies.md#SetStatus) - Changes the HTTP status code to the specified value.
-- [Trace](api-management-advanced-policies.md#Trace) - Adds custom traces into the [API Inspector](./api-management-howto-api-inspector.md) output, Application Insights telemetries, and Resource Logs.
-- [Wait](api-management-advanced-policies.md#Wait) - Waits for enclosed [Send request](api-management-advanced-policies.md#SendRequest), [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey), or [Control flow](api-management-advanced-policies.md#choose) policies to complete before proceeding.
+- [Control flow](choose-policy.md) - Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md).
+- [Emit metrics](emit-metric-policy.md) - Sends custom metrics to Application Insights at execution.
+- [Forward request](forward-request-policy.md) - Forwards the request to the backend service.
+- [Include fragment](include-fragment-policy.md) - Inserts a policy fragment in the policy definition.
+- [Limit concurrency](limit-concurrency-policy.md) - Prevents enclosed policies from executing by more than the specified number of requests at a time.
+- [Log to event hub](log-to-eventhub-policy.md) - Sends messages in the specified format to an event hub defined by a Logger entity.
+- [Mock response](mock-response-policy.md) - Aborts pipeline execution and returns a mocked response directly to the caller.
+- [Retry](retry-policy.md) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count.
+- [Return response](return-response-policy.md) - Aborts pipeline execution and returns the specified response directly to the caller.
+- [Send one way request](send-one-way-request-policy.md) - Sends a request to the specified URL without waiting for a response.
+- [Send request](send-request-policy.md) - Sends a request to the specified URL.
+- [Set HTTP proxy](proxy-policy.md) - Allows you to route forwarded requests via an HTTP proxy.
+- [Set request method](set-method-policy.md) - Allows you to change the HTTP method for a request.
+- [Set status code](set-status-policy.md) - Changes the HTTP status code to the specified value.
+- [Set variable](set-variable-policy.md) - Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access.
+- [Trace](trace-policy.md) - Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs.
+- [Wait](wait-policy.md) - Waits for enclosed [Send request](send-request-policy.md), [Get value from cache](cache-lookup-value-policy.md), or [Control flow](choose-policy.md) policies to complete before proceeding.
## Authentication policies
-- [Authenticate with Basic](api-management-authentication-policies.md#Basic) - Authenticate with a backend service using Basic authentication.
-- [Authenticate with client certificate](api-management-authentication-policies.md#ClientCertificate) - Authenticate with a backend service using client certificates.
-- [Authenticate with managed identity](api-management-authentication-policies.md#ManagedIdentity) - Authenticate with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
+- [Authenticate with Basic](authentication-basic-policy.md) - Authenticate with a backend service using Basic authentication.
+- [Authenticate with client certificate](authentication-certificate-policy.md) - Authenticate with a backend service using client certificates.
+- [Authenticate with managed identity](authentication-managed-identity-policy.md) - Authenticate with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
## Caching policies
-- [Get from cache](api-management-caching-policies.md#GetFromCache) - Perform cache lookup and return a valid cached response when available.
-- [Store to cache](api-management-caching-policies.md#StoreToCache) - Caches response according to the specified cache control configuration.
-- [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey) - Retrieve a cached item by key.
-- [Store value in cache](api-management-caching-policies.md#StoreToCacheByKey) - Store an item in the cache by key.
-- [Remove value from cache](api-management-caching-policies.md#RemoveCacheByKey) - Remove an item in the cache by key.
+- [Get from cache](cache-lookup-policy.md) - Perform cache lookup and return a valid cached response when available.
+- [Store to cache](cache-store-policy.md) - Caches response according to the specified cache control configuration.
+- [Get value from cache](cache-lookup-value-policy.md) - Retrieve a cached item by key.
+- [Store value in cache](cache-store-value-policy.md) - Store an item in the cache by key.
+- [Remove value from cache](cache-remove-value-policy.md) - Remove an item in the cache by key.
## Cross-domain policies
-- [Allow cross-domain calls](api-management-cross-domain-policies.md#AllowCrossDomainCalls) - Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients.
-- [CORS](api-management-cross-domain-policies.md#CORS) - Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
-- [JSONP](api-management-cross-domain-policies.md#JSONP) - Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients.
+- [Allow cross-domain calls](cross-domain-policy.md) - Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients.
+- [CORS](cors-policy.md) - Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
+- [JSONP](jsonp-policy.md) - Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients.
## Dapr integration policies
-- [Send request to a service](api-management-dapr-policies.md#invoke) - uses Dapr runtime to locate and reliably communicate with a Dapr microservice.
-- [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub) - uses Dapr runtime to publish a message to a Publish/Subscribe topic.
-- [Trigger output binding](api-management-dapr-policies.md#bind) - uses Dapr runtime to invoke an external system via output binding.
+- [Send request to a service](set-backend-service-dapr-policy.md): Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file.
+- [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
+- [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
## GraphQL API policies
-- [Validate GraphQL request](graphql-policies.md#validate-graphql-request) - Validates and authorizes a request to a GraphQL API.
-- [Set GraphQL resolver](graphql-policies.md#set-graphql-resolver) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
+- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API.
+- [Set GraphQL resolver](set-graphql-resolver-policy.md) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
## Transformation policies
-- [Convert JSON to XML](api-management-transformation-policies.md#ConvertJSONtoXML) - Converts request or response body from JSON to XML.
-- [Convert XML to JSON](api-management-transformation-policies.md#ConvertXMLtoJSON) - Converts request or response body from XML to JSON.
-- [Find and replace string in body](api-management-transformation-policies.md#Findandreplacestringinbody) - Finds a request or response substring and replaces it with a different substring.
-- [Mask URLs in content](api-management-transformation-policies.md#MaskURLSContent) - Re-writes (masks) links in the response body so that they point to the equivalent link via the gateway.
-- [Set backend service](api-management-transformation-policies.md#SetBackendService) - Changes the backend service for an incoming request.
-- [Set body](api-management-transformation-policies.md#SetBody) - Sets the message body for incoming and outgoing requests.
-- [Set HTTP header](api-management-transformation-policies.md#SetHTTPheader) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
-- [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) - Adds, replaces value of, or deletes request query string parameter.
-- [Rewrite URL](api-management-transformation-policies.md#RewriteURL) - Converts a request URL from its public form to the form expected by the web service.
-- [Transform XML using an XSLT](api-management-transformation-policies.md#XSLTransform) - Applies an XSL transformation to XML in the request or response body.
+- [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML.
+- [Convert XML to JSON](xml-to-json-policy.md) - Converts request or response body from XML to JSON.
+- [Find and replace string in body](find-and-replace-policy.md) - Finds a request or response substring and replaces it with a different substring.
+- [Mask URLs in content](redirect-content-urls-policy.md) - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway.
+- [Set backend service](set-backend-service-policy.md) - Changes the backend service for an incoming request.
+- [Set body](set-body-policy.md) - Sets the message body for incoming and outgoing requests.
+- [Set HTTP header](set-header-policy.md) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
+- [Set query string parameter](set-query-parameter-policy.md) - Adds, replaces value of, or deletes request query string parameter.
+- [Rewrite URL](rewrite-uri-policy.md) - Converts a request URL from its public form to the form expected by the web service.
+- [Transform XML using an XSLT](xsl-transform-policy.md) - Applies an XSL transformation to XML in the request or response body.
## Validation policies
-- [Validate content](validation-policies.md#validate-content) - Validates the size or JSON schema of a request or response body against the API schema.
-- [Validate parameters](validation-policies.md#validate-parameters) - Validates the request header, query, or path parameters against the API schema.
-- [Validate headers](validation-policies.md#validate-headers) - Validates the response headers against the API schema.
-- [Validate status code](validation-policies.md#validate-status-code) - Validates the HTTP status codes in responses against the API schema.
+- [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.
+- [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema.
+- [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema.
+- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
## Next steps

For more information about working with policies, see:
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Previously updated : 12/08/2022 Last updated : 02/07/2022

# API Management policy expressions
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`|
-|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
+|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument`<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read either a request and response message body in specified type `T`. By default, the method:<br /><ul><li>Uses the original message body stream.</li><li>Renders it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
|<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |<a id="ref-iurl"></a>`IUrl`|`Host`: `string`<br /><br /> `Path`: `string`<br /><br /> `Port`: `int`<br /><br /> [`Query`](#ref-iurl-query): `IReadOnlyDictionary<string, string[]>`<br /><br /> `QueryString`: `string`<br /><br /> `Scheme`: `string`| |<a id="ref-iuseridentity"></a>`IUserIdentity`|`Id`: `string`<br /><br /> `Provider`: `string`|
For more information about working with policies, see:
For more information:
-- See how to supply context information to your backend service. Use the [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) and [Set HTTP header](api-management-transformation-policies.md#SetHTTPheader) policies to supply this information.
-- See how to use the [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to pre-authorize access to operations based on token claims.
+- See how to supply context information to your backend service. Use the [Set query string parameter](set-query-parameter-policy.md) and [Set HTTP header](set-header-policy.md) policies to supply this information.
+- See how to use the [Validate JWT](validate-jwt-policy.md) policy to pre-authorize access to operations based on token claims.
- See how to use an [API Inspector](./api-management-howto-api-inspector.md) trace to detect how policies are evaluated and the results of those evaluations.
-- See how to use expressions with the [Get from cache](api-management-caching-policies.md#GetFromCache) and [Store to cache](api-management-caching-policies.md#StoreToCache) policies to configure API Management response caching. Set a duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
-- See how to perform content filtering. Remove data elements from the response received from the backend using the [Control flow](api-management-advanced-policies.md#choose) and [Set body](api-management-transformation-policies.md#SetBody) policies.
+- See how to use expressions with the [Get from cache](cache-lookup-policy.md) and [Store to cache](cache-store-policy.md) policies to configure API Management response caching. Set a duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
+- See how to perform content filtering. Remove data elements from the response received from the backend using the [Control flow](choose-policy.md) and [Set body](set-body-policy.md) policies.
- To download the policy statements, see the [api-management-samples/policies](https://github.com/Azure/api-management-samples/tree/master/policies) GitHub repo.
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md
With revisions you can:
Each revision to your API can be accessed using a specially formed URL. Append `;rev={revisionNumber}` at the end of your API URL, but before the query string, to access a specific revision of that API. For example, you might use this URL to access revision 3 of the `customers` API:
-`https://apis.contoso.com/customers;rev=3?customerId=123`
+`https://apis.contoso.com/customers;rev=3/leads?customerId=123`
-By default, each revision has the same security settings as the current revision. You can deliberately change the policies for a specific revision if you want to have different security applied for each revision. For example, you might want to add an [IP filtering policy](./api-management-access-restriction-policies.md#RestrictCallerIPs) to prevent external callers from accessing a revision that is still under development.
+By default, each revision has the same security settings as the current revision. You can deliberately change the policies for a specific revision if you want to have different security applied for each revision. For example, you might want to add an [IP filtering policy](ip-filter-policy.md) to prevent external callers from accessing a revision that is still under development.
+
+> [!NOTE]
+> The `;rev={id}` must be appended to the API ID, and not the URI path.
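
As an illustration of the IP filtering mentioned above, such a restriction applied in the revision's inbound section might look like the following sketch (the address range is a placeholder, not taken from the article):

```xml
<!-- Allow calls only from an internal address range while the revision is under development -->
<ip-filter action="allow">
    <address-range from="10.0.0.1" to="10.0.0.255" />
</ip-filter>
```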
## Current revision
api-management Api Management Sample Cache By Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-cache-by-key.md
# Custom caching in Azure API Management
-Azure API Management service has built-in support for [HTTP response caching](api-management-howto-cache.md) using the resource URL as the key. The key can be modified by request headers using the `vary-by` properties. This is useful for caching entire HTTP responses (also known as representations), but sometimes it's useful to just cache a portion of a representation. The [cache-lookup-value](./api-management-caching-policies.md#GetFromCacheByKey) and [cache-store-value](./api-management-caching-policies.md#StoreToCacheByKey) policies provide the ability to store and retrieve arbitrary pieces of data from within policy definitions. This ability also adds value to the [send-request](./api-management-advanced-policies.md#SendRequest) policy because you can cache responses from external services.
+Azure API Management service has built-in support for [HTTP response caching](api-management-howto-cache.md) using the resource URL as the key. The key can be modified by request headers using the `vary-by` properties. This is useful for caching entire HTTP responses (also known as representations), but sometimes it's useful to just cache a portion of a representation. The [cache-lookup-value](cache-lookup-value-policy.md) and [cache-store-value](cache-store-value-policy.md) policies provide the ability to store and retrieve arbitrary pieces of data from within policy definitions. This ability also adds value to the [send-request](send-request-policy.md) policy because you can cache responses from external services.
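
As a rough sketch of the pattern these policies enable, the fragment below caches part of a response fetched from an external service under a custom key; the key name, URL, and duration are assumptions for illustration only, not taken from the article:

```xml
<!-- Try to read a previously cached fragment; if absent, fetch it and cache it for 5 minutes -->
<cache-lookup-value key="userProfile-123" variable-name="profile" />
<choose>
    <when condition="@(!context.Variables.ContainsKey("profile"))">
        <send-request mode="new" response-variable-name="profileResponse" timeout="20" ignore-error="true">
            <set-url>https://example.com/profiles/123</set-url>
            <set-method>GET</set-method>
        </send-request>
        <set-variable name="profile" value="@(((IResponse)context.Variables["profileResponse"]).Body.As<string>())" />
        <cache-store-value key="userProfile-123" value="@((string)context.Variables["profile"])" duration="300" />
    </when>
</choose>
```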
## Architecture

API Management service uses a shared per-tenant internal data cache so that, as you scale up to multiple units, you still get access to the same cached data. However, when working with a multi-region deployment there are independent caches within each of the regions. It's important to not treat the cache as a data store, where it's the only source of some piece of information. If you did, and later decided to take advantage of the multi-region deployment, then customers with users that travel may lose access to that cached data.
api-management Api Management Sample Flexible Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-flexible-throttling.md
Rate throttling capabilities that are scoped to a particular subscription are us
> [!NOTE]
> The `rate-limit-by-key` and `quota-by-key` policies are not available in the Consumption tier of Azure API Management.
-The [rate-limit-by-key](./api-management-access-restriction-policies.md#LimitCallRateByKey) and [quota-by-key](./api-management-access-restriction-policies.md#SetUsageQuotaByKey) policies provide a more flexible solution to traffic control. These policies allow you to define expressions to identify the keys that are used to track traffic usage. The way this works is easiest illustrated with an example.
+The [rate-limit-by-key](rate-limit-by-key-policy.md) and [quota-by-key](quota-by-key-policy.md) policies provide a more flexible solution to traffic control. These policies allow you to define expressions to identify the keys that are used to track traffic usage. The way this works is most easily illustrated with an example.
## IP address throttling

The following policies restrict a single client IP address to only 10 calls every minute, with a total of 1,000,000 calls and 10,000 kilobytes of bandwidth per month.
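
A sketch of what such a pair of policies might look like, keyed on the caller's IP address (the exact snippet isn't reproduced in this diff):

```xml
<!-- 10 calls per minute per client IP address -->
<rate-limit-by-key calls="10"
                   renewal-period="60"
                   counter-key="@(context.Request.IpAddress)" />
<!-- 1,000,000 calls and 10,000 KB of bandwidth per month per client IP address -->
<quota-by-key calls="1000000"
              bandwidth="10000"
              renewal-period="2629800"
              counter-key="@(context.Request.IpAddress)" />
```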
If an end user is authenticated, then a throttling key can be generated based on
This example shows how to extract the Authorization header, convert it to a `JWT` object, and use the subject of the token to identify the user and use that as the rate limiting key. If the user identity is stored in the `JWT` as one of the other claims, then that value could be used in its place.
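
A sketch of that kind of expression, using the token subject as the counter key (the call limits shown are illustrative):

```xml
<!-- Rate limit per authenticated user, identified by the JWT subject claim -->
<rate-limit-by-key calls="10"
                   renewal-period="60"
                   counter-key="@(context.Request.Headers.GetValueOrDefault("Authorization","").AsJwt()?.Subject)" />
```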
## Combined policies

-Although the user-based throttling policies provide more control than the subscription-based throttling policies, there is still value combining both capabilities. Throttling by product subscription key ([Limit call rate by subscription](./api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](./api-management-access-restriction-policies.md#SetUsageQuota)) is a great way to enable monetizing of an API by charging based on usage levels. The finer grained control of being able to throttle by user is complementary and prevents one user's behavior from degrading the experience of another.
+Although the user-based throttling policies provide more control than the subscription-based throttling policies, there is still value in combining both capabilities. Throttling by product subscription key ([Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md)) is a great way to enable monetization of an API by charging based on usage levels. The finer-grained control of being able to throttle by user is complementary and prevents one user's behavior from degrading the experience of another.
## Client driven throttling

When the throttling key is defined using a [policy expression](./api-management-policy-expressions.md), then it is the API provider that is choosing how the throttling is scoped. However, a developer might want to control how they rate limit their own customers. This could be enabled by the API provider by introducing a custom header to allow the developer's client application to communicate the key to the API.
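
For example, a policy along these lines would scope the limit by whatever value the client sends in a custom header; the header name `Rate-Key` and the limits are hypothetical:

```xml
<!-- Let the calling application choose the throttling key via a custom header -->
<rate-limit-by-key calls="100"
                   renewal-period="60"
                   counter-key="@(context.Request.Headers.GetValueOrDefault("Rate-Key",""))" />
```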
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-send-request.md
The policies available in Azure API Management service can do a wide range of us
You have previously seen how to interact with the [Azure Event Hub service for logging, monitoring, and analytics](api-management-log-to-eventhub-sample.md). This article demonstrates policies that allow you to interact with any external HTTP-based service. These policies can be used for triggering remote events or for retrieving information that is used to manipulate the original request and response in some way.

## Send-One-Way-Request
-Possibly the simplest external interaction is the fire-and-forget style of request that allows an external service to be notified of some kind of important event. The control flow policy `choose` can be used to detect any kind of condition that you are interested in. If the condition is satisfied, you can make an external HTTP request using the [send-one-way-request](./api-management-advanced-policies.md#SendOneWayRequest) policy. This could be a request to a messaging system like Hipchat or Slack, or a mail API like SendGrid or MailChimp, or for critical support incidents something like PagerDuty. All of these messaging systems have simple HTTP APIs that can be invoked.
+Possibly the simplest external interaction is the fire-and-forget style of request that allows an external service to be notified of some kind of important event. The control flow policy `choose` can be used to detect any kind of condition that you are interested in. If the condition is satisfied, you can make an external HTTP request using the [send-one-way-request](./send-one-way-request-policy.md) policy. This could be a request to a messaging system like Hipchat or Slack, or a mail API like SendGrid or MailChimp, or for critical support incidents something like PagerDuty. All of these messaging systems have simple HTTP APIs that can be invoked.
### Alerting with Slack

The following example demonstrates how to send a message to a Slack chat room if the HTTP response status code is greater than or equal to 500. A 500 range error indicates a problem with the backend API that the client of the API cannot resolve themselves. It usually requires some kind of intervention on API Management's part.
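
A sketch of such an outbound policy fragment is shown below; the webhook URL and message fields are placeholders, not the article's own values:

```xml
<choose>
    <when condition="@(context.Response.StatusCode >= 500)">
        <send-one-way-request mode="new">
            <set-url>https://hooks.slack.com/services/EXAMPLE/WEBHOOK/URL</set-url>
            <set-method>POST</set-method>
            <set-body>@{
                // Build a simple Slack message describing the failing request and response
                return new JObject(
                    new JProperty("username", "APIM Alert"),
                    new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}",
                        context.Request.Method,
                        context.Request.Url.Path + context.Request.Url.QueryString,
                        context.Request.Url.Host,
                        context.Response.StatusCode,
                        context.Response.StatusReason))
                ).ToString();
            }</set-body>
        </send-one-way-request>
    </when>
</choose>
```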
Slack has the notion of inbound web hooks. When configuring an inbound web hook,
![Slack Web Hook](./media/api-management-sample-send-request/api-management-slack-webhook.png)

### Is fire and forget good enough?
-There are certain tradeoffs when using a fire-and-forget style of request. If for some reason, the request fails, then the failure will not be reported. In this particular situation, the complexity of having a secondary failure reporting system and the additional performance cost of waiting for the response is not warranted. For scenarios where it is essential to check the response, then the [send-request](./api-management-advanced-policies.md#SendRequest) policy is a better option.
+There are certain tradeoffs when using a fire-and-forget style of request. If, for some reason, the request fails, then the failure will not be reported. In this particular situation, the complexity of having a secondary failure reporting system and the additional performance cost of waiting for the response is not warranted. For scenarios where it is essential to check the response, the [send-request](./send-request-policy.md) policy is a better option.
## Send-Request

The `send-request` policy enables using an external service to perform complex processing functions and return data to the API management service that can be used for further policy processing.
API Management will send these requests sequentially.
### Responding
-To construct the composite response, you can use the [return-response](./api-management-advanced-policies.md#ReturnResponse) policy. The `set-body` element can use an expression to construct a new `JObject` with all the component representations embedded as properties.
+To construct the composite response, you can use the [return-response](return-response-policy.md) policy. The `set-body` element can use an expression to construct a new `JObject` with all the component representations embedded as properties.
```xml <return-response response-variable-name="existing response variable">
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
In addition,
> API Management also supports other mechanisms for securing access to APIs, including the following examples:
> - [OAuth2.0](api-management-howto-protect-backend-with-aad.md)
> - [Client certificates](api-management-howto-mutual-certificates-for-clients.md)
-> - [Restrict caller IPs](./api-management-access-restriction-policies.md#RestrictCallerIPs)
+> - [Restrict caller IPs](ip-filter-policy.md)
## Manage subscription keys
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
- Title: Azure API Management transformation policies | Microsoft Docs
-description: Reference for the transformation policies available for use in Azure API Management. Provides policy usage, settings, and examples.
----- Previously updated : 12/08/2022---
-# API Management transformation policies
-This article provides a reference for API Management policies used to transform API requests or responses.
--
-## <a name="TransformationPolicies"></a> Transformation policies
-- [Convert JSON to XML](#ConvertJSONtoXML) - Converts request or response body from JSON to XML.
-- [Convert XML to JSON](#ConvertXMLtoJSON) - Converts request or response body from XML to JSON.
-- [Find and replace string in body](#Findandreplacestringinbody) - Finds a request or response substring and replaces it with a different substring.
-- [Mask URLs in content](#MaskURLSContent) - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway.
-- [Set backend service](#SetBackendService) - Changes the backend service for an incoming request.
-- [Set body](#SetBody) - Sets the message body for incoming and outgoing requests.
-- [Set HTTP header](#SetHTTPheader) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
-- [Set query string parameter](#SetQueryStringParameter) - Adds, replaces value of, or deletes request query string parameter.
-- [Rewrite URL](#RewriteURL) - Converts a request URL from its public form to the form expected by the web service.
-- [Transform XML using an XSLT](#XSLTransform) - Applies an XSL transformation to XML in the request or response body.
-## <a name="ConvertJSONtoXML"></a> Convert JSON to XML
- The `json-to-xml` policy converts a request or response body from JSON to XML.
--
-### Policy statement
-
-```xml
-<json-to-xml
- apply="always | content-type-json"
- consider-accept-header="true | false"
- parse-date="true | false"
- namespace-separator="separator character"
- namespace-prefix="namepsace prefix"
- attribute-block-name="name" />
-```
-
-### Example
-
-Consider the following policy:
-
-```xml
-<policies>
- <inbound>
- <base />
- </inbound>
- <outbound>
- <base />
- <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" namespace-prefix="xmlns" attribute-block-name="#attrs" />
- </outbound>
-</policies>
-```
-
-If the backend returns the following JSON:
-
-``` json
-{
- "soapenv:Envelope": {
- "xmlns:soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
- "xmlns:v1": "http://localdomain.com/core/v1",
- "soapenv:Header": {},
- "soapenv:Body": {
- "v1:QueryList": {
- "#attrs": {
- "queryName": "test"
- },
- "v1:QueryItem": {
- "name": "dummy text"
- }
- }
- }
- }
-}
-```
-
-The XML response to the client will be:
-
-``` xml
-<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://localdomain.com/core/v1">
- <soapenv:Header />
- <soapenv:Body>
- <v1:QueryList queryName="test">
- <name>dummy text</name>
- </v1:QueryList>
- </soapenv:Body>
-</soapenv:Envelope>
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|json-to-xml|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|apply|The attribute must be set to one of the following values.<br /><br /> - always - always apply conversion.<br />- content-type-json - convert only if response Content-Type header indicates presence of JSON.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - true - apply conversion if XML is requested in request Accept header.<br />- false -always apply conversion.|No|true|
-|parse-date|When set to `false` date values are simply copied during transformation|No|true|
-|namespace-separator|The character to use as a namespace separator|No|Underscore|
-|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations.|No|N/A|
-|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, on-error
-- **Policy scopes:** all scopes
-## <a name="ConvertXMLtoJSON"></a> Convert XML to JSON
- The `xml-to-json` policy converts a request or response body from XML to JSON. This policy can be used to modernize APIs based on XML-only backend web services.
--
-### Policy statement
-
-```xml
-<xml-to-json kind="javascript-friendly | direct" apply="always | content-type-xml" consider-accept-header="true | false"/>
-```
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- </inbound>
- <outbound>
- <base />
- <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
- </outbound>
-</policies>
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|xml-to-json|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|kind|The attribute must be set to one of the following values.<br /><br /> - javascript-friendly - the converted JSON has a form friendly to JavaScript developers.<br />- direct - the converted JSON reflects the original XML document's structure.|Yes|N/A|
-|apply|The attribute must be set to one of the following values.<br /><br /> - always - convert always.<br />- content-type-xml - convert only if response Content-Type header indicates presence of XML.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - true - apply conversion if JSON is requested in request Accept header.<br />- false -always apply conversion.|No|true|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, on-error
-- **Policy scopes:** all scopes
-## <a name="Findandreplacestringinbody"></a> Find and replace string in body
- The `find-and-replace` policy finds a request or response substring and replaces it with a different substring.
---
-### Policy statement
-
-```xml
-<find-and-replace from="what to replace" to="replacement" />
-```
-
-### Example
-
-```xml
-<find-and-replace from="notebook" to="laptop" />
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|find-and-replace|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|from|The string to search for.|Yes|N/A|
-|to|The replacement string. Specify a zero length replacement string to remove the search string.|Yes|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## <a name="MaskURLSContent"></a> Mask URLs in content
- The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to rewrite response body links to make them point to the gateway. Use in the inbound section for an opposite effect.
-
-> [!NOTE]
-> This policy does not change any header values such as `Location` headers. To change header values, use the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy.
--
-### Policy statement
-
-```xml
-<redirect-content-urls />
-```
-
-### Example
-
-```xml
-<redirect-content-urls />
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|redirect-content-urls|Root element.|Yes|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound
-- **Policy scopes:** all scopes
-## <a name="SetBackendService"></a> Set backend service
- Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to the one specified in the policy.
--
-### Policy statement
-
-```xml
-<set-backend-service base-url="base URL of the backend service" />
-```
-
-or
-
-```xml
-<set-backend-service backend-id="name of the backend entity specifying base URL of the backend service" />
-```
-
-> [!NOTE]
-> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement). Currently, if you define a base `set-backend-service` policy using the `backend-id` attribute and inherit the base policy using `<base />` within the scope, then it can be only overridden with a policy using the `backend-id` attribute, not the `base-url` attribute.
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <choose>
- <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2013-05")">
- <set-backend-service base-url="http://contoso.com/api/8.2/" />
- </when>
- <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2014-03")">
- <set-backend-service base-url="http://contoso.com/api/9.1/" />
- </when>
- </choose>
- <base />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
-In this example the set backend service policy routes requests based on the version value passed in the query string to a different backend service than the one specified in the API.
-
-Initially the backend service base URL is derived from the API settings. So the request URL `https://contoso.azure-api.net/api/partners/15?version=2013-05&subscription-key=abcdef` becomes `http://contoso.com/api/10.4/partners/15?version=2013-05&subscription-key=abcdef` where `http://contoso.com/api/10.4/` is the backend service URL specified in the API settings.
-
-When the [<choose\>](api-management-advanced-policies.md#choose) policy statement is applied the backend service base URL may change again either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-05"` the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-05&subscription-key=abcdef`.
-
-If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md#TransformationPolicies) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) policy can be used to remove the now redundant version attribute.
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <set-backend-service backend-id="my-sf-service" sf-partition-key="@(context.Request.Url.Query.GetValueOrDefault("userId",""))" sf-replica-type="primary" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
-In this example the policy routes the request to a service fabric backend, using the userId query string as the partition key and using the primary replica of the partition.
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|set-backend-service|Root element.|Yes|
-
-### Attributes
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|base-url|New backend service base URL.|One of `base-url` or `backend-id` must be present.|N/A|
-|backend-id|Identifier (name) of the backend to route to. (Backend entities are managed via [Azure portal](how-to-configure-service-fabric-backend.md), [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).)|One of `base-url` or `backend-id` must be present.|N/A|
-|sf-partition-key|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Used to resolve a specific partition from the name resolution service.|No|N/A|
-|sf-replica-type|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Controls if the request should go to the primary or secondary replica of a partition. |No|N/A|
-|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution.|No|N/A|
-|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. |No|N/A|
-|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using 'backend-id'. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, backend
-- **Policy scopes:** all scopes
-## <a name="SetBody"></a> Set body
- Use the `set-body` policy to set the message body for incoming and outgoing requests. To access the message body you can use the `context.Request.Body` property or the `context.Response.Body`, depending on whether the policy is in the inbound or outbound section.
-
-> [!IMPORTANT]
-> Note that by default when you access the message body using `context.Request.Body` or `context.Response.Body`, the original message body is lost and must be set by returning the body back in the expression. To preserve the body content, set the `preserveContent` parameter to `true` when accessing the message. If `preserveContent` is set to `true` and a different body is returned by the expression, the returned body is used.
->
-> Please note the following considerations when using the `set-body` policy.
->
-> - If you are using the `set-body` policy to return a new or updated body you don't need to set `preserveContent` to `true` because you are explicitly supplying the new body contents.
-> - Preserving the content of a response in the inbound pipeline doesn't make sense because there is no response yet.
-> - Preserving the content of a request in the outbound pipeline doesn't make sense because the request has already been sent to the backend at this point.
-> - If this policy is used when there is no message body, for example in an inbound GET, an exception is thrown.
-
- For more information, see the `context.Request.Body`, `context.Response.Body`, and the `IMessageBody` sections in the [Context variable](api-management-policy-expressions.md#ContextVariables) table.
--
-### Policy statement
-
-```xml
-<set-body template="liquid" xsi-nil="blank | null">
- new body value as text
-</set-body>
-```
-
-### Examples
-
-#### Literal text example
-
-```xml
-<set-body>Hello world!</set-body>
-```
-
-#### Example accessing the body as a string
-
-We are preserving the original request body so that we can access it later in the pipeline.
-
-```xml
-<set-body>
-@{
-    string inBody = context.Request.Body.As<string>(preserveContent: true);
-    if (inBody[0] == 'c') {
-        // strings are immutable in C#, so build a new string rather than assigning to an index
-        inBody = "m" + inBody.Substring(1);
-    }
-    return inBody;
-}
-</set-body>
-```
-
-#### Example accessing the body as a JObject
-
Since we are not preserving the original request body, accessing it later in the pipeline will result in an exception.
-
-```xml
-<set-body>
-@{
-    JObject inBody = context.Request.Body.As<JObject>();
-    // example check and update of a property on the JSON body
-    if (inBody["attribute"] != null) {
-        inBody["attribute"] = "m";
-    }
-    return inBody.ToString();
-}
-</set-body>
-
-```
-
-#### Example accessing the body as URL-encoded form data
-The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we are not preserving the original request body, accessing it later in the pipeline will result in an exception.
-
-```xml
-<set-body> 
-@{ 
- var inBody = context.Request.Body.AsFormUrlEncodedContent();
- return JsonConvert.SerializeObject(inBody); 
-} 
-</set-body>
-
-```
-
-#### Filter response based on product
- This example shows how to perform content filtering by removing data elements from the response received from a backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
-
-```xml
-<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
-<choose>
- <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))">
- <set-body>@{
- var response = context.Response.Body.As<JObject>();
- foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
- response.Property (key).Remove ();
- }
- return response.ToString();
- }
- </set-body>
- </when>
-</choose>
-```
-
-### Using Liquid templates with set body
-The `set-body` policy can be configured to use the [Liquid](https://shopify.github.io/liquid/basics/introduction/) templating language to transform the body of a request or response. This can be effective if you need to completely reshape the format of your message.
-
-> [!IMPORTANT]
-> The implementation of Liquid used in the `set-body` policy is configured in 'C# mode'. This is particularly important when doing things such as filtering. As an example, using a date filter requires the use of Pascal casing and C# date formatting e.g.:
->
-> {{body.foo.startDateTime| Date:"yyyyMMddTHH:mm:ssZ"}}
-
-> [!IMPORTANT]
-> In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set Content-Type to either application/xml, text/xml (or any type ending with +xml); for a JSON body, it must be application/json, text/json (or any type ending with +json).
-
-#### Supported Liquid filters
-
-The following Liquid filters are supported in the `set-body` policy. For filter examples, see the [Liquid documentation](https://shopify.github.io/liquid/).
-
-> [!NOTE]
-> The policy requires Pascal casing for Liquid filter names (for example, "AtLeast" instead of "at_least").
->
-* Abs
-* Append
-* AtLeast
-* AtMost
-* Capitalize
-* Compact
-* Currency
-* Date
-* Default
-* DividedBy
-* Downcase
-* Escape
-* First
-* H
-* Join
-* Last
-* Lstrip
-* Map
-* Minus
-* Modulo
-* NewlineToBr
-* Plus
-* Prepend
-* Remove
-* RemoveFirst
-* Replace
-* ReplaceFirst
-* Round
-* Rstrip
-* Size
-* Slice
-* Sort
-* Split
-* Strip
-* StripHtml
-* StripNewlines
-* Times
-* Truncate
-* TruncateWords
-* Uniq
-* Upcase
-* UrlDecode
-* UrlEncode
-
-#### Convert JSON to SOAP using a Liquid template
-```xml
-<set-body template="liquid">
- <soap:Envelope xmlns="http://tempuri.org/" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
- <soap:Body>
- <GetOpenOrders>
- <cust>{{body.getOpenOrders.cust}}</cust>
- </GetOpenOrders>
- </soap:Body>
- </soap:Envelope>
-</set-body>
-```
-
-#### Transform JSON using a Liquid template
-```xml
-{
-"order": {
- "id": "{{body.customer.purchase.identifier}}",
- "summary": "{{body.customer.purchase.orderShortDesc}}"
- }
-}
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|set-body|Root element. Contains the body text or an expression that returns a body.|Yes|
-
-### Properties
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|template|Used to change the templating mode that the `set-body` policy will run in. Currently the only supported value is:<br /><br />- liquid - the `set-body` policy will use the liquid templating engine |No| N/A|
-|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values.<br /><br />- blank - `nil` is represented with an empty string.<br />- null - `nil` is represented with a null value.|No | blank |
-
-For accessing information about the request and response, the Liquid template can bind to a context object with the following properties: <br />
-<pre>context.
- Request.
- Url
- Method
- OriginalMethod
- OriginalUrl
- IpAddress
- MatchedParameters
- HasBody
- ClientCertificates
- Headers
-
- Response.
- StatusCode
- Method
- Headers
-Url.
- Scheme
- Host
- Port
- Path
- Query
- QueryString
- ToUri
- ToString
-
-OriginalUrl.
- Scheme
- Host
- Port
- Path
- Query
- QueryString
- ToUri
- ToString
-</pre>
---
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend
-- **Policy scopes:** all scopes
-## <a name="SetHTTPheader"></a> Set HTTP header
- The `set-header` policy assigns a value to an existing response and/or request header or adds a new response and/or request header.
-
- Use the policy to insert a list of HTTP headers into an HTTP message. When placed in an inbound pipeline, this policy sets the HTTP headers for the request being passed to the target service. When placed in an outbound pipeline, this policy sets the HTTP headers for the response being sent to the gateway's client.
--
-### Policy statement
-
-```xml
-<set-header name="header name" exists-action="override | skip | append | delete">
- <value>value</value> <!--for multiple headers with the same name add additional value elements-->
-</set-header>
-```
-
-### Examples
-
-#### Example - adding header, override existing
-
-```xml
-<set-header name="some header name" exists-action="override">
- <value>20</value>
-</set-header>
-```
-#### Example - removing header
-
-```xml
- <set-header name="some header name" exists-action="delete" />
-```
---
-#### Forward context information to the backend service
- This example shows how to apply policy at the API level to supply context information to the backend service.
-
-```xml
-<!-- Copy this snippet into the inbound element to forward some context information, user id and the region the gateway is hosted in, to the backend service for logging or evaluation -->
-<set-header name="x-request-context-data" exists-action="override">
- <value>@(context.User.Id)</value>
- <value>@(context.Deployment.Region)</value>
-</set-header>
-```
-
- For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
-
-> [!NOTE]
-> Multiple values of a header are concatenated to a CSV string, for example:
-> `headerName: value1,value2,value3`
->
-> Exceptions include standardized headers whose values:
-> - may contain commas (`User-Agent`, `WWW-Authenticate`, `Proxy-Authenticate`),
-> - may contain date (`Cookie`, `Set-Cookie`, `Warning`),
-> - contain date (`Date`, `Expires`, `If-Modified-Since`, `If-Unmodified-Since`, `Last-Modified`, `Retry-After`).
->
-> In case of those exceptions, multiple header values will not be concatenated into one string and will be passed as separate headers, for example:
->`User-Agent: value1`
->`User-Agent: value2`
->`User-Agent: value3`
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|set-header|Root element.|Yes|
-|value|Specifies the value of the header to be set. For multiple headers with the same name add additional `value` elements.|No|
-
-### Properties
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|exists-action|Specifies what action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing header.<br />- skip - does not replace the existing header value.<br />- append - appends the value to the existing header value.<br />- delete - removes the header from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|override|
-|name|Specifies name of the header to be set.|Yes|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound, backend, on-error
-- **Policy scopes:** all scopes
-## <a name="SetQueryStringParameter"></a> Set query string parameter
 The `set-query-parameter` policy adds, replaces the value of, or deletes a request query string parameter. It can be used to pass query parameters expected by the backend service that are optional or never present in the request.
--
-### Policy statement
-
-```xml
-<set-query-parameter name="param name" exists-action="override | skip | append | delete">
- <value>value</value> <!--for multiple parameters with the same name add additional value elements-->
-</set-query-parameter>
-```
-
-#### Example
-
-```xml
-
-<set-query-parameter name="api-key" exists-action="skip">
- <value>12345678901</value>
-</set-query-parameter>
-
-```
-
-#### Forward context information to the backend service
- This example shows how to apply policy at the API level to supply context information to the backend service.
-
-```xml
-<!-- Copy this snippet into the inbound element to forward a piece of context, product name in this example, to the backend service for logging or evaluation -->
-<set-query-parameter name="x-product-name" exists-action="override">
- <value>@(context.Product.Name)</value>
-</set-query-parameter>
-
-```
-
- For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|set-query-parameter|Root element.|Yes|
-|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name add additional `value` elements.|Yes|
-
-### Properties
-
-|Name|Description|Required|Default|
-|-|--|--|-|
-|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing parameter.<br />- skip - does not replace the existing query parameter value.<br />- append - appends the value to the existing query parameter value.<br />- delete - removes the query parameter from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|override|
-|name|Specifies name of the query parameter to be set.|Yes|N/A|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, backend
-- **Policy scopes:** all scopes
-## <a name="RewriteURL"></a> Rewrite URL
- The `rewrite-uri` policy converts a request URL from its public form to the form expected by the web service, as shown in the following example.
--- Public URL - `http://api.example.com/storenumber/ordernumber`--- Request URL - `http://api.example.com/v2/US/hardware/storenumber&ordernumber?City&State`-
- This policy can be used when a human and/or browser-friendly URL should be transformed into the URL format expected by the web service. This policy only needs to be applied when exposing an alternative URL format, such as clean URLs, RESTful URLs, user-friendly URLs or SEO-friendly URLs that are purely structural URLs that do not contain a query string and instead contain only the path of the resource (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
-
-> [!NOTE]
-> You can only add query string parameters using the policy. You cannot add extra template path parameters in the rewrite URL.
--
-### Policy statement
-
-```xml
-<rewrite-uri template="uri template" copy-unmatched-params="true | false" />
-```
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- <rewrite-uri template="/v2/US/hardware/{storenumber}&{ordernumber}?City=city&State=state" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-```
-```xml
-<!-- Assuming incoming request is /get?a=b&c=d and operation template is set to /get?a={b} -->
-<policies>
- <inbound>
- <base />
- <rewrite-uri template="/put" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-<!-- Resulting URL will be /put?c=d -->
-```
-```xml
-<!-- Assuming incoming request is /get?a=b&c=d and operation template is set to /get?a={b} -->
-<policies>
- <inbound>
- <base />
- <rewrite-uri template="/put" copy-unmatched-params="false" />
- </inbound>
- <outbound>
- <base />
- </outbound>
-</policies>
-<!-- Resulting URL will be /put -->
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|rewrite-uri|Root element.|Yes|
-
-### Attributes
-
-|Attribute|Description|Required|Default|
-||--|--|-|
-|template|The actual web service URL with any query string parameters. When using expressions, the whole value must be an expression.|Yes|N/A|
-|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the re-write template|No|true|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound
-- **Policy scopes:** all scopes
-## <a name="XSLTransform"></a> Transform XML using an XSLT
- The `Transform XML using an XSLT` policy applies an XSL transformation to XML in the request or response body.
--
-### Policy statement
-
-```xml
-<xsl-transform>
- <parameter name="User-Agent">@(context.Request.Headers.GetValueOrDefault("User-Agent","non-specified"))</parameter>
- <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
- <xsl:output method="xml" indent="yes" />
- <xsl:param name="User-Agent" />
- <xsl:template match="* | @* | node()">
- <xsl:copy>
- <xsl:if test="self::* and not(parent::*)">
- <xsl:attribute name="User-Agent">
- <xsl:value-of select="$User-Agent" />
- </xsl:attribute>
- </xsl:if>
- <xsl:apply-templates select="* | @* | node()" />
- </xsl:copy>
- </xsl:template>
- </xsl:stylesheet>
- </xsl-transform>
-```
-
-### Example
-
-```xml
-<policies>
- <inbound>
- <base />
- </inbound>
- <outbound>
- <base />
- <xsl-transform>
- <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
- <xsl:output omit-xml-declaration="yes" method="xml" indent="yes" />
- <!-- Copy all nodes directly-->
- <xsl:template match="node()| @*|*">
- <xsl:copy>
- <xsl:apply-templates select="@* | node()|*" />
- </xsl:copy>
- </xsl:template>
- </xsl:stylesheet>
- </xsl-transform>
- </outbound>
-</policies>
-```
-
-### Elements
-
-|Name|Description|Required|
-|-|--|--|
-|xsl-transform|Root element.|Yes|
-|parameter|Used to define variables used in the transform|No|
-|xsl:stylesheet|Root stylesheet element. All elements and attributes defined within follow the standard [XSLT specification](https://www.w3.org/TR/xslt)|Yes|
-
-### Usage
- This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-- **Policy sections:** inbound, outbound
-- **Policy scopes:** all scopes
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
The most common scenario is when the Azure API Management instance is a "transpa
:::image type="content" source="media/authentication-authorization-overview/oauth-token-backend.svg" alt-text="Diagram showing OAuth communication where audience is the backend." border="false":::
-In this scenario, the access token sent along with the HTTP request is intended for the backend API, not API Management. However, API Management still allows for a defense in depth approach. For example, configure policies to [validate the token](api-management-access-restriction-policies.md#ValidateJWT), rejecting requests that arrive without a token, or a token that's not valid for the intended backend API. You can also configure API Management to check other claims of interest extracted from the token.
+In this scenario, the access token sent along with the HTTP request is intended for the backend API, not API Management. However, API Management still allows for a defense in depth approach. For example, configure policies to [validate the token](validate-jwt-policy.md), rejecting requests that arrive without a token, or a token that's not valid for the intended backend API. You can also configure API Management to check other claims of interest extracted from the token.
For an example, see [Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
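
As a rough illustration of such a `validate-jwt` check, a fragment along the following lines could be placed in the inbound section; the tenant and audience values are placeholders for your own Azure AD tenant and backend app registration:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <!-- {tenant-id} and the audience below are placeholders, not values from this article -->
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://{backend-app-client-id}</audience>
    </audiences>
</validate-jwt>
```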
There are different reasons for wanting to do this. For example:
* A custom policy to obtain an onward access token valid for the backend API from a configured identity provider.
 - * The API Management instance's own identity – passing the token from the API Management resource's system-assigned or user-assigned [managed identity](api-management-authentication-policies.md#ManagedIdentity) to the backend API.
 + * The API Management instance's own identity – passing the token from the API Management resource's system-assigned or user-assigned [managed identity](authentication-managed-identity-policy.md) to the backend API.
### Token management by API Management
With authorizations, API Management manages the tokens for access to OAuth 2.0 b
Although authorization is preferred and OAuth 2.0 has become the dominant method of enabling strong authorization for APIs, API Management enables other authentication options that can be useful if the backend or calling applications are legacy or don't yet support OAuth. Options include:

* Mutual TLS (mTLS), also known as client certificate authentication, between the client (app) and API Management. This authentication can be end-to-end, with the call between API Management and the backend API secured in the same way. For more information, see [How to secure APIs using client certificate authentication in API Management](api-management-howto-mutual-certificates-for-clients.md)
-* Basic authentication, using the [authentication-basic](api-management-authentication-policies.md#Basic) policy.
+* Basic authentication, using the [authentication-basic](authentication-basic-policy.md) policy.
* Subscription key, also known as an API key. For more information, see [Subscriptions in API Management](api-management-subscriptions.md).

> [!NOTE]
Key configurations:
|||
| Authorize developer users of the API Management developer portal using their corporate identities and Azure AD. | [Authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md) |
|Set up the test console in the developer portal to obtain a valid OAuth 2.0 token for the desktop app developers to exercise the backend API. <br/><br/>The same configuration can be used for the test console in the Azure portal, which is accessible to the API Management contributors and backend developers. <br/><br/>The token could be used in combination with an API Management subscription key. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
-| Validate the OAuth 2.0 token and claims when an API is called through API Management with an access token. | [Validate JWT policy](api-management-access-restriction-policies.md#ValidateJWT) |
+| Validate the OAuth 2.0 token and claims when an API is called through API Management with an access token. | [Validate JWT policy](validate-jwt-policy.md) |
Go a step further with this scenario by moving API Management into the network perimeter and controlling ingress through a reverse proxy. For a reference architecture, see [Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis).
Key configurations:
|Configuration |Reference |
|||
| Configure frontend developer access to the developer portal using the default username and password authentication.<br/><br/>Developers can also be invited to the developer portal. | [Configure users of the developer portal to authenticate using usernames and passwords](developer-portal-basic-authentication.md)<br/><br/>[How to manage user accounts in Azure API Management](api-management-howto-create-or-invite-developers.md) |
-| Validate the OAuth 2.0 token and claims when the SPA calls API Management with an access token. In this case, the audience is API Management. | [Validate JWT policy](api-management-access-restriction-policies.md#ValidateJWT) |
+| Validate the OAuth 2.0 token and claims when the SPA calls API Management with an access token. In this case, the audience is API Management. | [Validate JWT policy](validate-jwt-policy.md) |
| Set up API Management to use client certificate authentication to the backend. | [Secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md) |

Go a step further with this scenario by using the [developer portal with Azure AD authorization](api-management-howto-aad.md) and Azure AD [B2B collaboration](../active-directory/external-identities/what-is-b2b.md) to allow the delivery partners to collaborate more closely. Consider delegating access to API Management through RBAC in a development or test environment and enable SSO into the developer portal using their own corporate credentials.
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
+
+ Title: Azure API Management policy reference - authentication-basic | Microsoft Docs
+description: Reference for the authentication-basic policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/01/2022+++
+# Authenticate with Basic
+
+Use the `authentication-basic` policy to authenticate with a backend service using Basic authentication. This policy effectively sets the HTTP Authorization header to the value corresponding to the credentials provided in the policy.
+++
+## Policy statement
+
+```xml
+<authentication-basic username="username" password="password" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|username|Specifies the username of the Basic credential.|Yes|N/A|
+|password|Specifies the password of the Basic credential.|Yes|N/A|
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<authentication-basic username="testuser" password="testpassword" />
+```
+
+## Related policies
+
+* [API Management authentication policies](api-management-authentication-policies.md)
+
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
+
+ Title: Azure API Management policy reference - authentication-certificate | Microsoft Docs
+description: Reference for the authentication-certificate policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/01/2022+++
+# Authenticate with client certificate
+
+ Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. First [install the certificate into API Management](./api-management-howto-mutual-certificates.md), and then identify it by its thumbprint or certificate ID (resource name).
+
+> [!CAUTION]
+> If the policy references a certificate stored in Azure Key Vault, identify it using the certificate ID. When a key vault certificate is rotated, its thumbprint in API Management will change, and the policy will not resolve the new certificate if it is identified by thumbprint.
+++
+## Policy statement
+
+```xml
+<authentication-certificate thumbprint="thumbprint" certificate-id="resource name" body="certificate byte array" password="optional password"/>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|thumbprint|The thumbprint for the client certificate.|Either `thumbprint` or `certificate-id` can be present.|N/A|
+|certificate-id|The certificate resource name.|Either `thumbprint` or `certificate-id` can be present.|N/A|
+|body|Client certificate as a byte array. Use if the certificate isn't retrieved from the built-in certificate store.|No|N/A|
+|password|Password for the client certificate.|Use if certificate specified in `body` is password protected.|N/A|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Client certificate identified by the certificate ID
+
+```xml
+<authentication-certificate certificate-id="544fe9ddf3b8f30fb490d90f" />
+```
+
+### Client certificate identified by thumbprint
+
+```xml
+<authentication-certificate thumbprint="CA06F56B258B7A0D4F2B05470939478651151984" />
+```
+
+### Client certificate set in the policy rather than retrieved from the built-in certificate store
+
+```xml
+<authentication-certificate body="@(context.Variables.GetValueOrDefault<byte[]>("byteCertificate"))" password="optional-certificate-password" />
+```
+
+## Related policies
+
+* [API Management authentication policies](api-management-authentication-policies.md)
+
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
+
+ Title: Azure API Management policy reference - authentication-managed-identity | Microsoft Docs
+description: Reference for the authentication-managed-identity policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/06/2022+++
+# Authenticate with managed identity
+
+ Use the `authentication-managed-identity` policy to authenticate with a backend service using the managed identity. This policy essentially uses the managed identity to obtain an access token from Azure Active Directory for accessing the specified resource. After successfully obtaining the token, the policy will set the value of the token in the `Authorization` header using the `Bearer` scheme. API Management caches the token until it expires.
+
+Both the system-assigned identity and any of the multiple user-assigned identities can be used to request a token. If `client-id` is not provided, the system-assigned identity is assumed. If the `client-id` attribute is provided, a token is requested for that user-assigned identity from Azure Active Directory.
++
+
+## Policy statement
+
+```xml
+<authentication-managed-identity resource="resource" client-id="clientid of user-assigned identity" output-token-variable-name="token-variable" ignore-error="true|false"/>
+```
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|resource|String. The application ID of the target web API (secured resource) in Azure Active Directory.|Yes|N/A|
+|client-id|String. The client ID of the user-assigned identity in Azure Active Directory.|No|system-assigned identity|
+|output-token-variable-name|String. Name of the context variable that will receive token value as an object of type `string`. |No|N/A|
+|ignore-error|Boolean. If set to `true`, the policy pipeline will continue to execute even if an access token is not obtained.|No|`false`|
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Use managed identity to authenticate with a backend service
+```xml
+<authentication-managed-identity resource="https://graph.microsoft.com"/>
+```
+```xml
+<authentication-managed-identity resource="https://management.azure.com/"/> <!--Azure Resource Manager-->
+```
+```xml
+<authentication-managed-identity resource="https://vault.azure.net"/> <!--Azure Key Vault-->
+```
+```xml
+<authentication-managed-identity resource="https://servicebus.azure.net/"/> <!--Azure Service Bus-->
+```
+```xml
+<authentication-managed-identity resource="https://storage.azure.com/"/> <!--Azure Blob Storage-->
+```
+```xml
+<authentication-managed-identity resource="https://database.windows.net/"/> <!--Azure SQL-->
+```
+
+```xml
+<authentication-managed-identity resource="AD_application_id"/> <!--Application (client) ID of your own Azure AD Application-->
+```
+
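+### Use user-assigned managed identity to authenticate with a backend service
+
+The following is a minimal sketch that requests a token for a user-assigned identity by passing its client ID. The client ID shown is a hypothetical placeholder; replace it with the client ID of a user-assigned identity added to your API Management instance.
+
+```xml
+<authentication-managed-identity resource="https://graph.microsoft.com" client-id="00001111-aaaa-2222-bbbb-3333cccc4444"/> <!--hypothetical client ID of a user-assigned identity-->
+```
+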
+### Use managed identity and set header manually
+
+```xml
+<authentication-managed-identity resource="AD_application_id"
+ output-token-variable-name="msi-access-token" ignore-error="false" /> <!--Application (client) ID of your own Azure AD Application-->
+<set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
+</set-header>
+```
+
+### Use managed identity in send-request policy
+```xml
+<send-request mode="new" timeout="20" ignore-error="false">
+ <set-url>https://example.com/</set-url>
+ <set-method>GET</set-method>
+ <authentication-managed-identity resource="ResourceID"/>
+</send-request>
+```
+
+## Related policies
+
+* [API Management authentication policies](api-management-authentication-policies.md)
+
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
Four steps are needed to set up an authorization with the authorization code gra
- Because the incoming request to API Management will consist of a query parameter called *username*, add the username to the backend call. > [!NOTE]
- > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](api-management-access-restriction-policies.md#GetAuthorizationContext) about how to configure this policy.
+ > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](get-authorization-context-policy.md) about how to configure this policy.
:::image type="content" source="media/authorizations-how-to/policy-configuration-cropped.png" lightbox="media/authorizations-how-to/policy-configuration.png" alt-text="Screenshot of configuring policy in the portal."::: 1. Test the API.
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
The feature consists of two parts, management and runtime:
* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations.
-* The **runtime** part uses the [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) policy to fetch and store access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, the refresh token is used to try to fetch a new authorization and refresh token from the configured identity provider. If the call to the backend provider is successful, the new authorization token will be used, and both the authorization token and refresh token will be stored encrypted.
+* The **runtime** part uses the [`get-authorization-context`](get-authorization-context-policy.md) policy to fetch and store access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, the refresh token is used to try to fetch a new authorization and refresh token from the configured identity provider. If the call to the backend provider is successful, the new authorization token will be used, and both the authorization token and refresh token will be stored encrypted.
During the policy execution, access to the tokens is also validated using access policies.
The following image shows the process flow to fetch and store authorization and
:::image type="content" source="media/authorizations-overview/get-token-for-backend.svg" alt-text="Diagram that shows the process flow for creating runtime." border="false"::: 1. Client sends request to API Management instance.
-1. The policy [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) checks if the access token is valid for the current authorization.
+1. The policy [`get-authorization-context`](get-authorization-context-policy.md) checks if the access token is valid for the current authorization.
1. If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider. 1. The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management. 1. After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API.
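
A rough sketch of the runtime policy configuration described above is shown below. The provider ID (`github-01`), authorization ID (`auth-01`), and context variable name are hypothetical values, and the `Authorization` context type with its `AccessToken` property is assumed here; adjust the snippet to match your own authorization provider and authorization.

```xml
<get-authorization-context provider-id="github-01" authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
<set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
</set-header>
```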
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
API Management also supports using other Azure resources as an API backend, such
Custom backends require extra configuration to authorize the credentials of requests to the backend service and define API operations. Configure and manage custom backends in the Azure portal, or using Azure APIs or tools.
-After creating a backend, you can reference the backend in your APIs. Use the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy to redirect an incoming API request to the custom backend instead of the default backend for that API.
+After creating a backend, you can reference the backend in your APIs. Use the [`set-backend-service`](set-backend-service-policy.md) policy to redirect an incoming API request to the custom backend instead of the default backend for that API.
> [!NOTE] > When you use the `set-backend-service` policy to redirect requests to a custom backend, refer to the backend by its name (`backend-id`), not by its URL.
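
For example, a minimal sketch of an inbound policy that targets a custom backend might look like the following, where `my-custom-backend` is a hypothetical backend name:

```xml
<set-backend-service backend-id="my-custom-backend" />
```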
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
+
+ Title: Azure API Management policy reference - cache-lookup | Microsoft Docs
+description: Reference for the cache-lookup policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Get from cache
+
+Use the `cache-lookup` policy to perform cache lookup and return a valid cached response when available. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers.
+
+> [!NOTE]
+> This policy must have a corresponding [Store to cache](cache-store-policy.md) policy.
+++
+## Policy statement
+
+```xml
+<cache-lookup vary-by-developer="true | false" vary-by-developer-groups="true | false" caching-type="prefer-external | external | internal" downstream-caching-type="none | private | public" must-revalidate="true | false" allow-private-response-caching="@(expression to evaluate)">
+ <vary-by-header>Accept</vary-by-header>
+ <!-- should be present in most cases -->
+ <vary-by-header>Accept-Charset</vary-by-header>
+ <!-- should be present in most cases -->
+ <vary-by-header>Authorization</vary-by-header>
+ <!-- should be present when allow-private-response-caching is "true"-->
+ <vary-by-header>header name</vary-by-header>
+ <!-- optional, can be repeated -->
+ <vary-by-query-parameter>parameter name</vary-by-query-parameter>
+ <!-- optional, can be repeated -->
+</cache-lookup>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. | No | `false` |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
+| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed. | No | none |
+| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. | No | `true` |
+| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. | Yes | `false` |
+| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). | Yes | `false` |
++
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+|vary-by-header|Add one or more of these elements to start caching responses per value of specified header, such as `Accept`, `Accept-Charset`, `Accept-Encoding`, `Accept-Language`, `Authorization`, `Expect`, `From`, `Host`, `If-Match`.|No|
+|vary-by-query-parameter|Add one or more of these elements to start caching responses per value of specified query parameters. Enter a single or multiple parameters. Use semicolon as a separator. If none are specified, all query parameters are used.|No|
+
+## Usage
++
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend.
+
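+For example, a sketch of a `rewrite-uri` policy that drops query parameters not declared in the template might look like the following; the template path is hypothetical:
+
+```xml
+<rewrite-uri template="/backend-operation" copy-unmatched-params="false" />
+```
+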
+## Examples
+
+### Example with corresponding cache-store policy
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal" >
+ <vary-by-query-parameter>version</vary-by-query-parameter>
+ </cache-lookup>
+ </inbound>
+ <outbound>
+ <cache-store duration="seconds" />
+ <base />
+ </outbound>
+</policies>
+```
+
+### Example using policy expressions
+This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
+
+```xml
+<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
+
+<!-- Copy this snippet into the inbound section -->
+<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="public" must-revalidate="true" >
+ <vary-by-header>Accept</vary-by-header>
+ <vary-by-header>Accept-Charset</vary-by-header>
+</cache-lookup>
+
+<!-- Copy this snippet into the outbound section. Note that cache duration is set to the max-age value provided in the Cache-Control header received from the backend service or to the default value of 5 min if none is found -->
+<cache-store duration="@{
+ var header = context.Response.Headers.GetValueOrDefault("Cache-Control","");
+ var maxAge = Regex.Match(header, @"max-age=(?<maxAge>\d+)").Groups["maxAge"]?.Value;
+ return (!string.IsNullOrEmpty(maxAge))?int.Parse(maxAge):300;
+ }"
+ />
+```
+
+For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
++
+## Related policies
+
+* [API Management caching policies](api-management-caching-policies.md)
+
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
+
+ Title: Azure API Management policy reference - cache-lookup-value | Microsoft Docs
+description: Reference for the cache-lookup-value policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Get value from cache
+Use the `cache-lookup-value` policy to perform cache lookup by key and return a cached value. The key can have an arbitrary string value and is typically provided using a policy expression.
+
+> [!NOTE]
+> This policy must have a corresponding [Store value in cache](cache-store-value-policy.md) policy.
+++
+## Policy statement
+
+```xml
+<cache-lookup-value key="cache key value"
+ default-value="value to use if cache lookup resulted in a miss"
+ variable-name="name of a variable looked up value is assigned to"
+ caching-type="prefer-external | external | internal" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+||--|--|--|
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
+| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. | No | `null` |
+| key | Cache key value to use in the lookup. | Yes | N/A |
+| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. | Yes | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<cache-lookup-value
+ key="@("userprofile-" + context.Variables["enduserid"])"
+ variable-name="userprofile" />
+```
+
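+The following variation is a sketch that also supplies a fallback value to use when the lookup results in a miss; the default value shown is hypothetical:
+
+```xml
+<cache-lookup-value
+   key="@("userprofile-" + context.Variables["enduserid"])"
+   default-value="anonymous"
+   variable-name="userprofile" />
+```
+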
+For more information and examples of this policy, see [Custom caching in Azure API Management](./api-management-sample-cache-by-key.md).
+++
+## Related policies
+
+* [API Management caching policies](api-management-caching-policies.md)
+
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
+
+ Title: Azure API Management policy reference - cache-remove-value | Microsoft Docs
+description: Reference for the cache-remove-value policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Remove value from cache
+The `cache-remove-value` policy deletes a cached item identified by its key. The key can have an arbitrary string value and is typically provided using a policy expression.
++
+## Policy statement
+
+```xml
+<cache-remove-value key="cache key value" caching-type="prefer-external | external | internal" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+||--|--|--|
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
+| key | The key of the previously cached value to be removed from the cache. | Yes | N/A |
+## Usage
++
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<cache-remove-value
+   key="@("userprofile-" + context.Variables["enduserid"])" />
+```
+
+For more information and examples of this policy, see [Custom caching in Azure API Management](./api-management-sample-cache-by-key.md).
+
+## Related policies
+
+* [API Management caching policies](api-management-caching-policies.md)
+
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
+
+ Title: Azure API Management policy reference - cache-store | Microsoft Docs
+description: Reference for the cache-store policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Store to cache
+
+The `cache-store` policy caches responses according to the specified cache settings. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers.
+
+> [!NOTE]
+> This policy must have a corresponding [Get from cache](cache-lookup-policy.md) policy.
++++
+## Policy statement
+
+```xml
+<cache-store duration="seconds" cache-response="true | false" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
+| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. | No | `false` |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Example with corresponding cache-lookup policy
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal" >
+ <vary-by-query-parameter>version</vary-by-query-parameter>
+ </cache-lookup>
+ </inbound>
+ <outbound>
+ <cache-store duration="seconds" />
+ <base />
+ </outbound>
+</policies>
+```
+
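+### Example caching the current response
+
+The following is a minimal sketch that uses the `cache-response` attribute to cache the current HTTP response for 60 seconds, even when its status code isn't `200 OK`:
+
+```xml
+<cache-store duration="60" cache-response="true" />
+```
+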
+### Example using policy expressions
+
+This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive.
+
+```xml
+<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
+
+<!-- Copy this snippet into the inbound section -->
+<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="public" must-revalidate="true" >
+ <vary-by-header>Accept</vary-by-header>
+ <vary-by-header>Accept-Charset</vary-by-header>
+</cache-lookup>
+
+<!-- Copy this snippet into the outbound section. Note that cache duration is set to the max-age value provided in the Cache-Control header received from the backend service or to the default value of 5 min if none is found -->
+<cache-store duration="@{
+ var header = context.Response.Headers.GetValueOrDefault("Cache-Control","");
+ var maxAge = Regex.Match(header, @"max-age=(?<maxAge>\d+)").Groups["maxAge"]?.Value;
+ return (!string.IsNullOrEmpty(maxAge))?int.Parse(maxAge):300;
+ }"
+ />
+```
+
+For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
++
+## Related policies
+
+* [API Management caching policies](api-management-caching-policies.md)
+
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
+
+ Title: Azure API Management policy reference - cache-store-value | Microsoft Docs
+description: Reference for the cache-store-value policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Store value in cache
+The `cache-store-value` policy stores a value in the cache by key. The key can have an arbitrary string value and is typically provided using a policy expression.
+
+> [!NOTE]
+> The operation of storing the value in cache performed by this policy is asynchronous. The stored value can be retrieved using [Get value from cache](cache-lookup-value-policy.md) policy. However, the stored value may not be immediately available for retrieval since the asynchronous operation that stores the value in cache may still be in progress.
+++
+## Policy statement
+
+```xml
+<cache-store-value key="cache key value" value="value to cache" duration="seconds" caching-type="prefer-external | external | internal" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+||--|--|--|
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
+| duration | Value will be cached for the provided duration value, specified in seconds. | Yes | N/A |
+| key | Cache key the value will be stored under. | Yes | N/A |
+| value | The value to be cached. | Yes | N/A |
+
+## Usage
++
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<cache-store-value
+ key="@("userprofile-" + context.Variables["enduserid"])"
+ value="@((string)context.Variables["userprofile"])" duration="100000" />
+```
+
+For more information and examples of this policy, see [Custom caching in Azure API Management](./api-management-sample-cache-by-key.md).
+
+## Related policies
+
+* [API Management caching policies](api-management-caching-policies.md)
+
api-management Check Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/check-header-policy.md
+
+ Title: Azure API Management policy reference - check-header | Microsoft Docs
+description: Reference for the check-header policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Check HTTP header
+
+Use the `check-header` policy to enforce that a request has a specified HTTP header. You can optionally check to see if the header has a specific value or one of a range of allowed values. If the check fails, the policy terminates request processing and returns the HTTP status code and error message specified by the policy.
++
+## Policy statement
+
+```xml
+<check-header name="header name" failed-check-httpcode="code" failed-check-error-message="message" ignore-case="true | false">
+ <value>Value1</value>
+ <value>Value2</value>
+</check-header>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | - | -- | - |
+| name | The name of the HTTP header to check. | Yes | N/A |
+| failed-check-httpcode | HTTP status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
+| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A |
+| ignore-case | Boolean. If set to `true`, case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A |
+
+## Elements
+
+| Element | Description | Required |
+| | | -- |
+| value | Add one or more of these elements to specify allowed HTTP header values. When multiple `value` elements are specified, the check is considered a success if any one of the values is a match. | No |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false">
+ <value>f6dc69a089844cf6b2019bae6d36fac8</value>
+</check-header>
+```
+
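+The following sketch checks a hypothetical custom header against multiple allowed values and ignores case; the check succeeds if either value matches:
+
+```xml
+<check-header name="X-Client-Version" failed-check-httpcode="400" failed-check-error-message="Unsupported client version" ignore-case="true">
+    <value>v1</value>
+    <value>v2</value>
+</check-header>
+```
+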
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Choose Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/choose-policy.md
+
+ Title: Azure API Management policy reference - choose | Microsoft Docs
+description: Reference for the choose policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Control flow
+
+Use the `choose` policy to conditionally apply policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). Use the policy for control flow similar to an if-then-else or a switch construct in a programming language.
+++
+## Policy statement
+
+```xml
+<choose>
+ <when condition="Boolean expression | Boolean constant">
+    <!-- one or more policy statements to be applied if the above condition is true -->
+ </when>
+ <when condition="Boolean expression | Boolean constant">
+    <!-- one or more policy statements to be applied if the above condition is true -->
+ </when>
+ <otherwise>
+    <!-- one or more policy statements to be applied if none of the above conditions are true -->
+ </otherwise>
+</choose>
+```
+
+The `choose` policy must contain at least one `<when/>` element. The `<otherwise/>` element is optional. Conditions in `<when/>` elements are evaluated in order of their appearance within the policy. Policy statements enclosed within the first `<when/>` element whose condition evaluates to `true` are applied. Policies enclosed within the `<otherwise/>` element, if present, are applied if none of the `<when/>` conditions evaluate to `true`.
+
+## Elements
+
+| Element | Description | Required |
+| | - | -- |
+| when | One or more elements specifying the `if` or `ifelse` parts of the `choose` policy. If multiple `when` elements are specified, they are evaluated sequentially. Once the `condition` of a when element evaluates to `true`, no further `when` conditions are evaluated. | Yes |
+| otherwise | The policy snippet to be evaluated if none of the `when` conditions evaluate to `true`. | No |
+
+### when attributes
+
+| Attribute | Description | Required |
+| | | -- |
+| condition | The Boolean expression or Boolean constant to be evaluated when the containing `when` policy statement is evaluated. | Yes |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Modify request and response based on user agent
+
+The following example demonstrates a [set-variable](set-variable-policy.md) policy and two control flow policies.
+
+The set variable policy is in the inbound section and creates an `isMobile` Boolean [context](api-management-policy-expressions.md#ContextVariables) variable that is set to true if the `User-Agent` request header contains the text `iPad` or `iPhone`.
+
+The first control flow policy is also in the inbound section, and conditionally applies one of two [Set query string parameter](set-query-parameter-policy.md) policies depending on the value of the `isMobile` context variable.
+
+The second control flow policy is in the outbound section and conditionally applies the [Convert XML to JSON](xml-to-json-policy.md) policy when `isMobile` is set to `true`.
+
+```xml
+<policies>
+ <inbound>
+ <set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPad") || context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
+ <base />
+ <choose>
+ <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
+ <set-query-parameter name="mobile" exists-action="override">
+ <value>true</value>
+ </set-query-parameter>
+ </when>
+ <otherwise>
+ <set-query-parameter name="mobile" exists-action="override">
+ <value>false</value>
+ </set-query-parameter>
+ </otherwise>
+ </choose>
+ </inbound>
+ <outbound>
+ <base />
+ <choose>
+ <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
+ <xml-to-json kind="direct" apply="always" consider-accept-header="false"/>
+ </when>
+ </choose>
+ </outbound>
+</policies>
+```
+
+### Modify response based on product name
+
+This example shows how to perform content filtering by removing data elements from the response received from the backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
+
+```xml
+<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
+<choose>
+ <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))">
+ <set-body>@{
+ var response = context.Response.Body.As<JObject>();
+ foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
+ response.Property (key).Remove ();
+ }
+ return response.ToString();
+ }
+ </set-body>
+ </when>
+</choose>
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
When you create an Azure API Management service instance in the Azure cloud, Azu
## Prerequisites -- An active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). - A custom domain name that is owned by you or your organization. This article does not provide instructions on how to procure a custom domain name. - Optionally, a valid certificate with a public and private key (.PFX). The subject or subject alternative name (SAN) has to match the domain name (this enables API Management instance to securely expose URLs over TLS).
There are several API Management endpoints to which you can assign a custom doma
### Considerations * You can update any of the endpoints supported in your service tier. Typically, customers update **Gateway** (this URL is used to call the APIs exposed through API Management) and **Developer portal** (the developer portal URL).
+* The default **Gateway** endpoint remains available after you configure a custom Gateway domain name. For other API Management endpoints (such as **Developer portal**) that you configure with a custom domain name, the default endpoint is no longer available.
* Only API Management instance owners can use **Management** and **SCM** endpoints internally. These endpoints are less frequently assigned a custom domain name. * The **Premium** and **Developer** tiers support setting multiple hostnames for the **Gateway** endpoint. * Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier.
There are several API Management endpoints to which you can assign a custom doma
API Management supports custom TLS certificates or certificates imported from Azure Key Vault. You can also enable a free, managed certificate.
-> [!WARNING]
+> [!WARNING]
> If you require certificate pinning, please use a custom domain name and either a custom or Key Vault certificate, not the default certificate or the free, managed certificate. We don't recommend taking a hard dependency on a certificate that you don't manage. # [Custom](#tab/custom)
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
+
+ Title: Azure API Management policy reference - cors | Microsoft Docs
+description: Reference for the cors policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 11/18/2022+++
+# CORS
+
+The `cors` policy adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
++
+## Policy statement
+
+```xml
+<cors allow-credentials="false | true" terminate-unmatched-request="true | false">
+ <allowed-origins>
+ <origin>origin uri</origin>
+ </allowed-origins>
+ <allowed-methods preflight-result-max-age="number of seconds">
+ <method>HTTP verb</method>
+ </allowed-methods>
+ <allowed-headers>
+ <header>header name</header>
+ </allowed-headers>
+ <expose-headers>
+ <header>header name</header>
+ </expose-headers>
+</cors>
+```
+
+## Attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests.|No|`false`|
+|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings.<br/><br/>When `OPTIONS` request is processed as a preflight request and `Origin` header doesn't match policy settings:<br/> - If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response<br/>- If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response. <br/><br/>When `GET` or `HEAD` request includes the `Origin` header (and therefore is processed as a simple cross-origin request), and doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/> - If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|`true`|
+
+## Elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|allowed-origins|Contains `origin` elements that describe the allowed origins for cross-domain requests. `allowed-origins` can contain either a single `origin` element that specifies `*` to allow any origin, or one or more `origin` elements that contain a URI.|Yes|N/A|
+|origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
+|allowed-methods|This element is required if methods other than `GET` or `POST` are allowed. Contains `method` elements that specify the supported HTTP verbs. The value `*` indicates all methods.|No|If this section isn't present, `GET` and `POST` are supported.|
+|method|Specifies an HTTP verb.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
+|allowed-headers|This element contains `header` elements specifying names of the headers that can be included in the request.|Yes|N/A|
+|expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A|
+|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or in `expose-headers` if that section is present.|N/A|
+
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
++
+### allowed-methods attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response.|No|0|
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+ * You may configure the `cors` policy at more than one scope (for example, at the product scope and the global scope). Ensure that the `base` element is configured at the operation, API, and product scopes to inherit needed policies at the parent scopes.
+* Only the `cors` policy is evaluated on the `OPTIONS` request during preflight. Remaining configured policies are evaluated on the approved request.
+
+## About CORS
+
+[CORS](https://developer.mozilla.org/docs/Web/HTTP/CORS) is an HTTP header-based standard that allows a browser and a server to interact and determine whether or not to allow specific cross-origin requests (`XMLHttpRequest` calls made from JavaScript on a web page to other domains). This allows for more flexibility than only allowing same-origin requests, but is more secure than allowing all cross-origin requests.
+
+CORS specifies two types of [cross-origin requests](https://developer.mozilla.org/docs/Web/HTTP/CORS#specifications):
+
+- **Preflighted (or "preflight") requests** - The browser first sends an HTTP request using the `OPTIONS` method to the server, to determine if the actual request is permitted to send. If the server response includes the `Access-Control-Allow-Origin` header that allows access, the browser follows with the actual request.
+
+- **Simple requests** - These requests include one or more extra `Origin` headers but don't trigger a CORS preflight. Only requests using the `GET` and `HEAD` methods and a limited set of request headers are allowed.
++
+## `cors` policy scenarios
+
+Configure the `cors` policy in API Management for the following scenarios:
+
+* Enable the interactive test console in the developer portal. Refer to the [developer portal documentation](./developer-portal-faq.md#cors) for details.
+ > [!NOTE]
+ > When you enable CORS for the interactive console, by default API Management configures the `cors` policy at the global scope.
+
+* Enable API Management to reply to preflight requests or to pass through simple CORS requests when the backends don't provide their own CORS support.
+
+ > [!NOTE]
+ > If a request matches an operation with an `OPTIONS` method defined in the API, preflight request processing logic associated with the `cors` policy will not be executed. Therefore, such operations can be used to implement custom preflight processing logic - for example, to apply the `cors` policy only under certain conditions.
+
+## Common configuration issues
+
+* **Subscription key in header** - If you configure the `cors` policy at the *product* scope, and your API uses subscription key authentication, the policy won't work when the subscription key is passed in a header. As a workaround, modify requests to include a subscription key as a query parameter.
+* **API with header versioning** - If you configure the `cors` policy at the *API* scope, and your API uses a header-versioning scheme, the policy won't work because the version is passed in a header. You may need to configure an alternative versioning method such as a path or query parameter.
+* **Policy order** - You may experience unexpected behavior if the `cors` policy is not the first policy in the inbound section. Select **Calculate effective policy** in the policy editor to check the [policy evaluation order](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) at each scope. Generally, only the first `cors` policy is applied.
+* **Empty 200 OK response** - In some policy configurations, certain cross-origin requests complete with an empty `200 OK` response. This response is expected when `terminate-unmatched-request` is set to its default value of `true` and an incoming request has an `Origin` header that doesn't match an allowed origin configured in the `cors` policy.
+
+## Example
+
+This example demonstrates how to support [preflight requests](https://developer.mozilla.org/docs/Web/HTTP/CORS#preflighted_requests), such as those with custom headers or methods other than `GET` and `POST`. To support custom headers and other HTTP verbs, use the `allowed-methods` and `allowed-headers` sections as shown in the following example.
+
+```xml
+<cors allow-credentials="true">
+ <allowed-origins>
+ <!-- Localhost useful for development -->
+ <origin>http://localhost:8080/</origin>
+ <origin>http://example.com/</origin>
+ </allowed-origins>
+ <allowed-methods preflight-result-max-age="300">
+ <method>GET</method>
+ <method>POST</method>
+ <method>PATCH</method>
+ <method>DELETE</method>
+ </allowed-methods>
+ <allowed-headers>
+ <!-- Examples below show Azure Mobile Services headers -->
+ <header>x-zumo-installation-id</header>
+ <header>x-zumo-application</header>
+ <header>x-zumo-version</header>
+ <header>x-zumo-auth</header>
+ <header>content-type</header>
+ <header>accept</header>
+ </allowed-headers>
+ <expose-headers>
+ <!-- Examples below show Azure Mobile Services headers -->
+ <header>x-zumo-installation-id</header>
+ <header>x-zumo-application</header>
+ </expose-headers>
+</cors>
+```
+
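+The next snippet is a minimal sketch that allows simple cross-origin `GET` and `POST` requests from any origin. As the caution above notes, the `*` wildcard is permissive and should be used carefully:
+
+```xml
+<cors allow-credentials="false">
+    <allowed-origins>
+        <origin>*</origin>
+    </allowed-origins>
+    <allowed-methods>
+        <method>GET</method>
+        <method>POST</method>
+    </allowed-methods>
+    <allowed-headers>
+        <header>*</header>
+    </allowed-headers>
+</cors>
+```
+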
+## Related policies
+
+* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+
api-management Cross Domain Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cross-domain-policy.md
+
+ Title: Azure API Management policy reference - cross-domain | Microsoft Docs
+description: Reference for the cross-domain policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Allow cross-domain calls
+
+Use the `cross-domain` policy to make the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients.
+++
+## Policy statement
+
+```xml
+<cross-domain>
+    <!-- Policy configuration is in the Adobe cross-domain policy file format,
+         see https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/CrossDomain_PolicyFile_Specification.pdf -->
+</cross-domain>
+```
+
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+
+## Elements
+
+Child elements must conform to the [Adobe cross-domain policy file specification](https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/CrossDomain_PolicyFile_Specification.pdf).
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<cross-domain>
+ <cross-domain-policy>
+ <allow-http-request-headers-from domain='*' headers='*' />
+ </cross-domain-policy>
+</cross-domain>
+```
+
+## Related policies
+
+* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
Most configuration changes (for example, VNet, sign-in, product terms) require [
## <a name="cors"></a> I'm getting a CORS error when using the interactive console
-The interactive console makes a client-side API request from the browser. Resolve the CORS problem by adding [a CORS policy](api-management-cross-domain-policies.md#CORS) on your API(s).
+The interactive console makes a client-side API request from the browser. Resolve the CORS problem by adding [a CORS policy](cors-policy.md) on your API(s).
You can check the status of the CORS policy in the **Portal overview** section of your API Management service in the Azure portal. A warning box indicates an absent or misconfigured policy.
api-management Diagnostic Logs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnostic-logs-reference.md
This reference describes settings for API diagnostics logging from an API Manage
| Sampling (%) | decimal | Values from 0 to 100 (percent). <br/> Specifies the percentage of requests that are logged. 0% sampling means zero requests logged, while 100% sampling means all requests logged. Default: 100<br/><br/> For performance impacts of Application Insights logging, see [Performance implications and log sampling](api-management-howto-app-insights.md#performance-implications-and-log-sampling). | | Always log errors | boolean | If this setting is enabled, all failures are logged, regardless of the **Sampling** setting. | Log client IP address | boolean | If this setting is enabled, the client IP address for API requests is logged. |
-| Verbosity | | Specifies the verbosity of the logs and whether custom traces that are configured in [trace](api-management-advanced-policies.md#Trace) policies are logged. <br/><br/>* Error - failed requests, and custom traces of severity `error`<br/>* Information - failed and successful requests, and custom traces of severity `error` and `information`<br/> * Verbose - failed and successful requests, and custom traces of severity `error`, `information`, and `verbose`<br/><br/>Default: Information |
+| Verbosity | | Specifies the verbosity of the logs and whether custom traces that are configured in [trace](trace-policy.md) policies are logged. <br/><br/>* Error - failed requests, and custom traces of severity `error`<br/>* Information - failed and successful requests, and custom traces of severity `error` and `information`<br/> * Verbose - failed and successful requests, and custom traces of severity `error`, `information`, and `verbose`<br/><br/>Default: Information |
| Correlation protocol | | Specifies the protocol used to correlate telemetry sent by multiple components to Application Insights. Default: Legacy <br/><br/>For information, see [Telemetry correlation in Application Insights](../azure-monitor/app/correlation.md). | | Headers to log | list | Specifies the headers that are logged for requests and responses. Default: no headers are logged. | | Number of payload bytes to log | integer | Specifies the number of initial bytes of the body that are logged for requests and responses. Default: 0 |
This reference describes settings for API diagnostics logging from an API Manage
## Next steps * For more information, see the reference for the [Diagnostic](/rest/api/apimanagement/current-ga/diagnostic/) entity in the API Management REST API.
-* Use the [trace](api-management-advanced-policies.md#Trace) policy to add custom traces to Application Insights telemetry, resource logs, or request tracing.
+* Use the [trace](trace-policy.md) policy to add custom traces to Application Insights telemetry, resource logs, or request tracing.
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
+
+ Title: Azure API Management policy reference - emit-metric | Microsoft Docs
+description: Reference for the emit-metric policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Emit custom metrics
+
+The `emit-metric` policy sends custom metrics in the specified format to Application Insights.
+
+> [!NOTE]
+> * Custom metrics are a [preview feature](../azure-monitor/essentials/metrics-custom-overview.md) of Azure Monitor and subject to [limitations](../azure-monitor/essentials/metrics-custom-overview.md#design-limitations-and-considerations).
+> * For more information about the API Management data added to Application Insights, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#what-data-is-added-to-application-insights).
++
+## Policy statement
+
+```xml
+<emit-metric name="name of custom metric" value="value of custom metric" namespace="metric namespace">
+ <dimension name="dimension name" value="dimension value" />
+</emit-metric>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default value |
+| | -- | | -- |
+| name | A string or policy expression. Name of custom metric. | Yes | N/A |
+| namespace | A string or policy expression. Namespace of custom metric. | No | API Management |
+| value | An integer or policy expression. Value of custom metric. | No | 1 |
++
+## Elements
+
+| Element | Description | Required |
+| -- | | -- |
+| dimension | Add one or more of these elements for each dimension included in the custom metric. | Yes |
+
+### dimension attributes
+
+| Attribute | Description | Required | Default value |
+| | -- | | -- |
+| name | A string or policy expression. Name of dimension. | Yes | N/A |
+| value | A string or policy expression. Value of dimension. Can only be omitted if `name` matches one of the default dimensions. If so, value is provided as per dimension name. | No | N/A |
+
+ ### Default dimension names that may be used without value
+
+* API ID
+* Operation ID
+* Product ID
+* User ID
+* Subscription ID
+* Location ID
+* Gateway ID
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+The following example sends a custom metric to count the number of API requests along with user ID, client IP, and API ID as custom dimensions.
+
+```xml
+<policies>
+ <inbound>
+ <emit-metric name="Request" value="1" namespace="my-metrics">
+ <dimension name="User ID" />
+ <dimension name="Client IP" value="@(context.Request.IpAddress)" />
+ <dimension name="API ID" />
+ </emit-metric>
+ </inbound>
+ <outbound>
+ </outbound>
+</policies>
+```
+
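+The next example is a sketch of an outbound-section metric that records the backend response status code as a dimension; the metric and dimension names are hypothetical:
+
+```xml
+<emit-metric name="Backend response" value="1" namespace="my-metrics">
+    <dimension name="API ID" />
+    <dimension name="Status code" value="@(context.Response.StatusCode.ToString())" />
+</emit-metric>
+```
+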
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
You can manage your custom connector in your Power Apps or Power Platform enviro
1. Select the pencil (Edit) icon to edit and test the custom connector. > [!NOTE]
-> To call the API from the Power Apps test console, you need to add the `https://flow.microsoft.com` URL as an origin to the [CORS policy](api-management-cross-domain-policies.md#CORS) in your API Management instance.
+> To call the API from the Power Apps test console, you need to add the `https://flow.microsoft.com` URL as an origin to the [CORS policy](cors-policy.md) in your API Management instance.
## Update a custom connector
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
+
+ Title: Azure API Management policy reference - find-and-replace | Microsoft Docs
+description: Reference for the find-and-replace policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Find and replace string in body
+The `find-and-replace` policy finds a request or response substring and replaces it with a different substring.
+++
+## Policy statement
+
+```xml
+<find-and-replace from="what to replace" to="replacement" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|from|The string to search for.|Yes|N/A|
+|to|The replacement string. Specify a zero length replacement string to remove the search string.|Yes|N/A|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<find-and-replace from="notebook" to="laptop" />
+```
+
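+To remove a substring instead of replacing it, set `to` to an empty string, as in this sketch with a hypothetical search string:
+
+```xml
+<find-and-replace from="internal-note" to="" />
+```
+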
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
+
+ Title: Azure API Management policy reference - forward-request | Microsoft Docs
+description: Reference for the forward-request policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Forward request
+
+The `forward-request` policy forwards the incoming request to the backend service specified in the request [context](api-management-policy-expressions.md#ContextVariables). The backend service URL is specified in the API [settings](./import-and-publish.md) and can be changed using the [set backend service](api-management-transformation-policies.md) policy.
+
+> [!IMPORTANT]
+> * This policy is required to forward requests to an API backend. By default, API Management sets up this policy at the global scope.
+> * Removing this policy results in the request not being forwarded to the backend service. Policies in the outbound section are evaluated immediately upon the successful completion of the policies in the inbound section.
++
+## Policy statement
+
+```xml
+<forward-request timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| | -- | -- | - |
+| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. | No | 300 |
+| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. | No | `false` |
+| buffer-request-body | When set to `true`, request is buffered and will be reused on [retry](retry-policy.md). | No | `false` |
+| buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. | No | `true` |
+| fail-on-error-status-code | When set to `true`, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. | No | `false` |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Forward request with timeout interval
+
+The following API level policy forwards all API requests to the backend service with a timeout interval of 60 seconds.
+
+```xml
+<!-- api level -->
+<policies>
+ <inbound>
+ <base/>
+ </inbound>
+ <backend>
+ <forward-request timeout="60"/>
+ </backend>
+ <outbound>
+ <base/>
+ </outbound>
+</policies>
+
+```
+
+### Inherit policy from parent scope
+
+This operation level policy uses the `base` element to inherit the backend policy from the parent API level scope.
+
+```xml
+<!-- operation level -->
+<policies>
+ <inbound>
+ <base/>
+ </inbound>
+ <backend>
+ <base/>
+ </backend>
+ <outbound>
+ <base/>
+ </outbound>
+</policies>
+
+```
+
+### Do not inherit policy from parent scope
+
+This operation level policy explicitly forwards all requests to the backend service with a timeout of 120 seconds and doesn't inherit the parent API level backend policy. If the backend service responds with an error status code from 400 to 599 inclusive, the [on-error](api-management-error-handling-policies.md) section is triggered.
+
+```xml
+<!-- operation level -->
+<policies>
+ <inbound>
+ <base/>
+ </inbound>
+ <backend>
+ <forward-request timeout="120" fail-on-error-status-code="true" />
+ <!-- effective policy. note the absence of <base/> -->
+ </backend>
+ <outbound>
+ <base/>
+ </outbound>
+</policies>
+
+```
+
+### Do not forward requests to backend
+
+This operation level policy does not forward requests to the backend service.
+
+```xml
+<!-- operation level -->
+<policies>
+ <inbound>
+ <base/>
+ </inbound>
+ <backend>
+ <!-- no forwarding to backend -->
+ </backend>
+ <outbound>
+ <base/>
+ </outbound>
+</policies>
+
+```
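+
+### Forward request without response buffering
+
+As described for the `buffer-response` attribute, backends that stream data (for example, server-sent events) need each chunk relayed to the caller immediately. The following sketch, using an assumed 120-second timeout, illustrates that configuration at the API level:
+
+```xml
+<!-- api level -->
+<policies>
+    <inbound>
+        <base/>
+    </inbound>
+    <backend>
+        <forward-request timeout="120" buffer-response="false"/>
+    </backend>
+    <outbound>
+        <base/>
+    </outbound>
+</policies>
+```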
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
Use API Management policies to ensure that your API Management instance accepts
### Restrict incoming IP addresses
-You can configure an inbound [ip-filter](api-management-access-restriction-policies.md#RestrictCallerIPs) policy in API Management to allow only Front Door-related traffic, which includes:
+You can configure an inbound [ip-filter](ip-filter-policy.md) policy in API Management to allow only Front Door-related traffic, which includes:
* **Front Door's backend IP address space** - Allow IP addresses corresponding to the *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519).
You can configure an inbound [ip-filter](api-management-access-restriction-polic
### Check Front Door header
-Requests routed through Front Door include headers specific to your Front Door configuration. You can configure the [check-header](api-management-access-restriction-policies.md#CheckHTTPHeader) policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the **Front Door ID**, which is shown in the portal on the **Overview** page of the Front Door profile.
+Requests routed through Front Door include headers specific to your Front Door configuration. You can configure the [check-header](check-header-policy.md) policy to filter incoming requests based on the unique value of the `X-Azure-FDID` HTTP request header that is sent to API Management. This header value is the **Front Door ID**, which is shown in the portal on the **Overview** page of the Front Door profile.
In the following policy example, the Front Door ID is specified using a [named value](api-management-howto-properties.md) named `FrontDoorId`.
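A sketch of that configuration, assuming failed checks should return a `403` response, might look like the following:

```xml
<check-header name="X-Azure-FDID" failed-check-httpcode="403" failed-check-error-message="Invalid request." ignore-case="false">
    <value>{{FrontDoorId}}</value>
</check-header>
```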
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
+
+ Title: Azure API Management policy reference - get-authorization-context | Microsoft Docs
+description: Reference for the get-authorization-context policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Get authorization context
+
+Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
+
+The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
+
+If `identity-type=jwt` is configured, a JWT token must be supplied and validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
+++
+## Policy statement
+
+```xml
+<get-authorization-context
+ provider-id="authorization provider id"
+ authorization-id="authorization id"
+ context-variable-name="variable name"
+ identity-type="managed | jwt"
+ identity="JWT bearer token"
+ ignore-error="true | false" />
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | -- |
+| provider-id | The authorization provider resource identifier. | Yes | N/A |
+| authorization-id | The authorization resource identifier. | Yes | N/A |
+| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | N/A |
+| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | `managed` |
+| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | N/A |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | `false` |
+
+### Authorization object
+
+The Authorization context variable receives an object of type `Authorization`.
+
+```c#
+class Authorization
+{
+ public string AccessToken { get; }
+ public IReadOnlyDictionary<string, object> Claims { get; }
+}
+```
+
+| Property Name | Description |
+| -- | -- |
+| AccessToken | Bearer access token to authorize a backend HTTP request. |
+| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated
+
+## Examples
+
+### Get token back
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+### Get token back with dynamically set attributes
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationProviderId"))"
+ authorization-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationId"))" context-variable-name="auth-context"
+ ignore-error="false"
+ identity-type="managed" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+### Attach the token to the backend call
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ ignore-error="false" />
+<!-- Attach the token to the backend call -->
+<set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+</set-header>
+```
+
+### Get token from incoming request and return token
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="jwt"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Graphql Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-policies.md
- Title: Azure API Management policies for GraphQL APIs | Microsoft Docs
-description: Reference for Azure API Management policies to validate and resolve GraphQL API queries. Provides policy usage, settings, and examples.
---- Previously updated : 07/08/2022---
-# API Management policies for GraphQL APIs
-
-This article provides a reference for API Management policies to validate and resolve queries to GraphQL APIs.
--
-## GraphQL API policies
-
-- [Validate GraphQL request](#validate-graphql-request) - Validates and authorizes a request to a GraphQL API.
-- [Set GraphQL resolver](#set-graphql-resolver) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
-
-## Validate GraphQL request
-
-The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
-
-**Permissions**
-Because GraphQL queries use a flattened schema:
-* Permissions may be applied at any leaf node of an output type:
- * Mutation, query, or subscription
- * Individual field in a type declaration.
-* Permissions may not be applied to:
- * Input types
- * Fragments
- * Unions
- * Interfaces
- * The schema element
-
-**Authorize element**
-Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
-* Each rule can optionally provide a different action.
-* Use policy expressions to specify conditional actions.
-
-**Introspection system**
-The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
--
-### Policy statement
-
-```xml
-<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
- <authorize>
- <rule path="query path, for example: '/listUsers' or '/__*'" action="string or policy expression that evaluates to 'allow|remove|reject|ignore'" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Example: Query validation
-
-This example applies the following validation and authorization rules to a GraphQL query:
-* Requests larger than 100 kb or with query depth greater than 4 are rejected.
-* Requests to the introspection system are rejected.
-* The `/Missions/name` field is removed from requests containing more than two headers.
-
-```xml
-<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
- <authorize>
- <rule path="/__*" action="reject" />
- <rule path="/Missions/name" action="@(context.Request.Headers.Count > 2 ? "remove" : "allow")" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Example: Mutation validation
-
-This example applies the following validation and authorization rules to a GraphQL mutation:
-* Requests larger than 100 kb or with query depth greater than 4 are rejected.
-* Requests to mutate the `deleteUser` field are denied except when the request is from IP address `198.51.100.1`.
-
-```xml
-<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
- <authorize>
- <rule path="/Mutation/deleteUser" action="@(context.Request.IpAddress != "198.51.100.1" ? "reject" : "allow")" />
- </authorize>
-</validate-graphql-request>
-```
-
-### Elements
-
-| Name | Description | Required |
-| -- | -- | -- |
-| `validate-graphql-request` | Root element. | Yes |
-| `authorize` | Add this element to provide field-level authorization with both request- and field-level errors. | No |
-| `rule` | Add one or more of these elements to authorize specific query paths. Each rule can optionally specify a different [action](#request-actions). | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `error-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `max-size` | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| `max-depth` | An integer. Maximum query depth. | No | 6 |
-| `path` | Path to execute authorization validation on. It must follow the pattern: `/type/field`. | Yes | N/A |
-| `action` | [Action](#request-actions) to perform if the rule applies. May be specified conditionally using a policy expression. | No | allow |
-
-### Request actions
-
-Available actions are described in the following table.
-
-|Action |Description |
-| -- | -- |
-|`reject` | A request error happens, and the request is not sent to the back end. Additional rules if configured are not applied. |
-|`remove` | A field error happens, and the field is removed from the request. |
-|`allow` | The field is passed to the back end. |
-|`ignore` | The rule is not valid for this case and the next rule is applied. |
-
-### Error handling
-
-Failure to validate against the GraphQL schema, or a failure for the request's size or depth, is a request error and results in the request being failed with an errors block (but no data block).
-
-Similar to the [`Context.LastError`](api-management-error-handling-policies.md#lasterror) property, all GraphQL validation errors are automatically propagated in the `GraphQLErrors` variable. If the errors need to be propagated separately, you can specify an error variable name. Errors are pushed onto the `error` variable and the `GraphQLErrors` variable.
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
-## Set GraphQL resolver
-
-The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API).
--
-* This policy is invoked only when a matching GraphQL query is executed.
-* The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition.
---
-### Policy statement
-
-```xml
-<set-graphql-resolver parent-type="type" field="field">
- <http-data-source>
- <http-request>
- <set-method>...set-method policy configuration...</set-method>
- <set-url>URL</set-url>
- <set-header>...set-header policy configuration...</set-header>
- <set-body>...set-body policy configuration...</set-body>
- <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate>
- </http-request>
- <http-response>
- <json-to-xml>...json-to-xml policy configuration...</json-to-xml>
- <xml-to-json>...xml-to-json policy configuration...</xml-to-json>
- <find-and-replace>...find-and-replace policy configuration...</find-and-replace>
- </http-response>
- </http-data-source>
-</set-graphql-resolver>
-```
-
-### Elements
-
-| Name | Description | Required |
-| -- | -- | -- |
-| `set-graphql-resolver` | Root element. | Yes |
-| `http-data-source` | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
-| `http-request` | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes |
-| `set-method`| Method of the resolver's HTTP request, configured using the [set-method](api-management-advanced-policies.md#SetRequestMethod) policy. | Yes |
-| `set-url` | URL of the resolver's HTTP request. | Yes |
-| `set-header` | Header set in the resolver's HTTP request, configured using the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy. | No |
-| `set-body` | Body set in the resolver's HTTP request, configured using the [set-body](api-management-transformation-policies.md#SetBody) policy. | No |
-| `authentication-certificate` | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](api-management-authentication-policies.md#ClientCertificate) policy. | No |
-| `http-response` | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No |
-| `json-to-xml` | Transforms the resolver's HTTP response using the [json-to-xml](api-management-transformation-policies.md#ConvertJSONtoXML) policy. | No |
-| `xml-to-json` | Transforms the resolver's HTTP response using the [xml-to-json](api-management-transformation-policies.md#ConvertJSONtoXML) policy. | No |
-| `find-and-replace` | Transforms the resolver's HTTP response using the [find-and-replace](api-management-transformation-policies.md#Findandreplacestringinbody) policy. | No |
--
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `parent-type`| An object type in the GraphQL schema. | Yes | N/A |
-| `field`| A field of the specified `parent-type` in the GraphQL schema. | Yes | N/A |
-
-> [!NOTE]
-> Currently, the values of `parent-type` and `field` aren't validated by this policy. If they aren't valid, the policy is ignored, and the GraphQL query is forwarded to a GraphQL endpoint (if one is configured).
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** backend
-- **Policy scopes:** all scopes
-
-### GraphQL Context
-
-* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request:
- * `context.ParentResult` is set to the parent object for the current resolver execution.
- * The HTTP request context contains arguments that are passed in the GraphQL query as its body.
- * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request.
-The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with `<set-graphql-resolver>` policies.
-
-#### ParentResult
-
-The `context.ParentResult` is set to the parent object for the current resolver execution. Consider the following partial schema:
-
-``` graphql
-type Comment {
- id: ID!
- owner: string!
- content: string!
-}
-
-type Blog {
- id: ID!
- Title: string!
- content: string!
- comments: [Comment]!
- comment(id: ID!): Comment
-}
-
-type Query {
- getBlog(): [Blog]!
- getBlog(id: ID!): Blog
-}
-```
-
-Also, consider a GraphQL query for all the information for a specific blog:
-
-``` graphql
-query {
- getBlog(id: 1) {
- title
- content
- comments {
- id
- owner
- content
- }
- }
-}
-```
-
-If you set a resolver for `parent-type="Blog" field="comments"`, you will want to understand which blog ID to use. You can get the ID of the blog using `context.ParentResult.AsJObject()["id"].ToString()`. The policy for configuring this resolver would resemble:
-
-``` xml
-<set-graphql-resolver parent-type="Blog" field="comments">
- <http-data-source>
- <http-request>
- <set-method>GET</set-method>
- <set-url>@{
- var blogId = context.ParentResult.AsJObject()["id"].ToString();
- return $"https://data.contoso.com/api/blog/{blogId}";
- }</set-url>
- </http-request>
- </http-data-source>
-</set-graphql-resolver>
-```
-
-#### Arguments
-
-The arguments for a parameterized GraphQL query are added to the body of the request. For example, consider the following two queries:
-
-``` graphql
-query($id: Int) {
- getComment(id: $id) {
- content
- }
-}
-
-query {
- getComment(id: 2) {
- content
- }
-}
-```
-
-These queries are two ways of calling the `getComment` resolver. GraphQL sends the following JSON payload:
-
-``` json
-{
- "query": "query($id: Int) { getComment(id: $id) { content } }",
- "variables": { "id": 2 }
-}
-
-{
- "query": "query { getComment(id: 2) { content } }"
-}
-```
-
-When the resolver is executed, the `arguments` property is added to the body. You can define the resolver as follows:
-
-``` xml
-<set-graphql-resolver parent-type="Blog" field="comments">
- <http-data-source>
- <http-request>
- <set-method>GET</set-method>
- <set-url>@{
- var commentId = context.Request.Body.As<JObject>(true)["arguments"]["id"];
- return $"https://data.contoso.com/api/comment/{commentId}";
- }</set-url>
- </http-request>
- </http-data-source>
-</set-graphql-resolver>
-```
-
-### More examples
-
-#### Resolver for GraphQL query
-
-The following example resolves a query by making an HTTP `GET` call to a backend data source.
-
-##### Example schema
-
-```
-type Query {
- users: [User]
-}
-
-type User {
- id: String!
- name: String!
-}
-```
-
-##### Example policy
-
-```xml
-<set-graphql-resolver parent-type="Query" field="users">
- <http-data-source>
- <http-request>
- <set-method>GET</set-method>
- <set-url>https://data.contoso.com/get/users</set-url>
- </http-request>
- </http-data-source>
-</set-graphql-resolver>
-```
-
-#### Resolver for a GraphQL query that returns a list, using a liquid template
-
-The following example uses a liquid template, supported for use in the [set-body](api-management-transformation-policies.md#SetBody) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response.
-
-##### Example schema
-
-```
-type Query {
- users: [User]
-}
-
-type User {
- id: String!
- name: String!
-}
-```
-
-##### Example policy
-
-```xml
-<set-graphql-resolver parent-type="Query" field="users">
- <http-data-source>
- <http-request>
- <set-method>GET</set-method>
- <set-url>https://data.contoso.com/users</set-url>
- </http-request>
- <http-response>
- <set-body template="liquid">
- [
- {% JSONArrayFor elem in body %}
- {
- "name": "{{elem.username}}"
- }
- {% endJSONArrayFor %}
- ]
- </set-body>
- </http-response>
- </http-data-source>
-</set-graphql-resolver>
-```
-
-#### Resolver for GraphQL mutation
-
-The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body. The body that is sent will look like the following JSON:
-
-``` json
-{
- "name": "the-provided-name"
-}
-```
-
-##### Example schema
-
-```
-type Query {
- users: [User]
-}
-
-type Mutation {
- makeUser(name: String!): User
-}
-
-type User {
- id: String!
- name: String!
-}
-```
-
-##### Example policy
-
-```xml
-<set-graphql-resolver parent-type="Mutation" field="makeUser">
- <http-data-source>
- <http-request>
- <set-method>POST</set-method>
- <set-url> https://data.contoso.com/user/create </set-url>
- <set-header name="Content-Type" exists-action="override">
- <value>application/json</value>
- </set-header>
- <set-body>@{
- var args = context.Request.Body.As<JObject>(true)["arguments"];
- JObject jsonObject = new JObject();
- jsonObject.Add("name", args["name"]);
- return jsonObject.ToString();
- }</set-body>
- </http-request>
- </http-data-source>
-</set-graphql-resolver>
-```
-
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
If you want to expose an existing GraphQL endpoint as an API, see [Import a Grap
## Configure resolver
-Configure the [set-graphql-resolver](graphql-policies.md#set-graphql-resolver) policy to map a field in the schema to an existing HTTP endpoint.
+Configure the [set-graphql-resolver](set-graphql-resolver-policy.md) policy to map a field in the schema to an existing HTTP endpoint.
Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query.
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-service-fabric-backend.md
For steps to add a certificate to your API Management instance, see [How to secu
## Use the backend
-To use a custom backend, reference it using the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy. This policy transforms the default backend service base URL of an incoming API request to a specified backend, in this case the Service Fabric backend.
+To use a custom backend, reference it using the [`set-backend-service`](set-backend-service-policy.md) policy. This policy transforms the default backend service base URL of an incoming API request to a specified backend, in this case the Service Fabric backend.
The `set-backend-service` policy can be useful with an existing API to transform an incoming request to a different backend than the one specified in the API settings. For demonstration purposes in this article, create a test API and set the policy to direct API requests to the Service Fabric backend.
To test the integration of API Management with the cluster, add the correspondin
### Configure `set-backend-service` policy
-Add the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy to the test API.
+Add the [`set-backend-service`](set-backend-service-policy.md) policy to the test API.
1. On the **Design** tab, in the **Inbound processing** section, select the code editor (**</>**) icon. 1. Position the cursor inside the **&lt;inbound&gt;** element
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Follow these guidelines when using API Management to reach a backend API that im
This configuration is needed to override the idle session timeout of 4 minutes that is enforced by the Azure Load Balancer, which is used in the API Management infrastructure.
-* **Relay events immediately to clients** - Turn off response buffering on the [`forward-request` policy](api-management-advanced-policies.md#ForwardRequest) so that events are immediately relayed to the clients. For example:
+* **Relay events immediately to clients** - Turn off response buffering on the [`forward-request` policy](forward-request-policy.md) so that events are immediately relayed to the clients. For example:
```xml <forward-request timeout="120" fail-on-error-status-code="true" buffer-response="false"/> ```
-* **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validation-policies.md#validate-content) can also buffer response content and shouldn't be used with APIs that implement SSE.
+* **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validate-content-policy.md) can also buffer response content and shouldn't be used with APIs that implement SSE.
* **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md).
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
> Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API! > [!TIP]
- > If you're using the API Management consumption tier then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy is not supported today for the "Consumption" tier), you can Limit by call rate quota see [here](./api-management-access-restriction-policies.md#LimitCallRate).
+ > If you're using the API Management Consumption tier, rate limiting by key (for example, by JWT subject or incoming IP address) isn't supported today. Instead, you can limit by call rate quota; see the [rate-limit](rate-limit-policy.md) policy.
> As this example is a JavaScript Single Page Application, we use the API Management Key only for rate-limiting and billing calls. The actual Authorization and Authentication is handled by Azure AD B2C, and is encapsulated in the JWT, which gets validated twice, once by API Management, and then by the backend Azure Function. ## Upload the JavaScript SPA sample to static storage
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
+
+ Title: Azure API Management policy reference - include-fragment | Microsoft Docs
+description: Reference for the include-fragment policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Include fragment
+
+The `include-fragment` policy inserts the contents of a previously created [policy fragment](policy-fragments.md) in the policy definition. A policy fragment is a centrally managed, reusable XML policy snippet that can be included in policy definitions in your API Management instance.
+
+The policy inserts the policy fragment as-is at the location you select in the policy definition.
++
+## Policy statement
+
+```xml
+<include-fragment fragment-id="fragment" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | -- |
+| fragment-id | A string. Policy expression allowed. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+In the following example, the policy fragment named *myFragment* is added in the inbound section of a policy definition.
+
+```xml
+<inbound>
+ <include-fragment fragment-id="myFragment" />
+ <base />
+</inbound>
+[...]
+```
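+
+Because `fragment-id` accepts a policy expression, a fragment can also be selected at runtime. The following sketch assumes two hypothetical fragments named *debug-logging* and *standard-logging*:
+
+```xml
+<inbound>
+    <include-fragment fragment-id="@(context.Request.Headers.ContainsKey("x-debug") ? "debug-logging" : "standard-logging")" />
+    <base />
+</inbound>
+```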
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
+
+ Title: Azure API Management policy reference - invoke-dapr-binding | Microsoft Docs
+description: Reference for the invoke-dapr-binding policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Trigger output binding
+
+The `invoke-dapr-binding` policy instructs the API Management gateway to trigger an outbound Dapr [binding](https://github.com/dapr/docs/blob/master/README.md). The policy accomplishes this by making an HTTP POST request to `http://localhost:3500/v1.0/bindings/{{bind-name}}`, replacing the template parameter and adding the content specified in the policy statement.
+
+The policy assumes that Dapr runtime is running in a sidecar container in the same pod as the gateway. Dapr runtime is responsible for invoking the external resource represented by the binding. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
++
+## Policy statement
+
+```xml
+<invoke-dapr-binding name="bind-name" operation="op-name" ignore-error="false | true" response-variable-name="resp-var-name" timeout="in seconds" template="Liquid" content-type="application/json">
+ <metadata>
+ <item key="item-name"><!-- item-value --></item>
+ </metadata>
+ <data>
+ <!-- message content -->
+ </data>
+</invoke-dapr-binding>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | -- |
+| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. | Yes | N/A |
+| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No | None |
+| ignore-error | If set to `true`, instructs the policy not to trigger the ["on-error"](api-management-error-handling-policies.md) section upon receiving an error from the Dapr runtime. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
+| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
+| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) self-hosted
+
+### Usage notes
+
+Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
++
+## Example
+
+The following example demonstrates triggering of an outbound binding named "external-system" with an operation named "create", metadata consisting of two key/value items named "source" and "client-ip", and the body coming from the original request. The response received from the Dapr runtime is captured in the "bind-response" entry of the Variables collection in the [context](api-management-policy-expressions.md#ContextVariables) object.
+
+If the Dapr runtime fails for some reason and responds with an error, the "on-error" section is triggered and the response received from the Dapr runtime is returned to the caller verbatim. Otherwise, the default `200 OK` response is returned.
+
+The "backend" section is empty and the request is not forwarded to the backend.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <invoke-dapr-binding
+ name="external-system"
+ operation="create"
+ response-variable-name="bind-response">
+ <metadata>
+ <item key="source">api-management</item>
+ <item key="client-ip">@(context.Request.IpAddress )</item>
+ </metadata>
+ <data>
+ @(context.Request.Body.As<string>() )
+ </data>
+ </invoke-dapr-binding>
+ </inbound>
+ <backend>
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ <return-response response-variable-name="bind-response" />
+ </on-error>
+</policies>
+```
+
+## Related policies
+
+* [API Management Dapr integration policies](api-management-dapr-policies.md)
+
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
+
+ Title: Azure API Management policy reference - ip-filter | Microsoft Docs
+description: Reference for the ip-filter policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022++
+# Restrict caller IPs
+
+The `ip-filter` policy filters (allows/denies) calls from specific IP addresses and/or address ranges.
++
+## Policy statement
+
+```xml
+<ip-filter action="allow | forbid">
+ <address>address</address>
+ <address-range from="address" to="address" />
+</ip-filter>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | - | -- | - |
+| address-range from="address" to="address" | A range of IP addresses to allow or deny access for. | Required when the `address-range` element is used. | N/A |
+| action | Specifies whether calls should be allowed (`allow`) or not (`forbid`) for the specified IP addresses and ranges. | Yes | N/A |
+
+## Elements
+
+| Element | Description | Required |
+| -- | -- | -- |
+| address | Add one or more of these elements to specify a single IP address on which to filter. | At least one `address` or `address-range` element is required. |
+| address-range | Add one or more of these elements to specify a range of IP addresses `from` "address" `to` "address" on which to filter. | At least one `address` or `address-range` element is required. |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+If you configure this policy at more than one scope, IP filtering is applied in the order of [policy evaluation](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) in your policy definition.
+
+## Example
+
+In the following example, the policy only allows requests coming either from the single IP address or range of IP addresses specified.
+
+```xml
+<ip-filter action="allow">
+ <address>13.66.201.169</address>
+ <address-range from="13.66.140.128" to="13.66.140.143" />
+</ip-filter>
+```
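+
+Conversely, setting `action="forbid"` denies calls from the listed addresses while allowing all others. A sketch using the same illustrative addresses:
+
+```xml
+<ip-filter action="forbid">
+    <address>13.66.201.169</address>
+    <address-range from="13.66.140.128" to="13.66.140.143" />
+</ip-filter>
+```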
+
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
+
+ Title: Azure API Management policy reference - json-to-xml | Microsoft Docs
+description: Reference for the json-to-xml policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Convert JSON to XML
+The `json-to-xml` policy converts a request or response body from JSON to XML.
++
+## Policy statement
+
+```xml
+<json-to-xml
+ apply="always | content-type-json"
+ consider-accept-header="true | false"
+ parse-date="true | false"
+ namespace-separator="separator character"
+ namespace-prefix="namespace prefix"
+ attribute-block-name="name" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | -- |
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always apply conversion.<br />- `content-type-json` - convert only if response Content-Type header indicates presence of JSON.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if XML is requested in request Accept header.<br />- `false` - always apply conversion.|No|`true`|
+|parse-date|When set to `false` date values are simply copied during transformation.|No|`true`|
+|namespace-separator|The character to use as a namespace separator.|No|Underscore|
+|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations.|No|N/A|
+|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+Consider the following policy:
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ </inbound>
+ <outbound>
+ <base />
+ <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" namespace-prefix="xmlns" attribute-block-name="#attrs" />
+ </outbound>
+</policies>
+```
+
+If the backend returns the following JSON:
+
+``` json
+{
+ "soapenv:Envelope": {
+ "xmlns:soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
+ "xmlns:v1": "http://localdomain.com/core/v1",
+ "soapenv:Header": {},
+ "soapenv:Body": {
+ "v1:QueryList": {
+ "#attrs": {
+ "queryName": "test"
+ },
+ "v1:QueryItem": {
+ "name": "dummy text"
+ }
+ }
+ }
+ }
+}
+```
+
+The XML response to the client will be:
+
+``` xml
+<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://localdomain.com/core/v1">
+ <soapenv:Header />
+ <soapenv:Body>
+ <v1:QueryList queryName="test">
+ <name>dummy text</name>
+ </v1:QueryList>
+ </soapenv:Body>
+</soapenv:Envelope>
+```
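+
+For the common case where conversion should happen only when the backend returns JSON and the client asks for XML, a minimal sketch relying on the default `consider-accept-header` behavior is:
+
+```xml
+<outbound>
+    <base />
+    <json-to-xml apply="content-type-json" consider-accept-header="true" />
+</outbound>
+```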
+++
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
+
+ Title: Azure API Management policy reference - jsonp | Microsoft Docs
+description: Reference for the jsonp policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# JSONP
+
+The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. JSONP is a method used in JavaScript programs to request data from a server in a different domain. JSONP bypasses the limitation enforced by most web browsers where access to web pages must be in the same domain.
++
+## Policy statement
+
+```xml
+<jsonp callback-parameter-name="callback function name" />
+```
+
+## Attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|callback-parameter-name|The cross-domain JavaScript function call prefixed with the fully qualified domain name where the function resides.|Yes|N/A|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+## Example
+
+```xml
+<jsonp callback-parameter-name="cb" />
+```
+
+If you call the method without the callback parameter (for example, `?cb=XYZ`), it returns plain JSON, without a function call wrapper.
+
+If you add the callback parameter `?cb=XYZ`, it returns a JSONP result that wraps the original JSON result in a call to the callback function, such as `XYZ('<json result goes here>');`.
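+
+Placed in the outbound section of a policy definition, the policy might look like the following sketch:
+
+```xml
+<outbound>
+    <base />
+    <jsonp callback-parameter-name="cb" />
+</outbound>
+```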
+
+## Related policies
+
+* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
+
+ Title: Azure API Management policy reference - limit-concurrency | Microsoft Docs
+description: Reference for the limit-concurrency policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Limit concurrency
+
+The `limit-concurrency` policy prevents enclosed policies from executing for more than the specified number of requests at any one time. When that number is exceeded, new requests fail immediately with the `429 Too Many Requests` status code.
++
+## Policy statement
+
+```xml
+<limit-concurrency key="expression" max-count="number">
+ <!-- nested policy statements -->
+</limit-concurrency>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | -- |
+| key | A string. Policy expression allowed. Specifies the concurrency scope. Can be shared by multiple policies. | Yes | N/A |
+| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. | Yes | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+The following example demonstrates how to limit number of requests forwarded to a backend based on the value of a context variable.
+
+```xml
+<policies>
+ <inbound>…</inbound>
+ <backend>
+ <limit-concurrency key="@((string)context.Variables["connectionId"])" max-count="3">
+ <forward-request timeout="120"/>
+ </limit-concurrency>
+ </backend>
+ <outbound>…</outbound>
+</policies>
+```
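+
+A fixed string key can instead be used to share a single concurrency limit across all requests at the scope where the policy is applied. The following sketch, with an assumed limit of 10 concurrent backend calls, illustrates this:
+
+```xml
+<backend>
+    <limit-concurrency key="backend-concurrency" max-count="10">
+        <forward-request timeout="120"/>
+    </limit-concurrency>
+</backend>
+```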
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
+
+ Title: Azure API Management policy reference - log-to-eventhub | Microsoft Docs
+description: Reference for the log-to-eventhub policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Log to event hub
+
+The `log-to-eventhub` policy sends messages in the specified format to an event hub defined by a [Logger](/rest/api/apimanagement/current-ga/logger) entity. As its name implies, the policy is used for saving selected request or response context information for online or offline analysis.
+
+> [!NOTE]
+> For a step-by-step guide on configuring an event hub and logging events, see [How to log API Management events with Azure Event Hubs](./api-management-howto-log-event-hubs.md).
+++
+## Policy statement
+
+```xml
+<log-to-eventhub logger-id="id of the logger entity" partition-id="index of the partition where messages are sent" partition-key="value used for partition assignment">
+ Expression returning a string to be logged
+</log-to-eventhub>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | - | -- | -|
+| logger-id | The ID of the Logger registered with your API Management service. | Yes | N/A |
+| partition-id | Specifies the index of the partition where messages are sent. | Optional. Do not use if `partition-key` is used. | N/A |
+| partition-key | Specifies the value used for partition assignment when messages are sent. | Optional. Do not use if `partition-id` is used. | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
+* The maximum supported message size that can be sent to an event hub from this policy is 200 kilobytes (KB). A larger message will be automatically truncated to 200 KB before transfer to an event hub.
+
+## Example
+
+Any string can be used as the value to be logged in Event Hubs. In this example the date and time, deployment service name, request ID, IP address, and operation name for all inbound calls are logged to the event hub Logger registered with the `contoso-logger` ID.
+
+```xml
+<policies>
+ <inbound>
+ <log-to-eventhub logger-id ='contoso-logger'>
+ @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress, context.Operation.Name) )
+ </log-to-eventhub>
+ </inbound>
+ <outbound>
+ </outbound>
+</policies>
+```
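+
+If messages should always be sent to a specific partition, set the `partition-id` attribute. The following sketch, using an assumed partition index of 1, logs only the timestamp and operation name:
+
+```xml
+<log-to-eventhub logger-id="contoso-logger" partition-id="1">
+    @( string.Join(",", DateTime.UtcNow, context.Operation.Name) )
+</log-to-eventhub>
+```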
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API1:2019 Broken Object Level Authorization
* Implement a custom policy to map identifiers from request to backend and from backend to client, so that internal identifiers aren't exposed.
- In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send request](api-management-advanced-policies.md#SendRequest) policy.
+ In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send request](send-request-policy.md) policy.
-* For GraphQL scenarios, enforce object-level authorization through the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy, using the `authorize` element.
+* For GraphQL scenarios, enforce object-level authorization through the [validate GraphQL request](validate-graphql-request-policy.md) policy, using the `authorize` element.
## Broken user authentication
Use API Management for user authentication and authorization:
* **Authentication** - API Management supports the following [authentication methods](api-management-authentication-policies.md):
- * [Basic authentication](api-management-authentication-policies.md#Basic) policy - Username and password credentials.
+ * [Basic authentication](authentication-basic-policy.md) policy - Username and password credentials.
* [Subscription key](api-management-subscriptions.md) - A subscription key provides a similar level of security as basic authentication and may not be sufficient alone. If the subscription key is compromised, an attacker may get unlimited access to the system.
- * [Client certificate](api-management-authentication-policies.md#ClientCertificate) policy - Using client certificates is more secure than basic credentials or subscription key, but it doesn't allow the flexibility provided by token-based authorization protocols such as OAuth 2.0.
+ * [Client certificate](authentication-certificate-policy.md) policy - Using client certificates is more secure than basic credentials or subscription key, but it doesn't allow the flexibility provided by token-based authorization protocols such as OAuth 2.0.
-* **Authorization** - API Management supports a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the validity of an incoming OAuth 2.0 JWT access token based on information obtained from the OAuth identity provider's metadata endpoint. Configure the policy to check relevant token claims, audience, and expiration time. Learn more about protecting an API using [OAuth 2.0 authorization and Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+* **Authorization** - API Management supports a [validate JWT](validate-jwt-policy.md) policy to check the validity of an incoming OAuth 2.0 JWT access token based on information obtained from the OAuth identity provider's metadata endpoint. Configure the policy to check relevant token claims, audience, and expiration time. Learn more about protecting an API using [OAuth 2.0 authorization and Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
More recommendations:
-* Use [access restriction policies](api-management-access-restriction-policies.md) in API Management to increase security. For example, [call rate limiting](api-management-access-restriction-policies.md#LimitCallRate) slows down bad actors using brute force attacks to compromise credentials.
+* Use [access restriction policies](api-management-access-restriction-policies.md) in API Management to increase security. For example, [call rate limiting](rate-limit-policy.md) slows down bad actors using brute force attacks to compromise credentials.
* APIs should use TLS/SSL (transport security) to protect the credentials or tokens. Credentials and tokens should be sent in request headers and not as query parameters.
More information about this threat: [API3:2019 Excessive Data Exposure](https://
* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](transform-api.md) to rewrite response payloads and mask or filter data. For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body.
-* [Response content validation](validation-policies.md#validate-content) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
+* [Response content validation](validate-content-policy.md) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
-* Use the [validate status code](validation-policies.md#validate-status-code) policy to block responses with errors undefined in the API schema.
+* Use the [validate status code](validate-status-code-policy.md) policy to block responses with errors undefined in the API schema.
-* Use the [validate headers](validation-policies.md#validate-headers) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set header](api-management-transformation-policies.md#SetHTTPheader) policy.
+* Use the [validate headers](validate-headers-policy.md) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set header](set-header-policy.md) policy.
-* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
+* For GraphQL scenarios, use the [validate GraphQL request](validate-graphql-request-policy.md) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
## Lack of resources and rate limiting
More information about this threat: [API4:2019 Lack of resources and rate limiti
### Recommendations
-* Use [rate limit](api-management-access-restriction-policies.md#LimitCallRate) (short-term) and [quota limit](api-management-access-restriction-policies.md#SetUsageQuota) (long-term) policies to control the allowed number of API calls or bandwidth per consumer.
+* Use [rate limit](rate-limit-policy.md) (short-term) and [quota limit](quota-policy.md) (long-term) policies to control the allowed number of API calls or bandwidth per consumer (see the sketch at the end of this list).
-* Define strict request object definitions and their properties in the OpenAPI definition. For example, define the max value for paging integers, maxLength and regular expression (regex) for strings. Enforce those schemas with the [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies in API Management.
+* Define strict request object definitions and their properties in the OpenAPI definition. For example, define the max value for paging integers, maxLength and regular expression (regex) for strings. Enforce those schemas with the [validate content](validate-content-policy.md) and [validate parameters](validate-parameters-policy.md) policies in API Management.
-* Enforce maximum size of the request with the [validate content](validation-policies.md#validate-content) policy.
+* Enforce maximum size of the request with the [validate content](validate-content-policy.md) policy.
* Optimize performance with [built-in caching](api-management-howto-cache.md), thus reducing the consumption of CPU, memory, and networking resources for certain operations.
-* Enforce authentication for API calls (see [Broken user authentication](#broken-user-authentication)). Revoke access for abusive users. For example, deactivate the subscription key, block the IP address with the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy, or reject requests for a certain user claim from a [JWT token](api-management-access-restriction-policies.md#ValidateJWT).
+* Enforce authentication for API calls (see [Broken user authentication](#broken-user-authentication)). Revoke access for abusive users. For example, deactivate the subscription key, block the IP address with the [restrict caller IPs](ip-filter-policy.md) policy, or reject requests for a certain user claim from a [JWT token](validate-jwt-policy.md).
-* Apply a [CORS](api-management-cross-domain-policies.md#CORS) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
+* Apply a [CORS](cors-policy.md) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
* Minimize the time it takes a backend service to respond. The longer the backend service takes to respond, the longer the connection is occupied in API Management, therefore reducing the number of requests that can be served in a given timeframe.
- * Define `timeout` in the [forward request](api-management-advanced-policies.md#ForwardRequest) policy.
+ * Define `timeout` in the [forward request](forward-request-policy.md) policy.
- * Use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
+ * Use the [validate GraphQL request](validate-graphql-request-policy.md) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
- * Limit the number of parallel backend connections with the [limit concurrency](api-management-advanced-policies.md#LimitConcurrency) policy.
+ * Limit the number of parallel backend connections with the [limit concurrency](limit-concurrency-policy.md) policy.
* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](front-door-api-management.md), or [Azure DDoS Protection](protect-with-ddos-protection.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
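The following sketch is illustrative only: it combines the short-term and long-term limits mentioned above at product scope. The call counts, periods, and the blocked address are placeholder values.

```xml
<policies>
    <inbound>
        <base />
        <!-- Short-term limit: 20 calls per 60 seconds per subscription (placeholder values) -->
        <rate-limit calls="20" renewal-period="60" />
        <!-- Long-term quota: 10,000 calls per hour per subscription (placeholder values) -->
        <quota calls="10000" renewal-period="3600" />
        <!-- Block a known abusive address; 203.0.113.10 is a documentation placeholder -->
        <ip-filter action="forbid">
            <address>203.0.113.10</address>
        </ip-filter>
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```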
More information about this threat: [API5:2019 Broken function level authorizati
* By default, protect all API endpoints in API Management with [subscription keys](api-management-subscriptions.md).
-* Define a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy and enforce required token claims. If certain operations require stricter claims enforcement, define extra `validate-jwt` policies for those operations only.
+* Define a [validate JWT](validate-jwt-policy.md) policy and enforce required token claims (see the sketch after this list). If certain operations require stricter claims enforcement, define extra `validate-jwt` policies for those operations only.
* Use an Azure virtual network or Private Link to hide API endpoints from the internet. Learn more about [virtual network options](virtual-network-concepts.md) with API Management.
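As an illustration only, a `validate-jwt` policy that enforces an audience and a role claim might look like the following sketch. The tenant, audience, and claim values are placeholders.

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
    <!-- Placeholder tenant; replace with your identity provider's metadata endpoint -->
    <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
    <audiences>
        <!-- Placeholder audience value -->
        <audience>api://orders-api</audience>
    </audiences>
    <required-claims>
        <!-- Placeholder claim requirement -->
        <claim name="roles" match="any">
            <value>orders.read</value>
        </claim>
    </required-claims>
</validate-jwt>
```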
More information about this threat: [API6:2019 Mass assignment](https://github.c
* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](api-management-versions.md) in API Management.
-* Precisely define XML and JSON contracts in the API schema and use [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
+* Precisely define XML and JSON contracts in the API schema and use [validate content](validate-content-policy.md) and [validate parameters](validate-parameters-policy.md) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
* If the backend interface can't be changed, use [transformation policies](transform-api.md) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
More information about this threat: [API7:2019 Security misconfiguration](https:
* Always inherit parent policies through the `<base>` tag.
- * When using OAuth 2.0, configure and test the [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the existence and validity of the JWT token before it reaches the backend. Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings.
+ * When using OAuth 2.0, configure and test the [validate JWT](validate-jwt-policy.md) policy to check the existence and validity of the JWT token before it reaches the backend. Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings.
- * Configure the [CORS](api-management-cross-domain-policies.md#CORS) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
+ * Configure the [CORS](cors-policy.md) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
* Set [validation policies](validation-policies.md) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
- * If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy. Ensure that it uses an allowlist, not a blocklist.
+ * If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](ip-filter-policy.md) policy. Ensure that it uses an allowlist, not a blocklist.
- * If client certificates are used between caller and API Management, use the [validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
+ * If client certificates are used between caller and API Management, use the [validate client certificate](validate-client-certificate-policy.md) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
* Client certificates (mutual TLS) can also be applied between API Management and the backend. The backend should:
More information about this threat: [API7:2019 Security misconfiguration](https:
* Validate the certificate name where applicable
-* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
+* For GraphQL scenarios, use the [validate GraphQL request](validate-graphql-request-policy.md) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
* Don't store secrets in policy files or in source control. Always use API Management [named values](api-management-howto-properties.md) or fetch the secrets at runtime using custom policy expressions.
More information about this threat: [API8:2019 Injection](https://github.com/OWA
* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](front-door-api-management.md). > [!IMPORTANT]
- > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](api-management-access-restriction-policies.md#RestrictCallerIPs), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
+ > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](ip-filter-policy.md), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
* Use schema and parameter [validation](validation-policies.md) policies, where applicable, to further constrain and validate the request before it reaches the backend API service.
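For example, a minimal sketch (the size limit and variable name are placeholders) that rejects JSON request bodies over 100 KB or bodies that don't validate against the schema defined for the API:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <!-- Validate JSON bodies against the API schema and block invalid requests -->
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```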
More information about this threat: [API10:2019 Insufficient logging and monito
* Set alerts in Azure Monitor and Application Insights - for example, for the [capacity metric](api-management-howto-autoscale.md) or for excessive requests or bandwidth transfer.
-* Use the [emit metrics](api-management-advanced-policies.md#emit-metrics) policy for custom metrics.
+* Use the [emit-metric](emit-metric-policy.md) policy for custom metrics.
* Use the Azure Activity log for tracking activity in the service.
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-api-responses.md
Keep this operation for use in the rest of this article.
1. Select **Save**. > [!TIP]
- > A yellow bar with the text **Mocking is enabled** displays. This indicates that the responses returned from API Management are mocked by the [mocking policy](api-management-advanced-policies.md#mock-response) and aren't produced by the backend.
+ > A yellow bar with the text **Mocking is enabled** displays. This indicates that the responses returned from API Management are mocked by the [mocking policy](mock-response-policy.md) and aren't produced by the backend.
## Test the mocked API
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
+
+ Title: Azure API Management policy reference - mock-response | Microsoft Docs
+description: Reference for the mock-response policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Mock response
+
+The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples nor schemas are found, responses with no content are returned.
+++
+## Policy statement
+
+```xml
+<mock-response status-code="code" content-type="media type"/>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| | -- | -- | - |
+| status-code | Specifies response status code and is used to select corresponding example or schema. | No | 200 |
+| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. | No | None |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+```xml
+<!-- Returns 200 OK status code. Content is based on an example or schema, if provided for this status code. First found content type is used. If no example or schema is found, the content is empty. -->
+<mock-response/>
+
+<!-- Returns 200 OK status code. Content is based on an example or schema, if provided for this status code and media type. If no example or schema is found, the content is empty. -->
+<mock-response status-code='200' content-type='application/json'/>
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
A policy fragment:
* Must be valid XML containing one or more policy configurations * May include [policy expressions](api-management-policy-expressions.md), if a referenced policy supports them
-* Is inserted as-is in a policy definition by using the [include-fragment](api-management-advanced-policies.md#IncludeFragment) policy
+* Is inserted as-is in a policy definition by using the [include-fragment](include-fragment-policy.md) policy
Limitations:
While not required, you may want to [configure](set-edit-policies.md) one or mor
:::image type="content" source="media/policy-fragments/create-fragment.png" alt-text="Screenshot showing the create a new policy fragment form.":::
- For example, the following fragment contains a [`set-header`](api-management-transformation-policies.md#SetHTTPheader) policy configuration to forward context information to a backend service. This fragment would be included in an inbound policy section. The policy expressions in this example access the built-in [`context` variable](api-management-policy-expressions.md#ContextVariables).
+ For example, the following fragment contains a [`set-header`](set-header-policy.md) policy configuration to forward context information to a backend service. This fragment would be included in an inbound policy section. The policy expressions in this example access the built-in [`context` variable](api-management-policy-expressions.md#ContextVariables).
```xml <fragment>
While not required, you may want to [configure](set-edit-policies.md) one or mor
## Include a fragment in a policy definition
-Configure the [`include-fragment`](api-management-advanced-policies.md#IncludeFragment) policy to insert a policy fragment in a policy definition. For more information about policy definitions, see [Set or edit policies](set-edit-policies.md).
+Configure the [`include-fragment`](include-fragment-policy.md) policy to insert a policy fragment in a policy definition. For more information about policy definitions, see [Set or edit policies](set-edit-policies.md).
* You may include a fragment at any scope and in any policy section, as long as the underlying policy or policies in the fragment support that usage. * You may include multiple policy fragments in a policy definition.
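For example, a minimal sketch of a policy definition that inserts a fragment in the inbound section ("ForwardContext" is a hypothetical fragment name):

```xml
<policies>
    <inbound>
        <base />
        <!-- "ForwardContext" is a hypothetical fragment name created beforehand -->
        <include-fragment fragment-id="ForwardContext" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
```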
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
+
+ Title: Azure API Management policy reference - proxy | Microsoft Docs
+description: Reference for the proxy policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set HTTP proxy
+
+The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy, and only Basic and NTLM authentication are supported.
+++
+## Policy statement
+
+```xml
+<proxy url="http://hostname-or-ip:port" username="username" password="password" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| url | Proxy URL in the form of `http://host:port`. | Yes | N/A |
+| username | Username to be used for authentication with the proxy. | No | N/A |
+| password | Password to be used for authentication with the proxy. | No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+In this example, [named values](api-management-howto-properties.md) are used for the username and password to avoid storing sensitive information in the policy document.
+
+```xml
+<proxy url="http://192.168.1.1:8080" username="{{username}}" password="{{password}}" />
+```
++
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
+
+ Title: Azure API Management policy reference - publish-to-dapr | Microsoft Docs
+description: Reference for the publish-to-dapr policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Send message to Pub/Sub topic
+
+The `publish-to-dapr` policy instructs the API Management gateway to send a message to a Dapr Publish/Subscribe topic. The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/publish/{{pubsub-name}}/{{topic}}`, replacing template parameters and adding the content specified in the policy statement.
+
+The policy assumes that the Dapr runtime is running in a sidecar container in the same pod as the gateway. The Dapr runtime implements the Pub/Sub semantics. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
++
+## Policy statement
+
+```xml
+<publish-to-dapr pubsub-name="pubsub-name" topic="topic-name" ignore-error="false|true" response-variable-name="resp-var-name" timeout="in seconds" template="Liquid" content-type="application/json">
+ <!-- message content -->
+</publish-to-dapr>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+|||-||
+| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the `topic` attribute value must be in the form of `pubsub-name/topic-name`. | No | None |
+| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. | Yes | N/A |
+| ignore-error | If set to `true`, instructs the policy not to trigger the ["on-error"](api-management-error-handling-policies.md) section upon receiving an error from the Dapr runtime. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
+| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
+| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) self-hosted
+
+### Usage notes
+
+Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
+
+## Example
+
+The following example demonstrates sending the body of the current request to the "new" [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md#url-parameters) of the "orders" Pub/Sub [component](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md#url-parameters). Response received from the Dapr runtime is stored in the "dapr-response" entry of the Variables collection in the [context](api-management-policy-expressions.md#ContextVariables) object.
+
+If the Dapr runtime can't locate the target topic, for example, and responds with an error, the "on-error" section is triggered. The response received from the Dapr runtime is returned to the caller verbatim. Otherwise, the default `200 OK` response is returned.
+
+The "backend" section is empty and the request is not forwarded to the backend.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <publish-to-dapr
+ pubsub-name="orders"
+ topic="new"
+ response-variable-name="dapr-response">
+ @(context.Request.Body.As<string>())
+ </publish-to-dapr>
+ </inbound>
+ <backend>
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+        <return-response response-variable-name="dapr-response" />
+ </on-error>
+</policies>
+```
+
+## Related policies
+
+* [API Management Dapr integration policies](api-management-dapr-policies.md)
+
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
+
+ Title: Azure API Management policy reference - quota-by-key | Microsoft Docs
+description: Reference for the quota-by-key policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022++
+# Set usage quota by key
+
+The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
+
+To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
++++
+## Policy statement
+
+```xml
+<quota-by-key calls="number"
+ bandwidth="kilobytes"
+ renewal-period="seconds"
+ increment-condition="condition"
+ counter-key="key value"
+ first-period-start="date-time" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | | - | - |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| counter-key | The key to use for the quota policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`). | No | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. | Yes | N/A |
+| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+
+### Usage notes
+
+The `counter-key` attribute value must be unique across all the APIs in the API Management instance if you don't want to share the total across APIs.
+
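+For example, the following sketch (illustrative values only) scopes the counter to the combination of caller IP address and API, so usage counted against one API doesn't consume the quota of another:
+
+```xml
+<quota-by-key calls="5000" renewal-period="3600"
+    counter-key="@(context.Request.IpAddress + context.Api.Id)" />
+```
+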
+## Example
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
+ increment-condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 400)"
+ counter-key="@(context.Request.IpAddress)" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+
+For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
+
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Quota Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-policy.md
+
+ Title: Azure API Management policy reference - quota | Microsoft Docs
+description: Reference for the quota policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 09/27/2022+++
+# Set usage quota by subscription
+
+The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
+
+To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
+++
+## Policy statement
+
+```xml
+<quota calls="number" bandwidth="kilobytes" renewal-period="seconds">
+ <api name="API name" id="API id" calls="number">
+ <operation name="operation name" id="operation id" calls="number" />
+ </api>
+</quota>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | - | - |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+
+## Elements
+
+| Element | Description | Required |
+| | -- | -- |
+| api | Add one or more of these elements to impose call quota on APIs within the product. Product and API call quotas are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
+| operation | Add one or more of these elements to impose call quota on operations within an API. Product, API, and operation call quotas are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
++
+## api attributes
+
+| Attribute | Description | Required | Default |
+| -- | | - | - |
+| name | The name of the API for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the API for which to apply the call quota. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+
+## operation attributes
+
+| Attribute | Description | Required | Default |
+| -- | | - | - |
+| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) product
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* This policy can be used only once per policy definition.
+* [Policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
+* This policy is only applied when an API is accessed using a subscription key.
+++
+## Example
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <quota calls="10000" bandwidth="40000" renewal-period="3600" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+
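+The following additional sketch is illustrative only; "echo-api" is a placeholder API name. It caps calls for that API independently of the product-level quota:
+
+```xml
+<policies>
+    <inbound>
+        <base />
+        <quota calls="10000" bandwidth="40000" renewal-period="3600">
+            <!-- "echo-api" is a placeholder API name -->
+            <api name="echo-api" calls="2000" renewal-period="3600" />
+        </quota>
+    </inbound>
+    <outbound>
+        <base />
+    </outbound>
+</policies>
+```
+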
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
+
+ Title: Azure API Management policy reference - rate-limit-by-key | Microsoft Docs
+description: Reference for the rate-limit-by-key policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022++
+# Limit call rate by key
+
+The `rate-limit-by-key` policy prevents API usage spikes on a per key basis by limiting the call rate to a specified number per a specified time period. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the limit. When this call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
+
+To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
+++
+## Policy statement
+
+```xml
+<rate-limit-by-key calls="number"
+ renewal-period="seconds"
+ increment-condition="condition"
+ increment-count="number"
+ counter-key="key value"
+ retry-after-header-name="custom header name, replaces default 'Retry-After'"
+ retry-after-variable-name="policy expression variable name"
+ remaining-calls-header-name="header name"
+ remaining-calls-variable-name="policy expression variable name"
+ total-calls-header-name="header name"/>
+
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | -- | -- | - |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A |
+| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
+| increment-count | The number by which the counter is increased per request. | No | 1 |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+
+## Example
+
+In the following example, the rate limit of 10 calls per 60 seconds is keyed by the caller IP address. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerIP`.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <rate-limit-by-key calls="10"
+ renewal-period="60"
+ increment-condition="@(context.Response.StatusCode == 200)"
+ counter-key="@(context.Request.IpAddress)"
+ remaining-calls-variable-name="remainingCallsPerIP"/>
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+
+For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
+
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
+
+ Title: Azure API Management policy reference - rate-limit | Microsoft Docs
+description: Reference for the rate-limit policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Limit call rate by subscription
+
+The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
+
+To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
++++
+## Policy statement
+
+```xml
+<rate-limit calls="number" renewal-period="seconds" retry-after-header-name="custom header name, replaces default 'Retry-After'"
+ retry-after-variable-name="policy expression variable name"
+ remaining-calls-header-name="header name"
+ remaining-calls-variable-name="policy expression variable name"
+ total-calls-header-name="header name">
+    <api name="API name" id="API id" calls="number" renewal-period="seconds">
+ <operation name="operation name" id="operation id" calls="number" renewal-period="seconds" />
+ </api>
+</rate-limit>
+```
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | - |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
++
+## Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| api | Add one or more of these elements to impose a call rate limit on APIs within the product. Product and API call rate limits are applied independently. API can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
+| operation | Add one or more of these elements to impose a call rate limit on operations within an API. Product, API, and operation call rate limits are applied independently. Operation can be referenced either via `name` or `id`. If both attributes are provided, `id` will be used and `name` will be ignored. | No |
++
+### api attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | - |
+| name | The name of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
++
+### operation attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | - |
+| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* This policy can be used only once per policy definition.
+* Except where noted, [policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
+* This policy is only applied when an API is accessed using a subscription key.
+
+## Example
+
+In the following example, the per subscription rate limit is 20 calls per 90 seconds. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerSubscription`.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <rate-limit calls="20" renewal-period="90" remaining-calls-variable-name="remainingCallsPerSubscription"/>
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
++
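+A second sketch, illustrative only ("echo-api" and "retrieve-resource" are placeholder names), applies tighter limits to one API and one of its operations, independently of the product-level limit:
+
+```xml
+<policies>
+    <inbound>
+        <base />
+        <rate-limit calls="100" renewal-period="60">
+            <!-- Placeholder API and operation names -->
+            <api name="echo-api" calls="50" renewal-period="60">
+                <operation name="retrieve-resource" calls="10" renewal-period="60" />
+            </api>
+        </rate-limit>
+    </inbound>
+    <outbound>
+        <base />
+    </outbound>
+</policies>
+```
+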
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Redirect Content Urls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/redirect-content-urls-policy.md
+
+ Title: Azure API Management policy reference - redirect-content-urls | Microsoft Docs
+description: Reference for the redirect-content-urls policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Mask URLs in content
+The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use it in the outbound section to rewrite response body links so that they point to the gateway. Use it in the inbound section for the opposite effect.
+
+> [!NOTE]
+> This policy does not change any header values such as `Location` headers. To change header values, use the [set-header](set-header-policy.md) policy.
++
+## Policy statement
+
+```xml
+<redirect-content-urls />
+```
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<redirect-content-urls />
+```
+
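+A slightly fuller sketch, illustrative only (the host names are placeholders), shows the policy in the outbound section together with a `set-header` policy that rewrites a `Location` header, which `redirect-content-urls` leaves unchanged:
+
+```xml
+<policies>
+    <inbound>
+        <base />
+    </inbound>
+    <outbound>
+        <base />
+        <!-- Mask backend URLs in the response body -->
+        <redirect-content-urls />
+        <!-- Illustrative: rewrite a Location header separately; host names are placeholders -->
+        <choose>
+            <when condition="@(context.Response.Headers.ContainsKey("Location"))">
+                <set-header name="Location" exists-action="override">
+                    <value>@(context.Response.Headers.GetValueOrDefault("Location", "").Replace("https://backend.contoso.com", "https://apim.contoso.com"))</value>
+                </set-header>
+            </when>
+        </choose>
+    </outbound>
+</policies>
+```
+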
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
+
+ Title: Azure API Management policy reference - retry | Microsoft Docs
+description: Reference for the retry policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Retry
+
+The `retry` policy executes its child policies once and then retries their execution until the retry `condition` becomes `false` or retry `count` is exhausted.
+++
+## Policy statement
+
+```xml
+<retry
+ condition="Boolean expression or literal"
+ count="number of retry attempts"
+ interval="retry interval in seconds"
+ max-interval="maximum retry interval in seconds"
+ delta="retry interval delta in seconds"
+ first-fast-retry="boolean expression or literal">
+ <!-- One or more child policies. No restrictions. -->
+</retry>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | -- | -- | - |
+| condition | A Boolean literal or [expression](api-management-policy-expressions.md) specifying if retries should be stopped (`false`) or continued (`true`). | Yes | N/A |
+| count | A positive number specifying the maximum number of retries to attempt. | Yes | N/A |
+| interval | A positive number in seconds specifying the wait interval between the retry attempts. | Yes | N/A |
+| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. | No | N/A |
+| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A |
+| first-fast-retry | If set to `true`, the first retry attempt is performed immediately. | No | `false` |
+
+## Retry wait times
+
+* When only the `interval` is specified, **fixed** interval retries are performed.
+* When only the `interval` and `delta` are specified, a **linear** interval retry algorithm is used. The wait time between retries increases according to the following formula: `interval + (count - 1)*delta`.
+* When the `interval`, `max-interval` and `delta` are specified, an **exponential** interval retry algorithm is applied. The wait time between the retries increases exponentially according to the following formula: `interval + (2^count - 1) * random(delta * 0.8, delta * 1.2)`, up to a maximum interval set by `max-interval`.
+
+ For example, when `interval` and `delta` are both set to 10 seconds, and `max-interval` is 100 seconds, the approximate wait time between retries increases as follows: 10 seconds, 20 seconds, 40 seconds, 80 seconds, with 100 seconds wait time used for remaining retries.
+
+## Elements
+
+The `retry` policy may contain any other policies as its child elements.
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Request forwarding with exponential retry
+
+In the following example, request forwarding is retried up to ten times using an exponential retry algorithm. Since `first-fast-retry` is set to `false`, all retry attempts are subject to exponentially increasing retry wait times (in this example, approximately 10 seconds, 20 seconds, 40 seconds, ...), up to a maximum wait of `max-interval`.
+
+```xml
+<retry
+ condition="@(context.Response.StatusCode == 500)"
+ count="10"
+ interval="10"
+ max-interval="100"
+ delta="10"
+ first-fast-retry="false">
+ <forward-request buffer-request-body="true" />
+</retry>
+```
+
+### Send request upon initial request failure
+
+In the following example, sending a request to a URL other than the defined backend is retried up to three times if the connection is dropped/timed out, or the request results in a server-side error. Since `first-fast-retry` is set to true, the first retry is executed immediately upon the initial request failure. Note that `send-request` must set `ignore-error` to true in order for `response-variable-name` to be null in the event of an error.
+
+```xml
+
+<retry
+ condition="@(context.Variables["response"] == null || ((IResponse)context.Variables["response"]).StatusCode >= 500)"
+ count="3"
+ interval="1"
+ first-fast-retry="true">
+ <send-request
+ mode="new"
+ response-variable-name="response"
+ timeout="3"
+ ignore-error="true">
+ <set-url>https://api.contoso.com/products/5</set-url>
+ <set-method>GET</set-method>
+ </send-request>
+</retry>
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
+
+ Title: Azure API Management policy reference - return-response | Microsoft Docs
+description: Reference for the return-response policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Return response
+
+The `return-response` policy cancels pipeline execution and returns either a default or custom response to the caller. The default response is `200 OK` with no body. A custom response can be specified via a context variable or policy statements. When both are provided, the response contained within the context variable is modified by the policy statements before being returned to the caller.
++
+## Policy statement
+
+```xml
+<return-response response-variable-name="existing context variable">
+ <set-status>...</set-status>
+ <set-header>...</set-header>
+ <set-body>...</set-body>
+</return-response>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | | | - |
+| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](send-request-policy.md) policy and containing a `Response` object. | No | N/A |
+
+## Elements
+
+| Element | Description | Required |
+| | -- | -- |
+| set-status | A [set-status](set-status-policy.md) policy statement. | No |
+| set-header | A [set-header](set-header-policy.md) policy statement. | No |
+| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<return-response>
+ <set-status code="401" reason="Unauthorized"/>
+ <set-header name="WWW-Authenticate" exists-action="override">
+ <value>Bearer error="invalid_token"</value>
+ </set-header>
+</return-response>
+```
+
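+Another sketch, illustrative only ("token-response" and the URL are placeholders), returns a response captured by an upstream `send-request` policy via `response-variable-name`:
+
+```xml
+<!-- Capture a response from another service; the URL and variable name are placeholders -->
+<send-request mode="new" response-variable-name="token-response" timeout="20" ignore-error="true">
+    <set-url>https://auth.contoso.com/introspect</set-url>
+    <set-method>POST</set-method>
+</send-request>
+<return-response response-variable-name="token-response">
+    <!-- Optionally adjust the captured response before returning it -->
+    <set-header name="X-Handled-By" exists-action="override">
+        <value>api-management</value>
+    </set-header>
+</return-response>
+```
+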
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
+
+ Title: Azure API Management policy reference - rewrite-uri | Microsoft Docs
+description: Reference for the rewrite-uri policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Rewrite URL
+
+The `rewrite-uri` policy converts a request URL from its public form to the form expected by the web service, as shown in the following example.
+
+- Public URL - `http://api.example.com/storenumber/ordernumber`
+
+- Request URL - `http://api.example.com/v2/US/hardware/storenumber&ordernumber?City&State`
+
+This policy can be used when a human- or browser-friendly URL should be transformed into the URL format expected by the web service. It only needs to be applied when exposing an alternative URL format, such as clean, RESTful, user-friendly, or SEO-friendly URLs that are purely structural, contain no query string, and carry only the resource path (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
++
+## Policy statement
+
+```xml
+<rewrite-uri template="uri template" copy-unmatched-params="true | false" />
+```
++
+## Attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|template|The actual web service URL with any query string parameters. When using expressions, the whole value must be an expression.|Yes|N/A|
+|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the rewrite template.|No|`true`|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+You can only add query string parameters using the policy. You cannot add extra template path parameters in the rewrite URL.
+
+## Example
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <rewrite-uri template="/v2/US/hardware/{storenumber}&{ordernumber}?City=city&State=state" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+```xml
+<!-- Assuming incoming request is /get?a=b&c=d and operation template is set to /get?a={b} -->
+<policies>
+ <inbound>
+ <base />
+ <rewrite-uri template="/put" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+<!-- Resulting URL will be /put?c=d -->
+```
+```xml
+<!-- Assuming incoming request is /get?a=b&c=d and operation template is set to /get?a={b} -->
+<policies>
+ <inbound>
+ <base />
+ <rewrite-uri template="/put" copy-unmatched-params="false" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+<!-- Resulting URL will be /put -->
+```
+
+## Related policies
+
+- [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
In this article, you'll:
Operation |Description |Further configuration for operation | ||||
- |`GET /` | Enables policy configuration at service root. | Configure the following inbound [rewrite-uri](api-management-transformation-policies.md#RewriteURL) policy to append a trailing slash to requests that are forwarded to service root:<br/><br> `<rewrite-uri template="/" copy-unmatched-params="true" />` <br/><br/>This policy removes potential ambiguity of requests with or without trailing slashes, which are treated differently by some backends.|
+ |`GET /` | Enables policy configuration at service root. | Configure the following inbound [rewrite-uri](rewrite-uri-policy.md) policy to append a trailing slash to requests that are forwarded to service root:<br/><br> `<rewrite-uri template="/" copy-unmatched-params="true" />` <br/><br/>This policy removes potential ambiguity of requests with or without trailing slashes, which are treated differently by some backends.|
:::image type="content" source="media/sap-api/get-root-operation.png" alt-text="Get operation for service root":::
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
Title: Reference - Self-hosted gateway settings - Azure API Management
+ Title: Reference - Self-hosted gateway container settings - Azure API Management
description: Reference for the required and optional settings to configure the Azure API Management self-hosted gateway.
Last updated 06/28/2022
-# Reference: Self-hosted gateway configuration settings
+# Reference: Self-hosted gateway container configuration settings
-This article provides a reference for required and optional settings that are used to configure the API Management [self-hosted gateway](self-hosted-gateway-overview.md).
+This article provides a reference for required and optional settings that are used to configure the API Management [self-hosted gateway container](self-hosted-gateway-overview.md).
> [!IMPORTANT] > This reference applies only to the self-hosted gateway v2.
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
+
+ Title: Azure API Management policy reference - send-one-way-request | Microsoft Docs
+description: Reference for the send-one-way-request policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Send one way request
+
+The `send-one-way-request` policy sends the provided request to the specified URL without waiting for a response.
+++
+## Policy statement
+
+```xml
+<send-one-way-request mode="new | copy" timeout="time in seconds">
+ <set-url>request URL</set-url>
+ <set-method>...</set-method>
+ <set-header>...</set-header>
+ <set-body>...</set-body>
+ <authentication-certificate thumbprint="thumbprint" />
+</send-one-way-request>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | -- | -- | -- |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
+| timeout| The timeout interval in seconds before the call to the URL fails. | No | 60 |
++
+## Elements
+
+| Element | Description | Required |
+| -- | -- | - |
+| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
+| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
+| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
+| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+This example uses the `send-one-way-request` policy to send a message to a Slack chat room if the HTTP response code is greater than or equal to 500. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
+
+```xml
+<choose>
+ <when condition="@(context.Response.StatusCode >= 500)">
+ <send-one-way-request mode="new" timeout="20">
+ <set-url>https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX</set-url>
+ <set-method>POST</set-method>
+ <set-body>@{
+ return new JObject(
+ new JProperty("username","APIM Alert"),
+ new JProperty("icon_emoji", ":ghost:"),
+ new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
+ context.Request.Method,
+ context.Request.Url.Path + context.Request.Url.QueryString,
+ context.Request.Url.Host,
+ context.Response.StatusCode,
+ context.Response.StatusReason,
+ context.User.Email
+ ))
+ ).ToString();
+ }</set-body>
+ </send-one-way-request>
+ </when>
+</choose>
+
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
+
+ Title: Azure API Management policy reference - send-request | Microsoft Docs
+description: Reference for the send-request policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Send request
+
+The `send-request` policy sends the provided request to the specified URL, waiting no longer than the set timeout value.
+++
+## Policy statement
+
+```xml
+<send-request mode="new | copy" response-variable-name="" timeout="60 sec" ignore-error="false | true">
+ <set-url>request URL</set-url>
+ <set-method>...</set-method>
+ <set-header>...</set-header>
+ <set-body>...</set-body>
+ <authentication-certificate thumbprint="thumbprint" />
+ <proxy>...</proxy>
+</send-request>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | -- | -- | -- |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
+| response-variable-name | The name of the context variable that will receive the response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via the [`context.Variables`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A |
+| timeout | The timeout interval in seconds before the call to the URL fails. | No | 60 |
+| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | `false` |
+
+## Elements
+
+| Element | Description | Required |
+| -- | -- | - |
+| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
+| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
+| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
+| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
+| proxy | A [proxy](proxy-policy.md) policy statement. Used to route the request via an HTTP proxy. | No |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+This example shows one way to verify a reference token with an authorization server. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
+
+```xml
+<inbound>
+ <!-- Extract token from Authorization header parameter -->
+ <set-variable name="token" value="@(context.Request.Headers.GetValueOrDefault("Authorization","scheme param").Split(' ').Last())" />
+
+ <!-- Send request to Token Server to validate token (see RFC 7662) -->
+ <send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
+ <set-url>https://microsoft-apiappec990ad4c76641c6aea22f566efc5a4e.azurewebsites.net/introspection</set-url>
+ <set-method>POST</set-method>
+ <set-header name="Authorization" exists-action="override">
+ <value>basic dXNlcm5hbWU6cGFzc3dvcmQ=</value>
+ </set-header>
+ <set-header name="Content-Type" exists-action="override">
+ <value>application/x-www-form-urlencoded</value>
+ </set-header>
+ <set-body>@($"token={(string)context.Variables["token"]}")</set-body>
+ </send-request>
+
+ <choose>
+ <!-- Check active property in response -->
+ <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
+ <!-- Return 401 Unauthorized with http-problem payload -->
+ <return-response>
+ <set-status code="401" reason="Unauthorized" />
+ <set-header name="WWW-Authenticate" exists-action="override">
+ <value>Bearer error="invalid_token"</value>
+ </set-header>
+ </return-response>
+ </when>
+ </choose>
+ <base />
+</inbound>
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
+
+ Title: Azure API Management policy reference - set-backend-service (Dapr) | Microsoft Docs
+description: Reference for the set-backend-service policy available for use in Dapr integration with Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Send request to a service
+
+The `set-backend-service` policy sets the target URL for the current request to `http://localhost:3500/v1.0/invoke/{app-id}[.{ns-name}]/method/{method-name}`, replacing template parameters with values specified in the policy statement.
+
+The policy assumes that Dapr runs in a sidecar container in the same pod as the gateway. Upon receiving the request, Dapr runtime performs service discovery and actual invocation, including possible protocol translation between HTTP and gRPC, retries, distributed tracing, and error handling. Learn more about [Dapr integration with API Management](api-management-dapr-policies.md).
++
+## Policy statement
+
+```xml
+<set-backend-service backend-id="dapr" dapr-app-id="app-id" dapr-method="method-name" dapr-namespace="ns-name" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+|||-||
+| backend-id | Must be set to "dapr". | Yes | N/A |
+| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
+| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
+| dapr-namespace | Name of the namespace in which the target microservice resides. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) self-hosted
+
+### Usage notes
+
+Dapr support must be [enabled](api-management-dapr-policies.md#enable-dapr-support-in-the-self-hosted-gateway) in the self-hosted gateway.
+
+## Example
+
+The following example demonstrates invoking the method named "back" on the microservice called "echo". The `set-backend-service` policy sets the destination URL to `http://localhost:3500/v1.0/invoke/echo.echo-app/method/back`. The [`forward-request`](forward-request-policy.md) policy dispatches the request to the Dapr runtime, which delivers it to the microservice.
+
+The `forward-request` policy is shown here for clarity. The policy is typically "inherited" from the global scope via the `base` keyword.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <set-backend-service backend-id="dapr" dapr-app-id="echo" dapr-method="back" dapr-namespace="echo-app" />
+ </inbound>
+ <backend>
+ <forward-request />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+</policies>
+```
+
+## Related policies
+
+* [API Management Dapr integration policies](api-management-dapr-policies.md)
+
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
+
+ Title: Azure API Management policy reference - set-backend-service | Microsoft Docs
+description: Reference for the set-backend-service policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Set backend service
+Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to the one specified in the policy.
+
+> [!NOTE]
+> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).
++
+## Policy statement
+
+```xml
+<set-backend-service base-url="base URL of the backend service" backend-id="name of the backend entity specifying base URL of the backend service" sf-resolve-condition="condition" sf-service-instance-name="Service Fabric service name" sf-listener-name="Service Fabric listener name" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|base-url|New backend service base URL.|One of `base-url` or `backend-id` must be present.|N/A|
+|backend-id|Identifier (name) of the backend entity to route requests to. For a Service Fabric backend, requests are routed to the primary or secondary replica of a partition. |One of `base-url` or `backend-id` must be present.|N/A|
+|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying whether the call to the Service Fabric backend has to be repeated with a new resolution.|No|N/A|
+|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. |No|N/A|
+|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using `backend-id`. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+Currently, if you define a base `set-backend-service` policy using the `backend-id` attribute and inherit the base policy using `<base />` within the scope, then it can only be overridden with a policy using the `backend-id` attribute, not the `base-url` attribute.
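+
+For example, a minimal sketch of this behavior, assuming hypothetical backend entities named `main-backend` and `canary-backend` exist:
+
+```xml
+<!-- Global scope -->
+<inbound>
+    <set-backend-service backend-id="main-backend" />
+</inbound>
+
+<!-- API scope: inherits the global policy with base, so the override must also use backend-id -->
+<inbound>
+    <base />
+    <set-backend-service backend-id="canary-backend" />
+</inbound>
+```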
+
+## Examples
+
+### Route request based on value in query string
+
+In this example, the `set-backend-service` policy routes requests to a different backend service than the one specified in the API, based on the version value passed in the query string.
++
+```xml
+<policies>
+ <inbound>
+ <choose>
+ <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2013-05")">
+ <set-backend-service base-url="http://contoso.com/api/8.2/" />
+ </when>
+ <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2014-03")">
+ <set-backend-service base-url="http://contoso.com/api/9.1/" />
+ </when>
+ </choose>
+ <base />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+
+Initially the backend service base URL is derived from the API settings. So the request URL `https://contoso.azure-api.net/api/partners/15?version=2013-05&subscription-key=abcdef` becomes `http://contoso.com/api/10.4/partners/15?version=2013-05&subscription-key=abcdef` where `http://contoso.com/api/10.4/` is the backend service URL specified in the API settings.
+
+When the [<choose\>](choose-policy.md) policy statement is applied, the backend service base URL may change again, either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-05"`, the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-05&subscription-key=abcdef`.
+
+If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md) can be used. For example, to remove the version query parameter now that the request is being routed to a version-specific backend, the [Set query string parameter](set-query-parameter-policy.md) policy can be used to remove the now redundant version parameter, as sketched below.
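+
+A minimal sketch of that cleanup step, assuming the routing policy above has run in the inbound section:
+
+```xml
+<!-- Remove the now redundant version query parameter before forwarding the request -->
+<set-query-parameter name="version" exists-action="delete" />
+```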
+
+### Route requests to a Service Fabric backend
+
+In this example, the policy routes the request to a Service Fabric backend, using the `userId` query string parameter as the partition key and using the primary replica of the partition.
+
+```xml
+<policies>
+ <inbound>
+        <set-backend-service backend-id="my-sf-service" sf-partition-key="@(context.Request.Url.Query.GetValueOrDefault("userId",""))" sf-replica-type="primary" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+++
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
+
+ Title: Azure API Management policy reference - set-body | Microsoft Docs
+description: Reference for the set-body policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Set body
+
+Use the `set-body` policy to set the message body for incoming and outgoing requests. To access the message body, use the `context.Request.Body` or `context.Response.Body` property, depending on whether the policy is in the inbound or outbound section.
+
+> [!IMPORTANT]
+> By default when you access the message body using `context.Request.Body` or `context.Response.Body`, the original message body is lost and must be set by returning the body back in the expression. To preserve the body content, set the `preserveContent` parameter to `true` when accessing the message. If `preserveContent` is set to `true` and a different body is returned by the expression, the returned body is used.
+>
++
+## Policy statement
+
+```xml
+<set-body template="liquid" xsi-nil="blank | null">
+ new body value as text
+</set-body>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|template|Used to change the templating mode that the `set-body` policy will run in. Currently the only supported value is:<br /><br />- `liquid` - the `set-body` policy will use the liquid templating engine |No| N/A|
+|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values:<br /><br />- `blank` - `nil` is represented with an empty string.<br />- `null` - `nil` is represented with a null value.|No | `blank` |
+
+For accessing information about the request and response, the Liquid template can bind to a context object with the following properties: <br />
+<pre>context.
+ Request.
+ Url
+ Method
+ OriginalMethod
+ OriginalUrl
+ IpAddress
+ MatchedParameters
+ HasBody
+ ClientCertificates
+ Headers
+
+ Response.
+ StatusCode
+ Method
+ Headers
+Url.
+ Scheme
+ Host
+ Port
+ Path
+ Query
+ QueryString
+ ToUri
+ ToString
+
+OriginalUrl.
+ Scheme
+ Host
+ Port
+ Path
+ Query
+ QueryString
+ ToUri
+ ToString
+</pre>
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+ - If you are using the `set-body` policy to return a new or updated body, you don't need to set `preserveContent` to `true` because you are explicitly supplying the new body contents.
+ - Preserving the content of a response in the inbound pipeline doesn't make sense because there is no response yet.
+ - Preserving the content of a request in the outbound pipeline doesn't make sense because the request has already been sent to the backend at this point.
+ - If this policy is used when there is no message body, for example in an inbound `GET`, an exception is thrown.
+
+For more information, see the `context.Request.Body`, `context.Response.Body`, and the `IMessageBody` sections in the [Context variable](api-management-policy-expressions.md#ContextVariables) table.
+
+## Using Liquid templates with set-body
+The `set-body` policy can be configured to use the [Liquid](https://shopify.github.io/liquid/basics/introduction/) templating language to transform the body of a request or response. This can be effective if you need to completely reshape the format of your message.
+
+> [!IMPORTANT]
+> The implementation of Liquid used in the `set-body` policy is configured in 'C# mode'. This is particularly important when doing things such as filtering. As an example, using a date filter requires the use of Pascal casing and C# date formatting e.g.:
+>
+> {{body.foo.startDateTime| Date:"yyyyMMddTHH:mm:ssZ"}}
+
+> [!IMPORTANT]
+> In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set `Content-Type` to either `application/xml`, `text/xml` (or any type ending with `+xml`); for a JSON body, it must be `application/json`, `text/json` (or any type ending with `+json`).
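+
+For example, a minimal sketch that sets the `Content-Type` header so a Liquid template can bind to an XML body (the template content is a placeholder):
+
+```xml
+<set-header name="Content-Type" exists-action="override">
+    <value>application/xml</value>
+</set-header>
+<set-body template="liquid">
+    ...
+</set-body>
+```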
+
+### Supported Liquid filters
+
+The following Liquid filters are supported in the `set-body` policy. For filter examples, see the [Liquid documentation](https://shopify.github.io/liquid/).
+
+> [!NOTE]
+> The policy requires Pascal casing for Liquid filter names (for example, "AtLeast" instead of "at_least").
+>
+* Abs
+* Append
+* AtLeast
+* AtMost
+* Capitalize
+* Compact
+* Currency
+* Date
+* Default
+* DividedBy
+* Downcase
+* Escape
+* First
+* H
+* Join
+* Last
+* Lstrip
+* Map
+* Minus
+* Modulo
+* NewlineToBr
+* Plus
+* Prepend
+* Remove
+* RemoveFirst
+* Replace
+* ReplaceFirst
+* Round
+* Rstrip
+* Size
+* Slice
+* Sort
+* Split
+* Strip
+* StripHtml
+* StripNewlines
+* Times
+* Truncate
+* TruncateWords
+* Uniq
+* Upcase
+* UrlDecode
+* UrlEncode
++
+## Examples
+
+### Literal text
+
+```xml
+<set-body>Hello world!</set-body>
+```
+
+### Accessing the body as a string
+
+We are preserving the original request body so that we can access it later in the pipeline.
+
+```xml
+<set-body>
+@{
+    // Read the body as a string while preserving it for later policies
+    string inBody = context.Request.Body.As<string>(preserveContent: true);
+    // Strings are immutable, so build a new string to replace the first character
+    if (inBody[0] == 'c') {
+        inBody = "m" + inBody.Substring(1);
+    }
+    return inBody;
+}
+</set-body>
+```
+
+### Accessing the body as a JObject
+
+Since we are not preserving the original request body, accessing it later in the pipeline will result in an exception.
+
+```xml
+<set-body>
+@{
+    // Read the body as a JObject (the original body is not preserved)
+    JObject inBody = context.Request.Body.As<JObject>();
+    // Modify the body if a property matches an expected value ("<tag>" is a placeholder)
+    if ((string)inBody["attribute"] == "<tag>") {
+        inBody["attribute"] = "m";
+    }
+    return inBody.ToString();
+}
+</set-body>
+
+```
+
+### Filter response based on product
+
+This example shows how to perform content filtering by removing data elements from the response received from a backend service when using the `Starter` product. The example backend response includes root-level properties similar to the [OpenWeather One Call API](https://openweathermap.org/api/one-call-api).
+
+```xml
+<!-- Copy this snippet into the outbound section to remove a number of data elements from the response received from the backend service based on the name of the product -->
+<choose>
+ <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))">
+ <set-body>@{
+ var response = context.Response.Body.As<JObject>();
+ foreach (var key in new [] {"current", "minutely", "hourly", "daily", "alerts"}) {
+ response.Property (key).Remove ();
+ }
+ return response.ToString();
+ }
+ </set-body>
+ </when>
+</choose>
+```
+
+### Convert JSON to SOAP using a Liquid template
+```xml
+<set-body template="liquid">
+ <soap:Envelope xmlns="http://tempuri.org/" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
+ <soap:Body>
+ <GetOpenOrders>
+ <cust>{{body.getOpenOrders.cust}}</cust>
+ </GetOpenOrders>
+ </soap:Body>
+ </soap:Envelope>
+</set-body>
+```
+
+### Transform JSON using a Liquid template
+```xml
+<set-body template="liquid">
+{
+    "order": {
+        "id": "{{body.customer.purchase.identifier}}",
+        "summary": "{{body.customer.purchase.orderShortDesc}}"
+    }
+}
+</set-body>
+```
+
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
API Management gives you flexibility to configure policy definitions at multiple
> Not all policies can be applied at each scope or policy section. If the policy that you want to add isn't enabled, ensure that you are in a supported policy section and scope for that policy. To review the policy sections and scopes for a policy, check the **Usage** section in the [Policy reference](api-management-policies.md) topics. > [!NOTE]
-> The **Backend** policy section can only contain one policy element. By default, API Management configures the [`forward-request`](api-management-advanced-policies.md#ForwardRequest) policy in the **Backend** section at the global scope, and the `base` element at other scopes.
+> The **Backend** policy section can only contain one policy element. By default, API Management configures the [`forward-request`](forward-request-policy.md) policy in the **Backend** section at the global scope, and the `base` element at other scopes.
### Global scope
Operation scope is configured for a selected API operation.
You can create reusable [policy fragments](policy-fragments.md) in your API Management instance. Policy fragments are XML elements containing your configurations of one or more policies. Policy fragments help you configure policies consistently and maintain policy definitions without needing to repeat or retype XML code.
-Use the [`include-fragment`](api-management-advanced-policies.md#IncludeFragment) policy to insert a policy fragment in a policy definition.
+Use the [`include-fragment`](include-fragment-policy.md) policy to insert a policy fragment in a policy definition.
## Use `base` element to set policy evaluation order
api-management Set Graphql Resolver Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md
+
+ Title: Azure API Management policy reference - set-graphql-resolver | Microsoft Docs
+description: Reference for the set-graphql-resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/07/2022+++
+# Set GraphQL resolver
+
+The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API).
++++
+## Policy statement
+
+```xml
+<set-graphql-resolver parent-type="type" field="field">
+ <http-data-source>
+ <http-request>
+ <set-method>...set-method policy configuration...</set-method>
+ <set-url>URL</set-url>
+ <set-header>...set-header policy configuration...</set-header>
+ <set-body>...set-body policy configuration...</set-body>
+ <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate>
+ </http-request>
+ <http-response>
+ <json-to-xml>...json-to-xml policy configuration...</json-to-xml>
+ <xml-to-json>...xml-to-json policy configuration...</xml-to-json>
+ <find-and-replace>...find-and-replace policy configuration...</find-and-replace>
+ </http-response>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| parent-type| An object type in the GraphQL schema. | Yes | N/A |
+| field| A field of the specified `parent-type` in the GraphQL schema. | Yes | N/A |
+
+> [!NOTE]
+> Currently, the values of `parent-type` and `field` aren't validated by this policy. If they aren't valid, the policy is ignored, and the GraphQL query is forwarded to a GraphQL endpoint (if one is configured).
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| http-data-source | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
+| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes |
+| set-method| Method of the resolver's HTTP request, configured using the [set-method](set-method-policy.md) policy. | Yes |
+| set-url | URL of the resolver's HTTP request. | Yes |
+| set-header | Header set in the resolver's HTTP request, configured using the [set-header](set-header-policy.md) policy. | No |
+| set-body | Body set in the resolver's HTTP request, configured using the [set-body](set-body-policy.md) policy. | No |
+| authentication-certificate | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](authentication-certificate-policy.md) policy. | No |
+| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No |
+| json-to-xml | Transforms the resolver's HTTP response using the [json-to-xml](json-to-xml-policy.md) policy. | No |
+| xml-to-json | Transforms the resolver's HTTP response using the [xml-to-json](xml-to-json-policy.md) policy. | No |
+| find-and-replace | Transforms the resolver's HTTP response using the [find-and-replace](find-and-replace-policy.md) policy. | No |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated
+
+### Usage notes
+
+* This policy is invoked only when a matching GraphQL query is executed.
+* The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition.
++
+## GraphQL context
+
+* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request:
+ * `context.ParentResult` is set to the parent object for the current resolver execution.
+ * The HTTP request context contains arguments that are passed in the GraphQL query as its body.
+ * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request.
+
+The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with `<set-graphql-resolver>` policies.
+
+### ParentResult
+
+The `context.ParentResult` is set to the parent object for the current resolver execution. Consider the following partial schema:
+
+``` graphql
+type Comment {
+    id: ID!
+    owner: String!
+    content: String!
+}
+
+type Blog {
+    id: ID!
+    title: String!
+    content: String!
+    comments: [Comment]!
+    comment(id: ID!): Comment
+}
+
+type Query {
+    getBlogs: [Blog]!
+    getBlog(id: ID!): Blog
+}
+```
+
+Also, consider a GraphQL query for all the information for a specific blog:
+
+``` graphql
+query {
+ getBlog(id: 1) {
+ title
+ content
+ comments {
+ id
+ owner
+ content
+ }
+ }
+}
+```
+
+If you set a resolver for `parent-type="Blog" field="comments"`, you will want to understand which blog ID to use. You can get the ID of the blog using `context.ParentResult.AsJObject()["id"].ToString()`. The policy for configuring this resolver would resemble:
+
+``` xml
+<set-graphql-resolver parent-type="Blog" field="comments">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>@{
+ var blogId = context.ParentResult.AsJObject()["id"].ToString();
+ return $"https://data.contoso.com/api/blog/{blogId}";
+ }</set-url>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Arguments
+
+The arguments for a parameterized GraphQL query are added to the body of the request. For example, consider the following two queries:
+
+``` graphql
+query($id: Int) {
+ getComment(id: $id) {
+ content
+ }
+}
+
+query {
+ getComment(id: 2) {
+ content
+ }
+}
+```
+
+These queries are two ways of calling the `getComment` resolver. GraphQL sends the following JSON payload:
+
+``` json
+{
+ "query": "query($id: Int) { getComment(id: $id) { content } }",
+ "variables": { "id": 2 }
+}
+
+{
+ "query": "query { getComment(id: 2) { content } }"
+}
+```
+
+When the resolver is executed, the `arguments` property is added to the body. You can define the resolver as follows:
+
+``` xml
+<set-graphql-resolver parent-type="Blog" field="comments">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>@{
+ var commentId = context.Request.Body.As<JObject>(true)["arguments"]["id"];
+ return $"https://data.contoso.com/api/comment/{commentId}";
+ }</set-url>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+## More examples
+
+### Resolver for GraphQL query
+
+The following example resolves a query by making an HTTP `GET` call to a backend data source.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Query" field="users">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/get/users</set-url>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Resolver for a GraphQL query that returns a list, using a Liquid template
+
+The following example uses a liquid template, supported for use in the [set-body](set-body-policy.md) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Query" field="users">
+ <http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/users</set-url>
+ </http-request>
+ <http-response>
+ <set-body template="liquid">
+ [
+ {% JSONArrayFor elem in body %}
+ {
+ "name": "{{elem.username}}"
+ }
+ {% endJSONArrayFor %}
+ ]
+ </set-body>
+ </http-response>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+### Resolver for GraphQL mutation
+
+The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body. The body that is sent will look like the following JSON:
+
+``` json
+{
+ "name": "the-provided-name"
+}
+```
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type Mutation {
+ makeUser(name: String!): User
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<set-graphql-resolver parent-type="Mutation" field="makeUser">
+ <http-data-source>
+ <http-request>
+ <set-method>POST</set-method>
+            <set-url>https://data.contoso.com/user/create</set-url>
+ <set-header name="Content-Type" exists-action="override">
+ <value>application/json</value>
+ </set-header>
+ <set-body>@{
+ var args = context.Request.Body.As<JObject>(true)["arguments"];
+ JObject jsonObject = new JObject();
+                jsonObject.Add("name", args["name"]);
+ return jsonObject.ToString();
+ }</set-body>
+ </http-request>
+ </http-data-source>
+</set-graphql-resolver>
+```
+
+## Related policies
+
+* [API Management policies for GraphQL APIs](graphql-policies.md)
+
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
+
+ Title: Azure API Management policy reference - set-header | Microsoft Docs
+description: Reference for the set-header policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set header
+
+The `set-header` policy assigns a value to an existing HTTP response and/or request header or adds a new response and/or request header.
+
+ Use the policy to insert a list of HTTP headers into an HTTP message. When placed in an inbound pipeline, this policy sets the HTTP headers for the request being passed to the target service. When placed in an outbound pipeline, this policy sets the HTTP headers for the response being sent to the gateway's client.
++
+## Policy statement
+
+```xml
+<set-header name="header name" exists-action="override | skip | append | delete">
+ <value>value</value> <!--for multiple headers with the same name add additional value elements-->
+</set-header>
+```
+
+## Attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|exists-action|Specifies action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing header.<br />- `skip` - does not replace the existing header value.<br />- `append` - appends the value to the existing header value.<br />- `delete` - removes the header from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
+|name|Specifies name of the header to be set.|Yes|N/A|
++
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+|value|Specifies the value of the header to be set. For multiple headers with the same name, add additional `value` elements.|No|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+ Multiple values of a header are concatenated to a CSV string, for example:
+
+`headerName: value1,value2,value3`
+
+Exceptions include standardized headers whose values:
+- may contain commas (`User-Agent`, `WWW-Authenticate`, `Proxy-Authenticate`)
+- may contain a date (`Cookie`, `Set-Cookie`, `Warning`)
+- contain a date (`Date`, `Expires`, `If-Modified-Since`, `If-Unmodified-Since`, `Last-Modified`, `Retry-After`)
+
+For those exceptions, multiple header values are not concatenated into one string and are passed as separate headers, for example:
+
+```
+User-Agent: value1
+User-Agent: value2
+User-Agent: value3
+```
+
+## Examples
+
+### Add header, override existing
+
+```xml
+<set-header name="some header name" exists-action="override">
+ <value>20</value>
+</set-header>
+```
+### Remove header
+
+```xml
+ <set-header name="some header name" exists-action="delete" />
+```
+
+### Forward context information to the backend service
+
+This example shows how to apply policy at the API level to supply context information to the backend service.
+
+```xml
+<!-- Copy this snippet into the inbound element to forward some context information, user id and the region the gateway is hosted in, to the backend service for logging or evaluation -->
+<set-header name="x-request-context-data" exists-action="override">
+ <value>@(context.User.Id)</value>
+ <value>@(context.Deployment.Region)</value>
+</set-header>
+```
+
+ For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
+
+## Related policies
+
+- [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
+
+ Title: Azure API Management policy reference - set-method | Microsoft Docs
+description: Reference for the set-method policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set request method
+
+The `set-method` policy allows you to change the HTTP request method for a request.
+++
+## Policy statement
+
+```xml
+<set-method>HTTP method</set-method>
+```
+
+The value of the element specifies the HTTP method, such as `POST`, `GET`, and so on.
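+
+For example, a minimal sketch that changes the method of the incoming request to `POST` in the inbound section:
+
+```xml
+<inbound>
+    <base />
+    <set-method>POST</set-method>
+</inbound>
+```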
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+This example uses the `set-method` policy within a `send-one-way-request` policy to send a message to a Slack chat room if the HTTP response code is greater than or equal to 500. For more information on this sample, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
+
+```xml
+<choose>
+ <when condition="@(context.Response.StatusCode >= 500)">
+ <send-one-way-request mode="new">
+ <set-url>https://hooks.slack.com/services/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</set-url>
+ <set-method>POST</set-method>
+ <set-body>@{
+ return new JObject(
+ new JProperty("username","APIM Alert"),
+ new JProperty("icon_emoji", ":ghost:"),
+ new JProperty("text", String.Format("{0} {1}\nHost: {2}\n{3} {4}\n User: {5}",
+ context.Request.Method,
+ context.Request.Url.Path + context.Request.Url.QueryString,
+ context.Request.Url.Host,
+ context.Response.StatusCode,
+ context.Response.StatusReason,
+ context.User.Email
+ ))
+ ).ToString();
+ }</set-body>
+ </send-one-way-request>
+ </when>
+</choose>
+
+```
+++
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
+
+ Title: Azure API Management policy reference - set-query-parameter | Microsoft Docs
+description: Reference for the set-query-parameter policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set query string parameter
+
+The `set-query-parameter` policy adds, replaces the value of, or deletes a request query string parameter. It can be used to pass query parameters expected by the backend service that are optional or never present in the request.
++
+## Policy statement
+
+```xml
+<set-query-parameter name="param name" exists-action="override | skip | append | delete">
+ <value>value</value> <!--for multiple parameters with the same name add additional value elements-->
+</set-query-parameter>
+```
++
+## Attributes
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing parameter.<br />- `skip` - does not replace the existing query parameter value.<br />- `append` - appends the value to the existing query parameter value.<br />- `delete` - removes the query parameter from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
+|name|Specifies name of the query parameter to be set.|Yes|N/A|
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name, add additional `value` elements.|Yes|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Set value of query parameter
+
+```xml
+
+<set-query-parameter name="api-key" exists-action="skip">
+ <value>12345678901</value>
+</set-query-parameter>
+
+```
+
+### Set query parameter to forward context to the backend
+
+ This example shows how to apply policy at the API level to supply context information to the backend service.
+
+```xml
+<!-- Copy this snippet into the inbound element to forward a piece of context, product name in this example, to the backend service for logging or evaluation -->
+<set-query-parameter name="x-product-name" exists-action="override">
+ <value>@(context.Product.Name)</value>
+</set-query-parameter>
+```
+
+ For more information, see [Policy expressions](api-management-policy-expressions.md) and [Context variable](api-management-policy-expressions.md#ContextVariables).
+
+## Related policies
+
+- [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
+
+ Title: Azure API Management policy reference - set-status | Microsoft Docs
+description: Reference for the set-status policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set status code
+
+The `set-status` policy sets the HTTP status code to the specified value.
+++
+## Policy statement
+
+```xml
+<set-status code="HTTP status code" reason="description"/>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| | - | -- | - |
+| code | Integer. The HTTP status code to return. | Yes | N/A |
+| reason | String. A description of the reason for returning the status code. | Yes | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+This example shows how to return a 401 response if the authorization token is invalid. For more information, see [Using external services from the Azure API Management service](./api-management-sample-send-request.md).
+
+```xml
+<choose>
+ <when condition="@((bool)((IResponse)context.Variables["tokenstate"]).Body.As<JObject>()["active"] == false)">
+ <return-response response-variable-name="existing response variable">
+ <set-status code="401" reason="Unauthorized" />
+ <set-header name="WWW-Authenticate" exists-action="override">
+ <value>Bearer error="invalid_token"</value>
+ </set-header>
+ </return-response>
+ </when>
+</choose>
+```
+++
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
+
+ Title: Azure API Management policy reference - set-variable | Microsoft Docs
+description: Reference for the set-variable policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Set variable
+
+The `set-variable` policy declares a [context](api-management-policy-expressions.md#ContextVariables) variable and assigns it a value specified via an [expression](api-management-policy-expressions.md) or a string literal. If the expression contains a literal, it will be converted to a string, and the type of the value will be `System.String`.
++
+## Policy statement
+
+```xml
+<set-variable name="variable name" value="Expression | String literal" />
+```
+
+## Attributes
+
+| Attribute | Description | Required |
+| | | -- |
+| name | The name of the variable. | Yes |
+| value | The value of the variable. This can be an expression or a literal value. | Yes |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Allowed types
+
+Expressions used in the `set-variable` policy must return one of the following basic types.
+
+- System.Boolean
+- System.SByte
+- System.Byte
+- System.UInt16
+- System.UInt32
+- System.UInt64
+- System.Int16
+- System.Int32
+- System.Int64
+- System.Decimal
+- System.Single
+- System.Double
+- System.Guid
+- System.String
+- System.Char
+- System.DateTime
+- System.TimeSpan
+- System.Byte?
+- System.UInt16?
+- System.UInt32?
+- System.UInt64?
+- System.Int16?
+- System.Int32?
+- System.Int64?
+- System.Decimal?
+- System.Single?
+- System.Double?
+- System.Guid?
+- System.String?
+- System.Char?
+- System.DateTime?
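+
+For example, a minimal sketch that stores a typed (non-string) value, using the hypothetical variable name `requestTime`:
+
+```xml
+<!-- The expression returns a System.DateTime value, so the variable is stored with that type -->
+<set-variable name="requestTime" value="@(DateTime.UtcNow)" />
+```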
+
+## Example
+
+The following example demonstrates a `set-variable` policy in the inbound section. This `set-variable` policy creates an `IsMobile` Boolean [context](api-management-policy-expressions.md#ContextVariables) variable that is set to `true` if the `User-Agent` request header contains the text `iPad` or `iPhone`.
+
+```xml
+<set-variable name="IsMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPad") || context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
+```
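+
+The variable can then be read elsewhere in the pipeline through the `context.Variables` collection. A minimal sketch, assuming the policy above has already run (the `x-client-type` header name is hypothetical):
+
+```xml
+<choose>
+    <when condition="@((bool)context.Variables["IsMobile"])">
+        <!-- x-client-type is a hypothetical header name used only for illustration -->
+        <set-header name="x-client-type" exists-action="override">
+            <value>mobile</value>
+        </set-header>
+    </when>
+</choose>
+```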
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
+
+ Title: Azure API Management policy reference - trace | Microsoft Docs
+description: Reference for the trace policy available for use in Azure API Management. Provides policy usage, settings, and examples.
++++ Last updated : 12/08/2022+++
+# Trace
+
+The `trace` policy adds a custom trace into the request tracing output in the test console, Application Insights telemetries, and/or resource logs.
+
+- The policy adds a custom trace to the [request tracing](./api-management-howto-api-inspector.md) output in the test console when tracing is triggered, that is, when the `Ocp-Apim-Trace` request header is present and set to `true`, and the `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.
+- The policy creates a [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry in Application Insights, when [Application Insights integration](./api-management-howto-app-insights.md) is enabled and the `severity` specified in the policy is equal to or greater than the `verbosity` specified in the diagnostic setting.
+- The policy adds a property in the log entry when [resource logs](./api-management-howto-use-azure-monitor.md#resource-logs) are enabled and the severity level specified in the policy is at or higher than the verbosity level specified in the diagnostic setting.
+- The policy is not affected by Application Insights sampling. All invocations of the policy will be logged.
++
+## Policy statement
+
+```xml
+<trace source="arbitrary string literal" severity="verbose | information | error">
+ <message>String literal or expressions</message>
+ <metadata name="string literal or expressions" value="string literal or expressions"/>
+</trace>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| | - | -- | - |
+| source | String literal meaningful to the trace viewer and specifying the source of the message. | Yes | N/A |
+| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). | No | `verbose` |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| message | A string or expression to be logged. | Yes |
+| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-trace-telemetry.md) telemetry. | No |
+
+### metadata attributes
+
+| Attribute | Description | Required | Default |
+| | - | -- | - |
+| name | Name of the property. | Yes | N/A |
+| value | Value of the property. | Yes | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<trace source="PetStore API" severity="verbose">
+ <message>@((string)context.Variables["clientConnectionID"])</message>
+ <metadata name="Operation Name" value="New-Order"/>
+</trace>
+```
+
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Transform Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/transform-api.md
In this tutorial, you'll learn about configuring common [policies](api-managemen
This tutorial also explains how to add protection to your backend API by configuring a rate limit policy, so that the API isn't overused by developers. For more policy options, see [API Management policies](api-management-policies.md). > [!NOTE]
-> By default, API Management configures a global [`forward-request`](api-management-advanced-policies.md#ForwardRequest) policy. The `forward-request` policy is needed for the gateway to complete a request to a backend service.
+> By default, API Management configures a global [`forward-request`](forward-request-policy.md) policy. The `forward-request` policy is needed for the gateway to complete a request to a backend service.
In this tutorial, you learn how to:
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
+
+ Title: Azure API Management policy reference - validate-azure-ad-token | Microsoft Docs
+description: Reference for the validate-azure-ad-token policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+
+documentationcenter: ''
++++ Last updated : 12/08/2022+++
+# Validate Azure Active Directory token
+
+The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Azure Active Directory service. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
+
+> [!NOTE]
+> To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.
++++
+## Policy statement
+
+```xml
+<validate-azure-ad-token
+    tenant-id="tenant ID or URL (for example, contoso.onmicrosoft.com) of the Azure Active Directory service"
+    header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
+    query-parameter-name="name of query parameter used to pass the token (alternatively, use header-name or token-value attribute to specify token)"
+    token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
+ failed-validation-httpcode="HTTP status code to return on failure"
+ failed-validation-error-message="error message to return on failure"
+ output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
+ <client-application-ids>
+ <application-id>Client application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple client application IDs, then add additional application-id elements -->
+ </client-application-ids>
+ <backend-application-ids>
+ <application-id>Backend application ID from Azure Active Directory</application-id>
+ <!-- If there are multiple backend application IDs, then add additional application-id elements -->
+ </backend-application-ids>
+ <audiences>
+ <audience>audience string</audience>
+ <!-- if there are multiple possible audiences, then add additional audience elements -->
+ </audiences>
+ <required-claims>
+ <claim name="name of the claim as it appears in the token" match="all|any" separator="separator character in a multi-valued claim">
+ <value>claim value as it is expected to appear in the token</value>
+ <!-- if there is more than one allowed value, then add additional value elements -->
+ </claim>
+ <!-- if there are multiple possible allowed values, then add additional value elements -->
+ </required-claims>
+</validate-azure-ad-token>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | | -- | |
+| tenant-id | Tenant ID or URL of the Azure Active Directory service. | Yes | N/A |
+| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+++
+## Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| audiences | Contains a list of acceptable audience claims that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
+| backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. | No |
+| client-application-ids | Contains a list of acceptable client application IDs. If multiple application-id elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one application-id must be specified. | Yes |
+| required-claims | Contains a list of `claim` elements for claim values expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. | No |
+
+### claim attributes
+
+| Attribute | Description | Required | Default |
+| - | | -- | |
+| name | Name of the claim as it is expected to appear in the token. | Yes | N/A |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* This policy can only be used with an Azure Active Directory tenant in the global Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
+* Currently, this policy can only validate "v1" tokens from Azure Active Directory. Support for "v2" tokens will be added in a future release.
+* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
+
+## Examples
+
+### Simple token validation
+
+The following policy is the minimal form of the `validate-azure-ad-token` policy. It expects the JWT to be provided in the `Authorization` header using the `Bearer` scheme. In this example, the Azure AD tenant ID and client application ID are provided using named values.
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+</validate-azure-ad-token>
+```
+
+### Validate that audience and claim are correct
+
+The following policy checks that the audience is the hostname of the API Management instance and that the `ctry` claim is `US`. The hostname is provided using a policy expression, and the Azure AD tenant ID and client application ID are provided using named values. The decoded JWT is provided in the `jwt` variable after validation.
+
+For more details on optional claims, read [Provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md).
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" output-token-variable-name="jwt">
+ <client-application-ids>
+ <application-id>{{aad-client-application-id}}</application-id>
+ </client-application-ids>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience>
+ </audiences>
+ <required-claims>
+ <claim name="ctry" match="any">
+ <value>US</value>
+ </claim>
+ </required-claims>
+</validate-azure-ad-token>
+```
+
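+### Validate a token passed in a query parameter
+
+The policy can also read the token from a query parameter instead of the `Authorization` header, using the `query-parameter-name` attribute described above. The following is a minimal sketch; the query parameter name `access_token` is only an illustration, and the tenant ID and client application ID are again provided using named values.
+
+```xml
+<validate-azure-ad-token tenant-id="{{aad-tenant-id}}" query-parameter-name="access_token">
+    <client-application-ids>
+        <application-id>{{aad-client-application-id}}</application-id>
+    </client-application-ids>
+</validate-azure-ad-token>
+```
+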
+## Related policies
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
++
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
+
+ Title: Azure API Management policy reference - validate-client-certificate | Microsoft Docs
+description: Reference for the validate-client-certificate policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Validate client certificate
+
+Use the `validate-client-certificate` policy to enforce that a certificate presented by a client to an API Management instance matches specified validation rules and claims such as subject or issuer for one or more certificate identities.
+
+To be considered valid, a client certificate must match all the validation rules defined by the attributes at the top-level element and match all defined claims for at least one of the defined identities.
+
+Use this policy to check incoming certificate properties against desired properties. Also use this policy to override default validation of client certificates in these cases:
+
+* If you have uploaded custom CA certificates to validate client requests to the managed gateway
+* If you configured custom certificate authorities to validate client requests to a self-managed gateway
+
+For more information about custom CA certificates and certificate authorities, see [How to add a custom CA certificate in Azure API Management](api-management-howto-ca-certificates.md).
+
+
+## Policy statement
+
+```xml
+<validate-client-certificate
+ validate-revocation="true | false"
+ validate-trust="true | false"
+ validate-not-before="true | false"
+ validate-not-after="true | false"
+ ignore-error="true | false">
+ <identities>
+ <identity
+ thumbprint="certificate thumbprint"
+ serial-number="certificate serial number"
+ common-name="certificate common name"
+ subject="certificate subject string"
+ dns-name="certificate DNS name"
+ issuer-subject="certificate issuer"
+ issuer-thumbprint="certificate issuer thumbprint"
+ issuer-certificate-id="certificate identifier" />
+ </identities>
+</validate-client-certificate>
+```
+
+## Attributes
+
+| Name | Description | Required | Default |
+| - | --| -- | -- |
+| validate-revocation | Boolean. Specifies whether the certificate is validated against the online revocation list. | No | `true` |
+| validate-trust | Boolean. Specifies whether validation should fail if the chain can't be successfully built up to a trusted CA. | No | `true` |
+| validate-not-before | Boolean. Validates the value against the current time. | No | `true` |
+| validate-not-after | Boolean. Validates the value against the current time. | No | `true`|
+| ignore-error | Boolean. Specifies whether the policy should proceed to the next handler or jump to on-error upon failed validation. | No | `false` |
+| identity | String. Combination of certificate claim values that make the certificate valid. | Yes | N/A |
+
+## Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| identities | Add this element to specify one or more `identity` elements with defined claims on the client certificate. | No |
+
+## identity attributes
+
+| Name | Description | Required | Default |
+| - | --| -- | -- |
+| thumbprint | Certificate thumbprint. | No | N/A |
+| serial-number | Certificate serial number. | No | N/A |
+| common-name | Certificate common name (part of Subject string). | No | N/A |
+| subject | Subject string. Must follow format of Distinguished Name. | No | N/A |
+| dns-name | Value of dnsName entry inside Subject Alternative Name claim. | No | N/A |
+| issuer-subject | Issuer's subject. Must follow format of Distinguished Name. | No | N/A |
+| issuer-thumbprint | Issuer thumbprint. | No | N/A |
+| issuer-certificate-id | Identifier of existing certificate entity representing the issuer's public key. Mutually exclusive with other issuer attributes. | No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+The following example validates a client certificate to match the policy's default validation rules and checks whether the subject and issuer name match specified values.
+
+```xml
+<validate-client-certificate
+ validate-revocation="true"
+ validate-trust="true"
+ validate-not-before="true"
+ validate-not-after="true"
+ ignore-error="false">
+ <identities>
+ <identity
+ subject="C=US, ST=Illinois, L=Chicago, O=Contoso Corp., CN=*.contoso.com"
+ issuer-subject="C=BE, O=FabrikamSign nv-sa, OU=Root CA, CN=FabrikamSign Root CA" />
+ </identities>
+</validate-client-certificate>
+```
+
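+Another minimal sketch relies on the policy's default validation rules and accepts the client certificate only if its thumbprint matches a specific value. The thumbprint shown here is a placeholder, not a real certificate.
+
+```xml
+<validate-client-certificate>
+    <identities>
+        <identity thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
+    </identities>
+</validate-client-certificate>
+```
+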
+## Related policies
+
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
+
+ Title: Azure API Management policy reference - validate-content | Microsoft Docs
+description: Reference for the validate-content policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/05/2022+++
+# Validate content
+The `validate-content` policy validates the size or content of a request or response body against one or more [supported schemas](#schemas-for-content-validation).
+
+The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
+
+| Format | Content types |
+|||
+|JSON | Examples: `application/json`<br/>`application/hal+json` |
+|XML | Example: `application/xml` |
+|SOAP | Allowed values: `application/soap+xml` for SOAP 1.2 APIs<br/>`text/xml` for SOAP 1.1 APIs|
++
+## What content is validated
+
+The policy validates the following content in the request or response against the schema:
+
+* Presence of all required properties.
+* Presence or absence of additional properties, if the schema has the `additionalProperties` field set. May be overridden with the `allow-additional-properties` attribute.
+* Types of all properties. For example, if a schema specifies a property as an integer, the request (or response) must include an integer and not another type, such as a string.
+* The format of the properties, if specified in the schema - for example, regex (if the `pattern` keyword is specified), `minimum` for integers, and so on.
+
+> [!TIP]
+> For examples of regex pattern constraints that can be used in schemas, see [OWASP Validation Regex Repository](https://owasp.org/www-community/OWASP_Validation_Regex_Repository).
++
+## Policy statement
+
+```xml
+<validate-content unspecified-content-type-action="ignore | prevent | detect" max-size="size in bytes" size-exceeded-action="ignore | prevent | detect" errors-variable-name="variable name">
+ <content-type-map any-content-type-value="content type string" missing-content-type-value="content type string">
+ <type from | when="content type string" to="content type string" />
+ </content-type-map>
+ <content type="content type string" validate-as="json | xml | soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore | prevent | detect" allow-additional-properties="true | false" />
+</validate-content>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
+| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| content-type-map | Add this element to map the content type of the incoming request or response to another content type that is used to trigger validation. | No |
+| content | Add one or more of these elements to validate the content type in the request or response, or the mapped content type, and perform the specified [action](#actions). | No |
+
+### content-type-map attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. | No | N/A |
+| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. | No | N/A |
+
+### content-type-map elements
+
+|Name|Description|Required|
+|-|--|--|
+| type | Add one or more of these elements to map an incoming content type to a content type used for validation of the body of a request or response. Use `from` to specify a known incoming content type, or use `when` with a policy expression to specify any incoming content type that matches a condition. Overrides the mapping in `any-content-type-value` and `missing-content-type-value`, if specified. | No |
++
+### content attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| type | Content type to execute body validation for, checked against the content type header or the value mapped in `content-type-mapping`, if specified. If empty, it applies to every content type specified in the API schema.<br/><br/>To validate SOAP requests and responses (`validate-as` attribute set to "soap"), set `type` to `application/soap+xml` for SOAP 1.2 APIs or `text/xml` for SOAP 1.1 APIs. | No | N/A |
+| validate-as | Validation engine to use for validation of the body of a request or response with a matching `type`. Supported values: "json", "xml", "soap".<br/><br/>When "soap" is specified, the XML from the request or response is extracted from the SOAP envelope and validated against an XML schema. | Yes | N/A |
+| schema-id | Name of an existing schema that was [added](#schemas-for-content-validation) to the API Management instance for content validation. If not specified, the default schema from the API definition is used. | No | N/A |
+| schema-ref| For a JSON schema specified in `schema-id`, optional reference to a valid local reference path in the JSON document. Example: `#/components/schemas/address`. The attribute should return a JSON object that API Management handles as a valid JSON schema.<br/><br/> For an XML schema, `schema-ref` isn't supported, and any top-level schema element can be used as the root of the XML request or response payload. The validation checks that all elements starting from the XML request or response payload root adhere to the provided XML schema. | No | N/A |
+| allow-additional-properties | Boolean. For a JSON schema, specifies whether to implement a runtime override of the `additionalProperties` value configured in the schema: <br> - `true`: allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to not allow additional properties. <br> - `false`: do not allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to allow additional properties.<br/><br/>If the attribute isn't specified, the policy validates additional properties according to configuration of the `additionalProperties` field in the schema. | No | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
++
+## Schemas for content validation
+
+By default, validation of request or response content uses JSON or XML schemas from the API definition. These schemas can be specified manually or generated automatically when importing an API from an OpenAPI or WSDL specification into API Management.
+
+Using the `validate-content` policy, you may optionally validate against one or more JSON or XML schemas that you've added to your API Management instance and that aren't part of the API definition. A schema that you add to API Management can be reused across many APIs.
+
+To add a schema to your API Management instance using the Azure portal:
+
+1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the **APIs** section of the left-hand menu, select **Schemas** > **+ Add**.
+1. In the **Create schema** window, do the following:
+ 1. Enter a **Name** (ID) for the schema.
+ 1. In **Schema type**, select **JSON** or **XML**.
+ 1. Enter a **Description**.
+ 1. In **Create method**, do one of the following:
+ * Select **Create new** and enter or paste the schema.
+ * Select **Import from file** or **Import from URL** and enter a schema location.
+ > [!NOTE]
+ > To import a schema from URL, the schema needs to be accessible over the internet from the browser.
+ 1. Select **Save**.
++
+ :::image type="content" source="media/validation-policies/add-schema.png" alt-text="Create schema":::
+
+API Management adds the schema resource at the relative URI `/schemas/<schemaId>`, and the schema appears in the list on the **Schemas** page. Select a schema to view its properties or to edit in a schema editor.
+
+> [!NOTE]
+> A schema may cross-reference another schema that is added to the API Management instance. For example, include an XML schema added to API Management by using an element similar to:<br/><br/>`<xs:include schemaLocation="/schemas/myschema" />`
++
+> [!TIP]
+> Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import).
+
+## Examples
+
+### JSON schema validation
+
+In the following example, API Management interprets requests with an empty content type header or requests with a content type header `application/hal+json` as requests with the content type `application/json`. Then, API Management performs the validation in the detection mode against a schema defined for the `application/json` content type in the API definition. Messages with payloads larger than 100 KB are blocked. Requests containing additional properties are blocked, even if the schema's `additionalProperties` field is configured to allow additional properties.
+
+```xml
+<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
+ <content-type-map missing-content-type-value="application/json">
+ <type from="application/hal+json" to="application/json" />
+ </content-type-map>
+ <content type="application/json" validate-as="json" action="detect" allow-additional-properties="false" />
+</validate-content>
+```
+
+### SOAP schema validation
+
+In the following example, API Management interprets any request as a request with the content type `application/soap+xml` (the content type that's used by SOAP 1.2 APIs), regardless of the incoming content type. The request could arrive with an empty content type header, content type header of `text/xml` (used by SOAP 1.1 APIs), or another content type header. Then, API Management extracts the XML payload from the SOAP envelope and performs the validation in prevention mode against the schema named "myschema". Messages with payloads larger than 100 KB are blocked.
+
+```xml
+<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
+ <content-type-map any-content-type-value="application/soap+xml" />
+ <content type="application/soap+xml" validate-as="soap" schema-id="myschema" action="prevent" />
+</validate-content>
+```
++
+## Related policies
+
+* [API Management validation policies](validation-policies.md)
+
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
+
+ Title: Azure API Management policy reference - validate-graphql-request | Microsoft Docs
+description: Reference for the validate-graphql-request policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Validate GraphQL request
+
+The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
++
+## Policy statement
+
+```xml
+<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
+ <authorize>
+ <rule path="query path, for example: '/listUsers' or '/__*'" action="string or policy expression that evaluates to 'allow | remove | reject | ignore'" />
+ </authorize>
+</validate-graphql-request>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| max-size | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
+| max-depth | An integer. Maximum query depth. | No | 6 |
++
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| authorize | Add this element to set an appropriate authorization rule for one or more paths. | No |
+| rule | Add one or more of these elements to authorize specific query paths. Each rule can optionally specify a different [action](#request-actions). May be specified conditionally using a policy expression. | No |
++
+### rule attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| path | Path to execute authorization validation on. It must follow the pattern: `/type/field`. | Yes | N/A |
+| action | [Action](#request-actions) to perform if the rule applies. May be specified conditionally using a policy expression. | No | allow |
+
+### Introspection system
+
+The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
+
+### Request actions
+
+Available actions are described in the following table.
+
+|Action |Description |
+|||
+|reject | A request error happens, and the request is not sent to the backend. Additional rules if configured are not applied. |
+|remove | A field error happens, and the field is removed from the request. |
+|allow | The field is passed to the backend. |
+|ignore | The rule is not valid for this case and the next rule is applied. |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+Because GraphQL queries use a flattened schema, permissions may be applied at any leaf node of an output type:
+
+* Mutation, query, or subscription
+* Individual field in a type declaration
+
+Permissions may not be applied to:
+
+* Input types
+* Fragments
+* Unions
+* Interfaces
+* The schema element
+
+## Error handling
+
+Failure to validate against the GraphQL schema, or a failure for the request's size or depth, is a request error and results in the request being failed with an errors block (but no data block).
+
+Similar to the [`Context.LastError`](api-management-error-handling-policies.md#lasterror) property, all GraphQL validation errors are automatically propagated in the `GraphQLErrors` variable. If the errors need to be propagated separately, you can specify an error variable name. Errors are pushed onto the `error` variable and the `GraphQLErrors` variable.
+
+## Examples
+
+### Query validation
+
+This example applies the following validation and authorization rules to a GraphQL query:
+* Requests larger than 100 KB or with query depth greater than 4 are rejected.
+* Requests to the introspection system are rejected.
+* The `/Missions/name` field is removed from requests containing more than two headers.
+
+```xml
+<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
+ <authorize>
+ <rule path="/__*" action="reject" />
+ <rule path="/Missions/name" action="@(context.Request.Headers.Count > 2 ? "remove" : "allow")" />
+ </authorize>
+</validate-graphql-request>
+```
+
+### Mutation validation
+
+This example applies the following validation and authorization rules to a GraphQL mutation:
+* Requests larger than 100 KB or with query depth greater than 4 are rejected.
+* Requests to mutate the `deleteUser` field are rejected unless the request comes from IP address `198.51.100.1`.
+
+```xml
+<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
+ <authorize>
+ <rule path="/Mutation/deleteUser" action="@(context.Request.IpAddress <> "198.51.100.1" ? "deny" : "allow")" />
+ </authorize>
+</validate-graphql-request>
+```
+
+## Related policies
+
+* [API Management policies for GraphQL APIs](graphql-policies.md)
+
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
+
+ Title: Azure API Management policy reference - validate-headers | Microsoft Docs
+description: Reference for the validate-headers policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/05/2022+++
+# Validate headers
+
+The `validate-headers` policy validates the response headers against the API schema.
+
+> [!IMPORTANT]
+> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-headers` policy might not work. You may need to reimport your API using management API version `2021-01-01-preview` or later.
+++
+## Policy statement
+
+```xml
+<validate-headers specified-header-action="ignore | prevent | detect" unspecified-header-action="ignore | prevent | detect" errors-variable-name="variable name">
+ <header name="header name" action="ignore | prevent | detect" />
+</validate-headers>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
+| unspecified-header-action | [Action](#actions) to perform for response headers that aren't specified in the API schema. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| header | Add one or more elements for named headers to override the default validation [actions](#actions) for headers in responses. | No |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+++
+## Example
+
+```xml
+<validate-headers specified-header-action="ignore" unspecified-header-action="prevent" errors-variable-name="responseHeadersValidation" />
+```
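+
+The `header` element described above can override the action for individual response headers. The following is a minimal sketch: unspecified response headers are blocked, but a hypothetical custom header named `x-legacy-header` is explicitly ignored.
+
+```xml
+<validate-headers specified-header-action="ignore" unspecified-header-action="prevent" errors-variable-name="responseHeadersValidation">
+    <header name="x-legacy-header" action="ignore" />
+</validate-headers>
+```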
++
+## Related policies
+
+* [API Management validation policies](validation-policies.md)
+
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
+
+ Title: Azure API Management policy reference - validate-jwt | Microsoft Docs
+description: Reference for the validate-jwt policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+
+documentationcenter: ''
++++ Last updated : 12/08/2022+++
+# Validate JWT
+
+The `validate-jwt` policy enforces existence and validity of a supported JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
+
+> [!NOTE]
+> To validate a JWT that was provided by the Azure Active Directory service, API Management also provides the [`validate-azure-ad-token`](validate-azure-ad-token-policy.md) policy.
+++
+## Policy statement
+
+```xml
+<validate-jwt
+ header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
+ query-parameter-name="name of query parameter used to pass the token (alternatively, use header-name or token-value attribute to specify token)"
+ token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
+ failed-validation-httpcode="HTTP status code to return on failure"
+ failed-validation-error-message="error message to return on failure"
+ require-expiration-time="true | false"
+ require-scheme="scheme"
+ require-signed-tokens="true | false"
+ clock-skew="allowed clock skew in seconds"
+ output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
+ <openid-config url="full URL of the configuration endpoint, for example, https://login.contoso.com/openid-configuration" />
+ <issuer-signing-keys>
+ <key>Base64 encoded signing key | certificate-id="mycertificate" | n="modulus" e="exponent"</key>
+ <!-- if there are multiple keys, then add additional key elements -->
+ </issuer-signing-keys>
+ <decryption-keys>
+ <key>Base64 encoded signing key | certificate-id="mycertificate" | n="modulus" e="exponent" </key>
+ <!-- if there are multiple keys, then add additional key elements -->
+ </decryption-keys>
+ <audiences>
+ <audience>audience string</audience>
+ <!-- if there are multiple possible audiences, then add additional audience elements -->
+ </audiences>
+ <issuers>
+ <issuer>issuer string</issuer>
+ <!-- if there are multiple possible issuers, then add additional issuer elements -->
+ </issuers>
+ <required-claims>
+ <claim name="name of the claim as it appears in the token" match="all | any" separator="separator character in a multi-valued claim">
+ <value>claim value as it is expected to appear in the token</value>
+ <!-- if there is more than one allowed value, then add additional value elements -->
+ </claim>
+ <!-- if there are multiple possible allowed claim, then add additional claim elements -->
+ </required-claims>
+</validate-jwt>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| - | | -- | |
+| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
+| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
+| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
+| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
+| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+++
+## Elements
+
+| Element | Description | Required |
+| - | -- | -- |
+| openid-config |Add one or more of these elements to specify a compliant OpenID configuration endpoint URL from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. <br/><br/>The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Azure Active Directory use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/> - (v2 multitenant) ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | No |
+| issuer-signing-keys | A list of Base64-encoded security keys, in [`key`](#key-attributes) subelements, used to validate signed tokens. If multiple security keys are present, then each key is tried until either all are exhausted (in which case validation fails) or one succeeds (useful for token rollover). <br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To validate an RS256 signed token, optionally specify the public key using a `certificate-id` attribute whose value is the identifier of a certificate uploaded to API Management, or the RSA modulus `n` and exponent `e` pair of the RS256 signing key, in Base64url-encoded format. | No |
+| decryption-keys | A list of Base64-encoded keys, in [`key`](#key-attributes) subelements, used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To decrypt an RS256 signed token, optionally specify the public key using a `certificate-id` attribute whose value is the identifier of a certificate uploaded to API Management. | No |
+| audiences | A list of acceptable audience claims, in `audience` subelements, that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
+| issuers | A list of acceptable principals, in `issuer` subelements, that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No |
+| required-claims | A list of claims, in [`claim`](#claim-attributes) subelements, expected to be present on the token for it to be considered valid. When multiple claims are present, the token must match claim values according to the value of the `match` attribute. | No |
+
+### key attributes
+| Attribute | Description | Required | Default |
+| - | | -- | |
+| id | String. Identifier used to match `kid` claim presented in JWT. | No | N/A |
+| certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify an RS256 signed token. | No | N/A |
+| n | Modulus of the public key used to verify the issuer of an RS256 signed token. Must be specified with the value of the exponent `e`.| No | N/A|
+| e | Exponent of the public key used to verify the issuer of an RS256 signed token. Must be specified with the value of the modulus `n`. | No | N/A|
+++
+### claim attributes
+| Attribute | Description | Required | Default |
+| - | | -- | |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
+* The policy supports HS256 and RS256 signing algorithms:
+ * **HS256** - the key must be provided inline within the policy in the Base64-encoded form.
+ * **RS256** - the key may be provided either via an OpenID configuration endpoint, or by providing the ID of an uploaded certificate (in PFX format) that contains the public key, or the modulus-exponent pair of the public key.
+* The policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512.
+* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
++
+## Examples
+
+### Simple token validation
+
+```xml
+<validate-jwt header-name="Authorization" require-scheme="Bearer">
+ <issuer-signing-keys>
+ <key>{{jwt-signing-key}}</key> <!-- signing key specified as a named value -->
+ </issuer-signing-keys>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
+ </audiences>
+ <issuers>
+ <issuer>http://contoso.com/</issuer>
+ </issuers>
+</validate-jwt>
+```
+
+### Token validation with RSA certificate
+
+```xml
+<validate-jwt header-name="Authorization" require-scheme="Bearer">
+ <issuer-signing-keys>
+ <key certificate-id="my-rsa-cert" /> <!-- signing key specified as certificate ID, enclosed in double-quotes -->
+ </issuer-signing-keys>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience> <!-- audience is set to API Management host name -->
+ </audiences>
+ <issuers>
+ <issuer>http://contoso.com/</issuer>
+ </issuers>
+</validate-jwt>
+```
+
+### Azure Active Directory token validation
+
+```xml
+<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
+ <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration" />
+ <audiences>
+ <audience>25eef6e4-c905-4a07-8eb4-0d08d5df8b3f</audience>
+ </audiences>
+ <required-claims>
+ <claim name="id" match="all">
+ <value>insert claim here</value>
+ </claim>
+ </required-claims>
+</validate-jwt>
+```
+
+### Azure Active Directory B2C token validation
+
+```xml
+<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
+ <openid-config url="https://login.microsoftonline.com/tfp/contoso.onmicrosoft.com/b2c_1_signin/v2.0/.well-known/openid-configuration" />
+ <audiences>
+ <audience>d313c4e4-de5f-4197-9470-e509a2f0b806</audience>
+ </audiences>
+ <required-claims>
+ <claim name="id" match="all">
+ <value>insert claim here</value>
+ </claim>
+ </required-claims>
+</validate-jwt>
+```
+
+### Authorize access to operations based on token claims
+
+This example shows how to use the `validate-jwt` policy to authorize access to operations based on token claims value.
+
+```xml
+<validate-jwt header-name="Authorization" require-scheme="Bearer" output-token-variable-name="jwt">
+ <issuer-signing-keys>
+ <key>{{jwt-signing-key}}</key> <!-- signing key is stored in a named value -->
+ </issuer-signing-keys>
+ <audiences>
+ <audience>@(context.Request.OriginalUrl.Host)</audience>
+ </audiences>
+ <issuers>
+ <issuer>contoso.com</issuer>
+ </issuers>
+ <required-claims>
+ <claim name="group" match="any">
+ <value>finance</value>
+ <value>logistics</value>
+ </claim>
+ </required-claims>
+</validate-jwt>
+<choose>
+ <when condition="@(context.Request.Method == "POST" && !((Jwt)context.Variables["jwt"]).Claims["group"].Contains("finance"))">
+ <return-response>
+ <set-status code="403" reason="Forbidden" />
+ </return-response>
+ </when>
+</choose>
+```
+
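+### Validate an encrypted token
+
+The `decryption-keys` element described above supplies keys for tokens encrypted with a symmetric key (see the supported algorithms in the usage notes). The following is a minimal sketch, assuming the Base64-encoded decryption key and signing key are stored in hypothetical named values.
+
+```xml
+<validate-jwt header-name="Authorization" require-scheme="Bearer">
+    <issuer-signing-keys>
+        <key>{{jwt-signing-key}}</key> <!-- signing key specified as a named value -->
+    </issuer-signing-keys>
+    <decryption-keys>
+        <key>{{jwt-decryption-key}}</key> <!-- decryption key specified as a named value -->
+    </decryption-keys>
+</validate-jwt>
+```
+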
+## Related policies
+* [API Management access restriction policies](api-management-access-restriction-policies.md)
+++
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
+
+ Title: Azure API Management policy reference - validate-parameters | Microsoft Docs
+description: Reference for the validate-parameters policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/05/2022+++
+# Validate parameters
+
+The `validate-parameters` policy validates the header, query, or path parameters in requests against the API schema.
+
+> [!IMPORTANT]
+> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-parameters` policy might not work. You may need to [reimport your API](/rest/api/apimanagement/current-ga/apis/create-or-update) using management API version `2021-01-01-preview` or later.
+++
+## Policy statement
+
+```xml
+<validate-parameters specified-parameter-action="ignore | prevent | detect" unspecified-parameter-action="ignore | prevent | detect" errors-variable-name="variable name">
+ <headers specified-parameter-action="ignore | prevent | detect" unspecified-parameter-action="ignore | prevent | detect">
+ <parameter name="parameter name" action="ignore | prevent | detect" />
+ </headers>
+ <query specified-parameter-action="ignore | prevent | detect" unspecified-parameter-action="ignore | prevent | detect">
+ <parameter name="parameter name" action="ignore | prevent | detect" />
+ </query>
+ <path specified-parameter-action="ignore | prevent | detect">
+ <parameter name="parameter name" action="ignore | prevent | detect" />
+ </path>
+</validate-parameters>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
+| unspecified-parameter-action | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| name | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A |
+| action | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration.| Yes | N/A |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| headers | Add this element to override default validation [actions](#actions) for header parameters in requests. | No |
+| query | Add this element to override default validation [actions](#actions) for query parameters in requests. | No |
+| path | Add this element to override default validation [actions](#actions) for URL path parameters in requests. | No |
+| parameter | Add one or more elements for named parameters to override higher-level configuration of the validation [actions](#actions). | No |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+++
+## Example
+
+In this example, all query and path parameters are validated in the prevention mode and headers in the detection mode. Validation is overridden for several header parameters:
+
+```xml
+<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation">
+ <headers specified-parameter-action="detect" unspecified-parameter-action="detect">
+ <parameter name="Authorization" action="prevent" />
+ <parameter name="User-Agent" action="ignore" />
+ <parameter name="Host" action="ignore" />
+ <parameter name="Referrer" action="ignore" />
+ </headers>
+</validate-parameters>
+```
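+
+As another minimal sketch, the following configuration validates specified query parameters in prevention mode, ignores unspecified query parameters, and downgrades a single hypothetical query parameter named `trace-id` to detection mode:
+
+```xml
+<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation">
+    <query specified-parameter-action="prevent" unspecified-parameter-action="ignore">
+        <parameter name="trace-id" action="detect" />
+    </query>
+</validate-parameters>
+```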
++
+## Related policies
+
+* [API Management validation policies](validation-policies.md)
+
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
+
+ Title: Azure API Management policy reference - validate-status-code | Microsoft Docs
+description: Reference for the validate-status-code policy available for use in Azure API Management. Provides policy usage, settings, and examples.
++++ Last updated : 12/05/2022+++
+# Validate status code
+
+The `validate-status-code` policy validates the HTTP status codes in responses against the API schema. This policy may be used to prevent leakage of backend errors, which can contain stack traces.
+++
+## Policy statement
+
+```xml
+<validate-status-code unspecified-status-code-action="ignore | prevent | detect" errors-variable-name="variable name">
+ <status-code code="HTTP status code number" action="ignore | prevent | detect" />
+</validate-status-code>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| status-code | Add one or more elements for HTTP status codes to override the default validation [action](#actions) for status codes in responses. | No |
+
+### status-code attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| code | HTTP status code to override validation action for. | Yes | N/A |
+| action | [Action](#actions) to perform for the matching status code, which isn't specified in the API schema. If the status code is specified in the API schema, this override doesn't take effect. | Yes | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
++
+## Example
+
+```xml
+<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation" />
+```
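+
+The `status-code` element described above can adjust the action for individual status codes that aren't in the API schema. In the following sketch, unspecified status codes are blocked, except that an unspecified `404` response is only logged:
+
+```xml
+<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation">
+    <status-code code="404" action="detect" />
+</validate-status-code>
+```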
++
+## Related policies
+
+* [API Management validation policies](validation-policies.md)
+
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validation-policies.md
- Title: Azure API Management validation policies | Microsoft Docs
-description: Reference for Azure API Management policies to validate API requests and responses. Provides policy usage, settings, and examples.
---- Previously updated : 09/09/2022---
-# API Management policies to validate requests and responses
-
-This article provides a reference for API Management policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data. Learn more about common [API vulnerabilites](mitigate-owasp-api-threats.md).
-
-While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that arenΓÇÖt covered by security products that rely on static, predefined rules.
--
-## Validation policies
--- [Validate content](#validate-content) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.-- [Validate parameters](#validate-parameters) - Validates the request header, query, or path parameters against the API schema.-- [Validate headers](#validate-headers) - Validates the response headers against the API schema.-- [Validate status code](#validate-status-code) - Validates the HTTP status codes in responses against the API schema.-
-> [!NOTE]
-> The maximum size of the API schema that can be used by a validation policy is 4 MB. If the schema exceeds this limit, validation policies will return errors on runtime. To increase it, please contact [support](https://azure.microsoft.com/support/options/).
-
-## Actions
-
-Each validation policy includes an attribute that specifies an action, which API Management takes when validating an entity in an API request or response against the API schema.
-
-* An action may be specified for elements that are represented in the API schema and, depending on the policy, for elements that aren't represented in the API schema.
-
-* An action specified in a policy's child element overrides an action specified for its parent.
-
-Available actions:
-
-| Action | Description |
-| | |
-| `ignore` | Skip validation. |
-| `prevent` | Block the request or response processing, log the verbose [validation error](#validation-errors), and return an error. Processing is interrupted when the first set of errors is detected.
-| `detect` | Log [validation errors](#validation-errors), without interrupting request or response processing. |
-
-## Logs
-
-Details about the validation errors during policy execution are logged to the variable in `context.Variables` specified in the `errors-variable-name` attribute in the policy's root element. When configured in a `prevent` action, a validation error blocks further request or response processing and is also propagated to the `context.LastError` property.
-
-To investigate errors, use a [trace](api-management-advanced-policies.md#Trace) policy to log the errors from context variables to [Application Insights](api-management-howto-app-insights.md).
-
-## Performance implications
-
-Adding validation policies may affect API throughput. The following general principles apply:
-* The larger the API schema size, the lower the throughput will be.
-* The larger the payload in a request or response, the lower the throughput will be.
-* The size of the API schema has a larger impact on performance than the size of the payload.
-* Validation against an API schema that is several megabytes in size may cause request or response timeouts under some conditions. The effect is more pronounced in the **Consumption** and **Developer** tiers of the service.
-
-We recommend performing load tests with your expected production workloads to assess the impact of validation policies on API throughput.
-
-## Validate content
-
-The `validate-content` policy validates the size or content of a request or response body against one or more [supported schemas](#schemas-for-content-validation).
--
-The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
-
-| Format | Content types |
-|||
-|JSON | Examples: `application/json`<br/>`application/hal+json` |
-|XML | Example: `application/xml` |
-|SOAP | Allowed values: `application/soap+xml` for SOAP 1.2 APIs<br/>`text/xml` for SOAP 1.1 APIs|
-
-### What content is validated
-
-The policy validates the following content in the request or response against the schema:
-
-* Presence of all required properties.
-* Presence or absence of additional properties, if the schema has the `additionalProperties` field set. This behavior may be overridden with the `allow-additional-properties` attribute.
-* Types of all properties. For example, if a schema specifies a property as an integer, the request (or response) must include an integer and not another type, such as a string.
-* The format of the properties, if specified in the schema - for example, regex (if the `pattern` keyword is specified), `minimum` for integers, and so on.
-
-> [!TIP]
-> For examples of regex pattern constraints that can be used in schemas, see [OWASP Validation Regex Repository](https://owasp.org/www-community/OWASP_Validation_Regex_Repository).
-
-### Policy statement
-
-```xml
-<validate-content unspecified-content-type-action="ignore|prevent|detect" max-size="size in bytes" size-exceeded-action="ignore|prevent|detect" errors-variable-name="variable name">
- <content-type-map any-content-type-value="content type string" missing-content-type-value="content type string">
- <type from|when="content type string" to="content type string" />
- </content-type-map>
- <content type="content type string" validate-as="json|xml|soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore|prevent|detect" allow-additional-properties="true|false" />
-</validate-content>
-```
-
-### Examples
-
-#### JSON schema validation
-
-In the following example, API Management interprets requests with an empty content type header or requests with a content type header `application/hal+json` as requests with the content type `application/json`. Then, API Management performs the validation in the detection mode against a schema defined for the `application/json` content type in the API definition. Messages with payloads larger than 100 KB are blocked. Requests containing additional properties are blocked, even if the schema's `additionalProperties` field is configured to allow additional properties.
-
-```xml
-<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
- <content-type-map missing-content-type-value="application/json">
- <type from="application/hal+json" to="application/json" />
- </content-type-map>
- <content type="application/json" validate-as="json" action="detect" allow-additional-properties="false" />
-</validate-content>
-```
-
-#### SOAP schema validation
-
-In the following example, API Management interprets any request as a request with the content type `application/soap+xml` (the content type that's used by SOAP 1.2 APIs), regardless of the incoming content type. The request could arrive with an empty content type header, content type header of `text/xml` (used by SOAP 1.1 APIs), or another content type header. Then, API Management extracts the XML payload from the SOAP envelope and performs the validation in prevention mode against the schema named "myschema". Messages with payloads larger than 100 KB are blocked.
-
-```xml
-<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
- <content-type-map any-content-type-value="application/soap+xml" />
- <content type="application/soap+xml" validate-as="soap" schema-id="myschema" action="prevent" />
-</validate-content>
-```
-
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| `validate-content` | Root element. | Yes |
-| `content-type-map` | Add this element to map the content type of the incoming request or response to another content type that is used to trigger validation. | No |
-| `content` | Add one or more of these elements to validate the content type in the request or response, or the mapped content type, and perform the specified action. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
-| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. | No | N/A |
-| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. | No | N/A |
-| content-type-map \ type | Add one or more of these elements to map an incoming content type to a content type used for validation of the body of a request or response. Use `from` to specify a known incoming content type, or use `when` with a policy expression to specify any incoming content type that matches a condition. Overrides the mapping in `any-content-type-value` and `missing-content-type-value`, if specified. | No | N/A |
-| content \ type | Content type to execute body validation for, checked against the content type header or the value mapped in `content-type-map`, if specified. If empty, it applies to every content type specified in the API schema.<br/><br/>To validate SOAP requests and responses (`validate-as` attribute set to "soap"), set `type` to `application/soap+xml` for SOAP 1.2 APIs or `text/xml` for SOAP 1.1 APIs. | No | N/A |
-| validate-as | Validation engine to use for validation of the body of a request or response with a matching `type`. Supported values: "json", "xml", "soap".<br/><br/>When "soap" is specified, the XML from the request or response is extracted from the SOAP envelope and validated against an XML schema. | Yes | N/A |
-| schema-id | Name of an existing schema that was [added](#schemas-for-content-validation) to the API Management instance for content validation. If not specified, the default schema from the API definition is used. | No | N/A |
-| schema-ref| For a JSON schema specified in `schema-id`, optional reference to a valid local reference path in the JSON document. Example: `#/components/schemas/address`. The attribute should return a JSON object that API Management handles as a valid JSON schema.<br/><br/> For an XML schema, `schema-ref` isn't supported, and any top-level schema element can be used as the root of the XML request or response payload. The validation checks that all elements starting from the XML request or response payload root adhere to the provided XML schema. | No | N/A |
-| action | [Action](#actions) to perform for requests or responses whose body doesn't match the specified content type. | Yes | N/A |
-| allow-additional-properties | Boolean. For a JSON schema, specifies whether to implement a runtime override of the `additionalProperties` value configured in the schema: <br> - `true`: allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to not allow additional properties. <br> - `false`: do not allow additional properties in the request or response body, even if the JSON schema's `additionalProperties` field is configured to allow additional properties.<br/><br/>If the attribute isn't specified, the policy validates additional properties according to configuration of the `additionalProperties` field in the schema. | No | N/A |
-
-### Schemas for content validation
-
-By default, validation of request or response content uses JSON or XML schemas from the API definition. These schemas can be specified manually or generated automatically when importing an API from an OpenAPI or WSDL specification into API Management.
-
-Using the `validate-content` policy, you may optionally validate against one or more JSON or XML schemas that you've added to your API Management instance and that aren't part of the API definition. A schema that you add to API Management can be reused across many APIs.
-
-To add a schema to your API Management instance using the Azure portal:
-
-1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
-1. In the **APIs** section of the left-hand menu, select **Schemas** > **+ Add**.
-1. In the **Create schema** window, do the following:
- 1. Enter a **Name** (Id) for the schema.
- 1. In **Schema type**, select **JSON** or **XML**.
- 1. Enter a **Description**.
- 1. In **Create method**, do one of the following:
- * Select **Create new** and enter or paste the schema.
- * Select **Import from file** or **Import from URL** and enter a schema location.
- > [!NOTE]
- > To import a schema from URL, the schema needs to be accessible over the internet from the browser.
- 1. Select **Save**.
--
- :::image type="content" source="media/validation-policies/add-schema.png" alt-text="Create schema":::
-
-API Management adds the schema resource at the relative URI `/schemas/<schemaId>`, and the schema appears in the list on the **Schemas** page. Select a schema to view its properties or to edit in a schema editor.
-
-> [!NOTE]
-> A schema may cross-reference another schema that is added to the API Management instance. For example, include an XML schema added to API Management by using an element similar to:<br/><br/>`<xs:include schemaLocation="/schemas/myschema" />`
--
-> [!TIP]
-> Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import).
-
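As an illustrative sketch (the schema name `address-schema` and the `schema-ref` path are hypothetical), a `validate-content` policy that validates JSON request bodies against a schema added to the instance, rather than against the API definition, could look like the following:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <!-- schema-id names a schema added under APIs > Schemas; schema-ref points to a local path inside that JSON schema (both values are hypothetical) -->
    <content type="application/json" validate-as="json" schema-id="address-schema" schema-ref="#/components/schemas/address" action="prevent" />
</validate-content>
```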
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound, outbound, on-error
-
-- **Policy scopes:** all scopes
-
-## Validate parameters
-
-The `validate-parameters` policy validates the header, query, or path parameters in requests against the API schema.
-
-> [!IMPORTANT]
-> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-parameters` policy might not work. You may need to [reimport your API](/rest/api/apimanagement/current-ga/apis/create-or-update) using management API version `2021-01-01-preview` or later.
---
-### Policy statement
-
-```xml
-<validate-parameters specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect" errors-variable-name="variable name">
- <headers specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect">
- <parameter name="parameter name" action="ignore|prevent|detect" />
- </headers>
- <query specified-parameter-action="ignore|prevent|detect" unspecified-parameter-action="ignore|prevent|detect">
- <parameter name="parameter name" action="ignore|prevent|detect" />
- </query>
- <path specified-parameter-action="ignore|prevent|detect">
- <parameter name="parameter name" action="ignore|prevent|detect" />
- </path>
-</validate-parameters>
-```
-
-### Example
-
-In this example, all query and path parameters are validated in the prevention mode and headers in the detection mode. Validation is overridden for several header parameters:
-
-```xml
-<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation">
- <headers specified-parameter-action="detect" unspecified-parameter-action="detect">
- <parameter name="Authorization" action="prevent" />
- <parameter name="User-Agent" action="ignore" />
- <parameter name="Host" action="ignore" />
- <parameter name="Referrer" action="ignore" />
- </headers>
-</validate-parameters>
-```
-
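A sketch that also overrides query and path validation (the parameter name is hypothetical) might look like the following:

```xml
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="detect" errors-variable-name="requestParametersValidation">
    <query specified-parameter-action="prevent" unspecified-parameter-action="ignore">
        <!-- Hypothetical query parameter that should never block or log validation errors -->
        <parameter name="api-version" action="ignore" />
    </query>
    <path specified-parameter-action="prevent" />
</validate-parameters>
```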
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| `validate-parameters` | Root element. Specifies default validation actions for all parameters in requests. | Yes |
-| `headers` | Add this element to override default validation actions for header parameters in requests. | No |
-| `query` | Add this element to override default validation actions for query parameters in requests. | No |
-| `path` | Add this element to override default validation actions for URL path parameters in requests. | No |
-| `parameter` | Add one or more elements for named parameters to override higher-level configuration of the validation actions. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `specified-parameter-action` | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| `unspecified-parameter-action` | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `name` | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A |
-| `action` | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isnΓÇÖt specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration.| Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** inbound
-
-- **Policy scopes:** all scopes
-
-## Validate headers
-
-The `validate-headers` policy validates the response headers against the API schema.
-
-> [!IMPORTANT]
-> If you imported an API using a management API version prior to `2021-01-01-preview`, the `validate-headers` policy might not work. You may need to reimport your API using management API version `2021-01-01-preview` or later.
---
-### Policy statement
-
-```xml
-<validate-headers specified-header-action="ignore|prevent|detect" unspecified-header-action="ignore|prevent|detect" errors-variable-name="variable name">
- <header name="header name" action="ignore|prevent|detect" />
-</validate-headers>
-```
-
-### Example
-
-```xml
-<validate-headers specified-header-action="ignore" unspecified-header-action="prevent" errors-variable-name="responseHeadersValidation" />
-```
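A sketch that overrides the action for individual response headers (the header names are illustrative only) could look like the following:

```xml
<validate-headers specified-header-action="detect" unspecified-header-action="detect" errors-variable-name="responseHeadersValidation">
    <!-- Escalate to prevent for this header, overriding the detect defaults above -->
    <header name="Content-Type" action="prevent" />
    <!-- Skip validation for this header entirely -->
    <header name="Server" action="ignore" />
</validate-headers>
```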
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| `validate-headers` | Root element. Specifies default validation actions for all headers in responses. | Yes |
-| `header` | Add one or more elements for named headers to override the default validation actions for headers in responses. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `specified-header-action` | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
-| `unspecified-header-action` | [Action](#actions) to perform for response headers that aren't specified in the API schema. | Yes | N/A |
-| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `name` | Name of the header to override validation action for. This value is case insensitive. | Yes | N/A |
-| `action` | [Action](#actions) to perform for the header with the matching name. If the header is specified in the API schema, this value overrides the value of `specified-header-action` in the `validate-headers` element. Otherwise, it overrides the value of `unspecified-header-action` in the `validate-headers` element. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** outbound, on-error
-
-- **Policy scopes:** all scopes
-
-## Validate status code
-
-The `validate-status-code` policy validates the HTTP status codes in responses against the API schema. This policy may be used to prevent leakage of backend errors, which can contain stack traces.
--
-### Policy statement
-
-```xml
-<validate-status-code unspecified-status-code-action="ignore|prevent|detect" errors-variable-name="variable name">
- <status-code code="HTTP status code number" action="ignore|prevent|detect" />
-</validate-status-code>
-```
-
-### Example
-
-```xml
-<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation" />
-```
-
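A sketch with a per-status-code override (the specific code is an example only) could look like the following:

```xml
<validate-status-code unspecified-status-code-action="detect" errors-variable-name="responseStatusCodeValidation">
    <!-- Block 500 responses even though other status codes not in the API schema are only detected -->
    <status-code code="500" action="prevent" />
</validate-status-code>
```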
-### Elements
-
-| Name | Description | Required |
-| | | -- |
-| `validate-status-code` | Root element. | Yes |
-| `status-code` | Add one or more elements for HTTP status codes to override the default validation action for status codes in responses. | No |
-
-### Attributes
-
-| Name | Description | Required | Default |
-| -- | - | -- | - |
-| `unspecified-status-code-action` | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. | Yes | N/A |
-| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `code` | HTTP status code to override validation action for. | Yes | N/A |
-| `action` | [Action](#actions) to perform for the matching status code when it isn't specified in the API schema. If the status code is specified in the API schema, this override doesn't take effect. | Yes | N/A |
-
-### Usage
-
-This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
-
-- **Policy sections:** outbound, on-error
-
-- **Policy scopes:** all scopes
-
-
-## Validation errors
-
-API Management generates validation errors in the following format:
-
-```
-{
- "Name": string,
- "Type": string,
- "ValidationRule": string,
- "Details": string,
- "Action": string
-}
-
-```
-
-The following table lists all possible errors of the validation policies.
-
-* **Details**: Can be used to investigate errors. Not meant to be shared publicly.
-* **Public response**: Error returned to the client. Does not leak implementation details.
-
-When a validation policy specifies the `prevent` action and produces an error, the response from API Management includes an HTTP status code: 400 when the policy is applied in the inbound section, and 502 when the policy is applied in the outbound section.
--
-| **Name** | **Type** | **Validation rule** | **Details** | **Public response** | **Action** |
-|-|-||||-|
-| **validate-content** | | | | | |
-| |RequestBody | SizeLimit | Request's body is {size} bytes long and it exceeds the configured limit of {maxSize} bytes. | Request's body is {size} bytes long and it exceeds the limit of {maxSize} bytes. | detect / prevent |
-||ResponseBody | SizeLimit | Response's body is {size} bytes long and it exceeds the configured limit of {maxSize} bytes. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {messageContentType} | RequestBody | Unspecified | Unspecified content type {messageContentType} is not allowed. | Unspecified content type {messageContentType} is not allowed. | detect / prevent |
-| {messageContentType} | ResponseBody | Unspecified | Unspecified content type {messageContentType} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| | ApiSchema | | API's schema does not exist or it could not be resolved. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| | ApiSchema | | API's schema does not specify definitions. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {messageContentType} | RequestBody / ResponseBody | MissingDefinition | API's schema does not contain definition {definitionName}, which is associated with the content type {messageContentType}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {messageContentType} | RequestBody | IncorrectMessage | Body of the request does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | Body of the request does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | detect / prevent |
-| {messageContentType} | ResponseBody | IncorrectMessage | Body of the response does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| | RequestBody | ValidationException | Body of the request cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| | ResponseBody | ValidationException | Body of the response cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| **validate-parameters / validate-headers** | | | | | |
-| {paramName} / {headerName} | QueryParameter / PathParameter / RequestHeader | Unspecified | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | detect / prevent |
-| {headerName} | ResponseHeader | Unspecified | Unspecified header {headerName} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| |ApiSchema | | API's schema doesn't exist or it couldn't be resolved. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| | ApiSchema | | API schema does not specify definitions. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {paramName} | QueryParameter / PathParameter / RequestHeader / ResponseHeader | MissingDefinition | API's schema does not contain definition {definitionName}, which is associated with the {query parameter / path parameter / header} {paramName}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Request cannot contain multiple values for the {query parameter / path parameter / header} {paramName}. | Request cannot contain multiple values for the {query parameter / path parameter / header} {paramName}. | detect / prevent |
-| {headerName} | ResponseHeader | IncorrectMessage | Response cannot contain multiple values for the header {headerName}. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Value of the {query parameter / path parameter / header} {paramName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The value of the {query parameter / path parameter / header} {paramName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | detect / prevent |
-| {headerName} | ResponseHeader | IncorrectMessage | Value of the header {headerName} does not conform to the definition.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {paramName} | QueryParameter / PathParameter / RequestHeader | IncorrectMessage | Value of the {query parameter / path parameter / header} {paramName} cannot be parsed according to the definition. <br/><br/>{ex.Message} | Value of the {query parameter / path parameter / header} {paramName} couldn't be parsed according to the definition. <br/><br/>{ex.Message} | detect / prevent |
-| {headerName} | ResponseHeader | IncorrectMessage | Value of the header {headerName} couldn't be parsed according to the definition. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {paramName} | QueryParameter / PathParameter / RequestHeader | ValidationError | {Query parameter / Path parameter / Header} {paramName} cannot be validated.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| {headerName} | ResponseHeader | ValidationError | Header {headerName} cannot be validated.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| **validate-status-code** | | | | | |
-| {status-code} | StatusCode | Unspecified | Response status code {status-code} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
--
-The following table lists all the possible Reason values of a validation error along with possible Message values:
-
-| **Reason** | **Message** |
-|||
-| Bad request | {Details} for context variable, {Public response} for client|
-| Response not allowed | {Details} for context variable, {Public response} for client |
------
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal | | * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](rate-limit-policy.md) policies between machines (optional) | External & Internal |
| * / 6390 | Inbound | TCP | AzureLoadBalancer / VirtualNetwork | **Azure Infrastructure Load Balancer** | External & Internal | ### [stv1](#tab/stv1)
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal | | * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
-| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines (optional) | External & Internal |
+| * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](rate-limit-policy.md) policies between machines (optional) | External & Internal |
| * / * | Inbound | TCP | AzureLoadBalancer / VirtualNetwork | **Azure Infrastructure Load Balancer** (required for Premium SKU, optional for other SKUs) | External & Internal |
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
+
+ Title: Azure API Management policy reference - wait | Microsoft Docs
+description: Reference for the wait policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/08/2022+++
+# Wait
+
+The `wait` policy executes its immediate child policies in parallel, and waits for either all or one of its immediate child policies to complete before it completes. The `wait` policy can have as its immediate child policies one or more of the following: [`send-request`](send-request-policy.md), [`cache-lookup-value`](cache-lookup-value-policy.md), and [`choose`](choose-policy.md) policies.
+++
+## Policy statement
+
+```xml
+<wait for="all | any">
+ <!--Wait policy can contain send-request, cache-lookup-value,
+ and choose policies as child elements -->
+</wait>
+
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated. | No | `all` |
++
+## Elements
+
+May contain as child elements only `send-request`, `cache-lookup-value`, and `choose` policies.
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+In the following example, there are two `choose` policies as immediate child policies of the `wait` policy. The two `choose` policies execute in parallel; each attempts to retrieve a cached value and, on a cache miss, calls a backend service to provide the value. Because the `for` attribute is set to `all`, the `wait` policy doesn't complete until all of its immediate child policies complete. The context variables (`execute-branch-one`, `value-one`, `execute-branch-two`, and `value-two`) are declared outside the scope of this example policy.
+
+```xml
+<wait for="all">
+ <choose>
+ <when condition="@((bool)context.Variables[&quot;execute-branch-one&quot;])">
+ <cache-lookup-value key="key-one" variable-name="value-one" />
+ <choose>
+ <when condition="@(!context.Variables.ContainsKey(&quot;value-one&quot;))">
+ <send-request mode="new" response-variable-name="value-one">
+ <set-url>https://backend-one</set-url>
+ <set-method>GET</set-method>
+ </send-request>
+ </when>
+ </choose>
+ </when>
+ </choose>
+ <choose>
+ <when condition="@((bool)context.Variables[&quot;execute-branch-two&quot;])">
+ <cache-lookup-value key="key-two" variable-name="value-two" />
+ <choose>
+ <when condition="@(!context.Variables.ContainsKey(&quot;value-two&quot;))">
+ <send-request mode="new" response-variable-name="value-two">
+ <set-url>https://backend-two</set-url>
+ <set-method>GET</set-method>
+ </send-request>
+ </when>
+ </choose>
+ </when>
+ </choose>
+</wait>
+```
+
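As a complementary, hypothetical sketch (the backend URLs, timeout, and variable names are placeholders), setting `for` to `any` lets the first branch that completes end the wait, for example when racing two equivalent backends:

```xml
<wait for="any">
    <!-- Whichever request completes first ends the wait; execution of the other branch is terminated -->
    <send-request mode="new" response-variable-name="response-primary" timeout="10" ignore-error="true">
        <set-url>https://backend-one.example.com/data</set-url>
        <set-method>GET</set-method>
    </send-request>
    <send-request mode="new" response-variable-name="response-secondary" timeout="10" ignore-error="true">
        <set-url>https://backend-two.example.com/data</set-url>
        <set-method>GET</set-method>
    </send-request>
</wait>
```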
+## Related policies
+
+* [API Management advanced policies](api-management-advanced-policies.md)
+
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
With API Management's WebSocket API solution, API publishers can quickly add a WebSocket API in API Management via the Azure portal, Azure CLI, Azure PowerShell, and other Azure tools.
-You can secure WebSocket APIs by applying existing access control policies, like [JWT validation](./api-management-access-restriction-policies.md#ValidateJWT). You can also test WebSocket APIs using the API test consoles in both Azure portal and developer portal. Building on existing observability capabilities, API Management provides metrics and logs for monitoring and troubleshooting WebSocket APIs.
+You can secure WebSocket APIs by applying existing access control policies, like [JWT validation](validate-jwt-policy.md). You can also test WebSocket APIs using the API test consoles in both Azure portal and developer portal. Building on existing observability capabilities, API Management provides metrics and logs for monitoring and troubleshooting WebSocket APIs.
In this article, you will: > [!div class="checklist"]
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md). * 200 active connections limit per unit. * WebSocket APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
-* Currently, the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
+* Currently, the [set-header](set-header-policy.md) policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
* During the TLS handshake with a WebSocket backend, API Management validates that the server certificate is trusted and that its subject name matches the hostname. With HTTP APIs, API Management validates that the certificate is trusted but doesn't validate that hostname and subject match. ### Unsupported policies
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
+
+ Title: Azure API Management policy reference - xml-to-json | Microsoft Docs
+description: Reference for the xml-to-json policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 12/02/2022+++
+# Convert XML to JSON
+The `xml-to-json` policy converts a request or response body from XML to JSON. This policy can be used to modernize APIs based on XML-only backend web services.
++
+## Policy statement
+
+```xml
+<xml-to-json kind="javascript-friendly | direct" apply="always | content-type-xml" consider-accept-header="true | false"/>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | | -- | - |
+|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.|Yes|N/A|
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always convert.<br />- `content-type-xml` - convert only if the response Content-Type header indicates the presence of XML.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in the request Accept header.<br />- `false` - always apply conversion.|No|`true`|
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Example
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ </inbound>
+ <outbound>
+ <base />
+ <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
+ </outbound>
+</policies>
+```
+
+## Related policies
+
+* [API Management transformation policies](api-management-transformation-policies.md)
+
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
+
+ Title: Azure API Management policy reference - xsl-transform | Microsoft Docs
+description: Reference for the xsl-transform policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 08/26/2022+++
+# Transform XML using an XSLT
+
+The `xsl-transform` policy applies an XSL transformation to XML in the request or response body.
++
+## Policy statement
+
+```xml
+<xsl-transform>
+ <parameter parameter-name="...">...</parameter>
+ <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:.../>
+ <xsl:.../>
+ </xsl:stylesheet>
+ </xsl-transform>
+```
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+|parameter|Used to define variables used in the transform|No|
+|xsl:stylesheet|Root stylesheet element. All elements and attributes defined within follow the standard [XSLT specification](https://www.w3.org/TR/xslt)|Yes|
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+## Examples
+
+### Transform request body
+
+```xml
+<inbound>
+ <base />
+ <xsl-transform>
+ <parameter name="User-Agent">@(context.Request.Headers.GetValueOrDefault("User-Agent","non-specified"))</parameter>
+ <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output method="xml" indent="yes" />
+ <xsl:param name="User-Agent" />
+ <xsl:template match="* | @* | node()">
+ <xsl:copy>
+ <xsl:if test="self::* and not(parent::*)">
+ <xsl:attribute name="User-Agent">
+ <xsl:value-of select="$User-Agent" />
+ </xsl:attribute>
+ </xsl:if>
+ <xsl:apply-templates select="* | @* | node()" />
+ </xsl:copy>
+ </xsl:template>
+ </xsl:stylesheet>
+ </xsl-transform>
+</inbound>
+```
+
+### Transform response body
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ </inbound>
+ <outbound>
+ <base />
+ <xsl-transform>
+ <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output omit-xml-declaration="yes" method="xml" indent="yes" />
+ <!-- Copy all nodes directly-->
+ <xsl:template match="node()| @*|*">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()|*" />
+ </xsl:copy>
+ </xsl:template>
+ </xsl:stylesheet>
+ </xsl-transform>
+ </outbound>
+</policies>
+```
+
+## Related policies
+
+- [API Management transformation policies](api-management-transformation-policies.md)
+
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-asp-github-actions.md
Title: "Tutorial: Use GitHub Actions to deploy to App Service and connect to a d
description: Deploy a database-backed ASP.NET core app to Azure with GitHub Actions ms.devlang: csharp Previously updated : 09/13/2021 Last updated : 01/09/2023
az group create --name {resource-group-name} --location {resource-group-location
## Generate deployment credentials
-You'll need to authenticate with a service principal for the resource deployment script to work. You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name "{service-principal-name}" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}
-```
-
-In the example, replace the placeholders with your subscription ID, resource group name, and service principal name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later. For help, go to [configure deployment credentials](https://github.com/Azure/login#configure-deployment-credentials).
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
## Configure the GitHub secret for authentication ## Add GitHub secrets for your build 1. Create [two new secrets](https://docs.github.com/en/actions/reference/encrypted-secrets#creating-encrypted-secrets-for-a-repository) in your GitHub repository for `SQLADMIN_PASS` and `SQLADMIN_LOGIN`. Make sure you choose a complex password, otherwise the create step for the SQL database server will fail. You won't be able to access this password again so save it separately. 2. Create an `AZURE_SUBSCRIPTION_ID` secret for your Azure subscription ID. If you do not know your subscription ID, use this command in the Azure Shell to find it. Copy the value in the `SubscriptionId` column.
- ```azurecli
+ ```azurecli
az account list -o table ```
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
Title: "Tutorial: Use GitHub Actions to deploy to an App Service custom containe
description: Learn how to deploy an ASP.NET core app to Azure and to Azure SQL Database with GitHub Actions ms.devlang: csharp Previously updated : 04/22/2021 Last updated : 01/09/2023
Open the Azure Cloud Shell at https://shell.azure.com. You can alternately use t
## Generate deployment credentials
-You'll need to authenticate with a service principal for the resource deployment script to work. You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name "{service-principal-name}" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
-```
-
-In the example, replace the placeholders with your subscription ID, resource group name, and service principal name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later. For help, go to [configure deployment credentials](https://github.com/Azure/login#configure-deployment-credentials).
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
## Configure the GitHub secret for authentication ## Add a SQL Server secret
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
The Azure App Service Environment v2 is an Azure App Service feature that provid
* Docker containers * Functions
+> [!NOTE]
+> Linux web apps and Docker containers are not supported in Azure Government and Azure China regions.
+ App Service environments (ASEs) are appropriate for application workloads that require: * Very high scale.
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
The normal app access ports inbound are as follows:
|Web Deploy service|8172| > [!NOTE]
-> For FTP access, even if you want to disallow standard FTP on port 21, you still need to allow traffic from the LoadBalancer to the App Service Environment subnet range, as this is used for internal health ping traffic for the ftp service specifically.
+> For FTP access, even if you want to disallow standard FTP on port 21, you still need to allow traffic from the LoadBalancer to the App Service Environment subnet range on port 21, as this is used for internal health ping traffic for the ftp service specifically.
## Network routing
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
app-service Samples Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-bicep.md
To learn about the Bicep syntax and properties for App Services resources, see [
| [App with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-managed-mysql)| Deploys an App Service app on Windows with Azure Database for MySQL. | | [App with a database in Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database)| Deploys an App Service app and a database in Azure SQL Database at the Basic service level. | | [App connected to a backend webapp](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection)| Deploys two web apps (frontend and backend) securely connected together with VNet injection and Private Endpoint. |
+| [App connected to a backend webapp with staging slots](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-secure-ntier)| Deploys two web apps (frontend and backend) with staging slots securely connected together with VNet injection and Private Endpoint. |
| [App with a database, managed identity, and monitoring](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-managed-identity-sql-db)| Deploys an App Service App with a database, managed identity, and monitoring. |
+| [Two apps in separate regions with Azure Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-multi-region-front-door) | Deploys two identical web apps in separate regions with Azure Front Door to direct traffic. |
|**App Service Environment**| **Description** | | [Create an App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-create) | Creates an App Service environment v2 in your virtual network. |
-| | |
app-service Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-resource-manager-templates.md
To learn about the JSON syntax and properties for App Services resources, see [M
| [App with a Blob storage connection](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-blob-connection)| Deploys an App Service app with an Azure Blob storage connection string. You can then use Blob storage from the app. | | [App with an Azure Cache for Redis](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-redis-cache)| Deploys an App Service app with an Azure Cache for Redis. | | [App connected to a backend webapp](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection)| Deploys two web apps (frontend and backend) securely connected together with VNet injection and Private Endpoint. |
+| [App connected to a backend webapp with staging slots](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-secure-ntier)| Deploys two web apps (frontend and backend) with staging slots securely connected together with VNet injection and Private Endpoint. |
+| [Two apps in separate regions with Azure Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-multi-region-front-door) | Deploys two identical web apps in separate regions with Azure Front Door to direct traffic. |
|**App Service Environment**| **Description** | | [Create an App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-create) | Creates an App Service environment v2 in your virtual network. | | [Create an App Service environment v2 with an ILB address](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-ilb-create) | Creates an App Service environment v2 in your virtual network with a private internal load balancer address. |
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
To configure a global custom error page, see [Azure PowerShell configuration](./
## TLS policy
-You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a backend server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *default*, *predefined*, or *custom* TLS policy.
+You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a backend server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *predefined* or *custom* TLS policy.
-You configure TLS policy to control TLS protocol versions. You can configure an application gateway to use a minimum protocol version for TLS handshakes from TLS1.0, TLS1.1, and TLS1.2. By default, SSL 2.0 and 3.0 are disabled and aren't configurable. For more information, see [Application Gateway TLS policy overview](./application-gateway-ssl-policy-overview.md).
+You configure TLS policy to control TLS protocol versions. You can configure an application gateway to use a minimum protocol version for TLS handshakes from TLS1.0, TLS1.1, TLS1.2, and TLS1.3. By default, SSL 2.0 and 3.0 are disabled and aren't configurable. For more information, see [Application Gateway TLS policy overview](./application-gateway-ssl-policy-overview.md).
After you create a listener, you associate it with a request-routing rule. That rule determines how requests that are received on the listener are routed to the back end.
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
HTTP 502 errors can have several root causes, for example:
For information about scenarios where 502 errors occur, and how to troubleshoot them, see [Troubleshoot Bad Gateway errors](application-gateway-troubleshooting-502.md).
-#### 504 ΓÇô Request timeout
+#### 504 ΓÇô Gateway timeout
-HTTP 504 errors are presented if a request is sent to application gateways using v2 sku, and the backend response time exceeds the time-out value associated to the listener's rule. This value is defined in the HTTP setting.
+HTTP 504 errors are presented if a request is sent to application gateways using the v2 SKU and the backend response time exceeds the time-out value configured in the Backend Setting.
## Next steps
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 11/29/2022 Last updated : 01/04/2023 monikerRange: 'form-recog-2.1.0' recommendations: false
azure-cognitive-service-layout: container_name: azure-cognitive-service-layout image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
- user: root
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
Previously updated : 10/10/2022 Last updated : 01/09/2023 monikerRange: 'form-recog-2.1.0' recommendations: false
recommendations: false
# Deploy the Sample Labeling tool
+**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
+ >[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0. > * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
Follow these steps to create a new resource using the Azure portal:
### Continuous deployment
-After you have created your web app, you can enable the continuous deployment option:
+After you've created your web app, you can enable the continuous deployment option:
* From the left pane, choose **Container settings**. * In the main window, navigate to Continuous deployment and toggle between the **On** and **Off** buttons to set your preference:
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Previously updated : 10/10/2022 Last updated : 01/09/2023 monikerRange: 'form-recog-2.1.0' recommendations: false
recommendations: false
<!-- markdownlint-disable MD034 --> # Train a custom model using the Sample Labeling tool
+**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
+ >[!TIP] > > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
In this article, you'll use the Form Recognizer REST API with the Sample Labelin
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. * A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
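If you prefer the command line, the prerequisites above can also be satisfied with the Azure CLI. The commands below are only a sketch; the resource, resource group, storage account, and container names are hypothetical placeholders.

```bash
# Create a Form Recognizer resource on the free (F0) tier.
az cognitiveservices account create \
  --name my-form-recognizer \
  --resource-group my-resource-group \
  --kind FormRecognizer \
  --sku F0 \
  --location westus2 \
  --yes

# Upload the extracted sample_data files to the root of a blob container.
az storage blob upload-batch \
  --account-name mystorageaccount \
  --destination training-data \
  --source ./sample_data
```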
You'll use the Docker engine to run the Sample Labeling tool. Follow these steps
| Container | Minimum | Recommended| |:--|:--|:--|
- |Sample Labeling tool|2 core, 4-GB memory|4 core, 8-GB memory|
+ |Sample Labeling tool|`2` core, 4-GB memory|`4` core, 8-GB memory|
Install Docker on your machine by following the appropriate instructions for your operating system:
You'll use the Docker engine to run the Sample Labeling tool. Follow these steps
docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 eula=accept ```
- This command will make the Sample Labeling tool available through a web browser. Go to `http://localhost:3000`.
+ This command will make the sample-labeling tool available through a web browser. Go to `http://localhost:3000`.
> [!NOTE] > You can also label documents and train models using the Form Recognizer REST API. To train and Analyze with the REST API, see [Train with labels using the REST API and Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
Next, you'll create tags (labels) and apply them to the text elements that you w
1. Press Enter to save the tag. 1. In the main editor, select words from the highlighted text elements or a region you drew in. 1. Select the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
+1. Follow these steps to label at least five of your forms.
> [!Tip] > Keep the following tips in mind when you're labeling your forms: >
Next, you'll create tags (labels) and apply them to the text elements that you w
:::image type="content" source="media/label-tool/main-editor-2-1.png" alt-text="Main editor window of Sample Labeling tool.":::
-Follow the steps above to label at least five of your forms.
- ### Specify tag value types You can set the expected data type for each tag. Open the context menu to the right of a tag and select a type from the menu. This feature allows the detection algorithm to make assumptions that will improve the text-detection accuracy. It also ensures that the detected values will be returned in a standardized format in the final JSON output. Value type information is saved in the **fields.json** file in the same path as your label files.
Choose the Train icon on the left pane to open the Training page. Then select th
:::image type="content" source="media/label-tool/train-screen.png" alt-text="Training view.":::
-After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the steps above. The documents you've already labeled will remain in the project index.
+After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the labeling steps. The documents you've already labeled will remain in the project index.
> [!TIP] > You can also run the training process with a REST API call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
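For reference, a training call against the v2.1 REST API looks roughly like the sketch below. The endpoint, key, and SAS URL are placeholders; see the linked sample for the complete flow.

```bash
# Start training a custom model with labeled data (v2.1 REST API).
# <your-resource>, <your-key>, and the SAS URL are placeholders.
curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/custom/models" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "source": "<SAS-URL-of-your-training-container>",
        "useLabelFile": true
      }'
# The Location response header points to the new model; poll it for training status.
```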
After training finishes, examine the **Average Accuracy** value. If it's low, yo
With Model Compose, you can compose up to 100 models into a single model ID. When you call Analyze with the composed `modelID`, Form Recognizer will first classify the form you submitted, choose the best matching model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
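Composition is also available through the v2.1 REST API if you want to script it. The request below is only a sketch; the endpoint, key, and model IDs are placeholders.

```bash
# Compose previously trained models into a single composed model (v2.1 REST API).
# Endpoint, key, and model IDs below are placeholders.
curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/custom/models/compose" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "modelIds": ["<model-id-1>", "<model-id-2>"],
        "modelName": "my-composed-model"
      }'
```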
-To compose models in the Sample Labeling tool, select the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
-Choose the **Compose button**. In the pop-up, name your new composed model and select **Compose**. When the operation completes, your newly composed model should appear in the list.
+* To compose models in the Sample Labeling tool, select the Model Compose (merging arrow) icon from the navigation bar.
+* Select the models you wish to compose together. Models with the arrows icon are already composed models.
+* Choose the **Compose button**. In the pop-up, name your new composed model and select **Compose**.
+* When the operation completes, your newly composed model should appear in the list.
:::image type="content" source="media/label-tool/model-compose.png" alt-text="Model compose UX view."::: ## Analyze a form
-Select the Analyze (light bulb) icon on the left to test your model. Select source 'Local file'. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
+Select the Analyze icon from the navigation bar to test your model. Select source 'Local file'. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
:::image type="content" source="media/analyze.png" alt-text="Screenshot: analyze-a-custom-form window":::
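If you'd rather call the service directly, analysis with a custom or composed model is a two-step REST operation: submit the form, then poll for the result. The commands below are a sketch with placeholder endpoint, key, model ID, and document URL.

```bash
# Submit a form for analysis with a custom (or composed) model (v2.1 REST API).
# Endpoint, key, model ID, and document URL are placeholders.
curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/custom/models/<model-id>/analyze" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{ "source": "<URL-of-the-form-to-analyze>" }'

# The Operation-Location response header contains the results URL; poll it until the status is "succeeded".
curl "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/custom/models/<model-id>/analyzeResults/<result-id>" \
  -H "Ocp-Apim-Subscription-Key: <your-key>"
```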
Go to your project settings page (slider icon) and take note of the security tok
### Restore project credentials
-When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
+When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
### Resume a project
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
- Previously updated : 10/20/2022+ Last updated : 01/06/2023
Prebuilt Receipt and Business Cards support all English receipts and business ca
|Language| Locale code | |:--|:-:|
-|English (Austrialia)|`en-au`|
+|English (Australia)|`en-au`|
|English (Canada)|`en-ca`| |English (United Kingdom)|`en-gb`| |English (India)|`en-in`|
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Use the links in the table to learn more about each model and browse the API ref
| Model| Description | Development options | |-|--|-|
-|[**Layout analysis**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout analysis**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Sample Labeling Tool**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true#build-a-custom-model)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>| |[**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md#try-it-prebuilt-model)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>| |[**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Last updated 10/10/2022 - monikerRange: 'form-recog-2.1.0' recommendations: false
recommendations: false
<!-- markdownlint-disable MD029 --> # Get started with the Form Recognizer Sample Labeling tool
+**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0. > * *See* our [**REST API**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
Use the tags editor pane to create a new tag you'd like to identify:
1. In the main editor, select the total value from the highlighted text elements.
-1. Select the Total tag to apply to the value, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
+1. Select the Total tag to apply to the value, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane. Follow these steps to label all five forms in the sample dataset.
> [!Tip] > Keep the following tips in mind when you're labeling your forms:
Use the tags editor pane to create a new tag you'd like to identify:
> * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key. >
-1. Continue to follow the steps above to label all five forms in the sample dataset.
- :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Label the samples."::: #### Train a custom model
Choose the Train icon on the left pane to open the Training page. Then select th
#### Analyze a custom form
-1. Select the **Analyze** (light bulb) icon on the left to test your model.
+1. Select the **Analyze** icon from the navigation bar to test your model.
1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Title: About the Form Recognizer SDK?
+ Title: Form Recognizer SDKs
-description: The Form Recognizer software development kit (SDK) exposes Form Recognizer models, features and capabilities, making it easier to develop document-processing applications.
+description: The Form Recognizer software development kits (SDKs) expose Form Recognizer models, features, and capabilities through the C#, Java, JavaScript, and Python programming languages.
- Previously updated : 10/27/2022+ Last updated : 01/06/2023 recommendations: false
recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# What is the Form Recognizer SDK?
+# Form Recognizer SDKs
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf
> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) > [!div class="nextstepaction"]
-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
Title: "How to use table tags to train your custom template model - Form Recognizer"
+ Title: "Train your custom template model with the sample-labeling tool and table tags"
description: Learn how to effectively use supervised table tag labeling.
Previously updated : 10/10/2022 Last updated : 01/09/2023 #Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way. monikerRange: 'form-recog-2.1.0' recommendations: false
-# Use table tags to train your custom template model
+# Train models with the sample-labeling tool
+
+**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0. > * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with version v3.0.
attestation Custom Tcb Baseline Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md
# Custom TCB baseline enforcement for SGX attestation - Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](/azure/security/fundamentals/trusted-hardware-identity-management) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
-The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](/solutions/confidential-compute/) (ACC) fleet today.
+The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](/azure/confidential-computing/) (ACC) fleet today.
## Why use custom TCB baseline enforcement feature?
Minimum PSW Windows version: "2.7.101.2"
## How to configure an attestation policy with custom TCB baseline using Azure portal experience
+### New users
+
+1. Create an attestation provider using the Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+
+2. Go to the overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+
+3. Click **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier, and click **Cancel**.
+
+4. Click **Configure**, set the **x-ms-sgx-tcbidentifier** claim value in the policy to the desired value, and click **Save**.
+
+### Existing shared provider users
+
+Shared provider users need to migrate to custom providers to be able to perform attestation against a custom TCB baseline.
+
+1. Create an attestation provider using the Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+
+2. Go to the overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+
+3. Click **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier, and click **Cancel**.
+
+4. Click **Configure**, set the **x-ms-sgx-tcbidentifier** claim value in the policy to the desired value, and click **Save**.
+
+5. Deploy the code changes needed to send attestation requests to the custom attestation provider.
+
+### Existing custom provider users
+
+1. Go to the overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+
+2. Click **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier, and click **Cancel**.
+
+3. Click **Configure**, and use the following **sample** for configuring an attestation policy with a custom TCB baseline.
+
+```
+version = 1.1;
+configurationrules
+{
+=> issueproperty (
+type = "x-ms-sgx-tcbidentifier", value = "11ΓÇ¥
+);
+};
+
+authorizationrules
+{
+=> permit();
+};
+issuancerules
+{
+c:[type=="x-ms-sgx-is-debuggable"] => issue(type="is-debuggable", value=c.value);
+c:[type=="x-ms-sgx-mrsigner"] => issue(type="sgx-mrsigner", value=c.value);
+c:[type=="x-ms-sgx-mrenclave"] => issue(type="sgx-mrenclave", value=c.value);
+c:[type=="x-ms-sgx-product-id"] => issue(type="product-id", value=c.value);
+c:[type=="x-ms-sgx-svn"] => issue(type="svn", value=c.value);
+c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
+};
+```
## Key considerations: - It is always recommended to install the latest PSW version supported by Intel and configure attestation policy with the latest TCB identifier available in Azure
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md
Issuance rules section isn't mandatory. This section can be used by the users to
## Default policy for an SGX enclave ```
-version= 1.0;
-authorizationrules {
+version= 1.1;
+configurationrules{
+ => issueproperty(type="x-ms-sgx-tcbidentifier", value="azuredefault");
+};
+authorizationrules{
=> permit(); };
-issuancerules {
+issuancerules{
c:[type=="x-ms-sgx-is-debuggable"] => issue(type="is-debuggable", value=c.value); c:[type=="x-ms-sgx-mrsigner"] => issue(type="sgx-mrsigner", value=c.value); c:[type=="x-ms-sgx-mrenclave"] => issue(type="sgx-mrenclave", value=c.value);
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md
Title: Get started with Azure Automation State Configuration
description: This article tells how to do the most common tasks in Azure Automation State Configuration. Previously updated : 04/15/2019 Last updated : 01/03/2022
account in the **Nodes** tab of the State configuration (DSC) page.
1. On the Automation account page, click **State configuration (DSC)** under **Configuration Management**. 1. On the State configuration (DSC) page, click the **Nodes** tab. +
+### DSC nodes status values
+
+A DSC node can have any of the following six status values:
+
+- **Failed** - This status is displayed when an error occurs while applying one or more configurations on a node.
+- **Not compliant** - This status is displayed when drift occurs on a node; review the drift closely to determine whether it's systematic.
+- **Unresponsive** - This status is displayed when a node hasn't checked in for more than 24 hours.
+- **Pending** - This status is displayed when a node has a new configuration to apply and the pull server is awaiting node check-in.
+- **In progress** - This status is displayed when a node is applying a configuration and the pull server is awaiting status.
+- **Compliant** - This status is displayed when a node has a valid configuration and no drift is currently detected.
+
+>[!NOTE]
+>- **RefreshFrequencyMins** - Defines how frequently the node contacts the agent service and can be set when onboarding the node to DSC. The maximum value is 10080 minutes.
+>- A node is marked **Unresponsive** if it doesn't contact the agent service for 1440 minutes (1 day). We recommend that you use a **RefreshFrequencyMins** value of less than 1440 minutes; otherwise, the node can show a false **Unresponsive** state.
+ ## View reports for managed nodes Each time State Configuration performs a consistency check on a managed node, the
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md
Title: Configure Azure Automation Start/Stop VMs during off-hours
description: This article tells how to configure the Start/Stop VMs during off-hours feature to support different use cases or scenarios. Previously updated : 11/29/2022 Last updated : 01/04/2023
# Configure Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
-The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 will retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until it retires in CY23. Details about the retirement will be announced soon.
This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to:
The feature allows you to add VMs to be targeted or excluded.
There are two ways to ensure that a VM is included when the feature runs:
-* Each of the parent [runbooks](automation-solution-vm-management.md#runbooks) of the feature has a `VMList` parameter. You can pass a comma-separated list of VM names (without spaces) to this parameter when scheduling the appropriate parent runbook for your situation, and these VMs will be included when the feature runs.
+* Each of the parent runbooks of the feature has a `VMList` parameter. You can pass a comma-separated list of VM names (without spaces) to this parameter when scheduling the appropriate parent runbook for your situation, and these VMs will be included when the feature runs.
* To select multiple VMs, set `External_Start_ResourceGroupNames` and `External_Stop_ResourceGroupNames` with the resource group names that contain the VMs you want to start or stop. You can also set the variables to a value of `*` to have the feature run against all resource groups in the subscription.
Configuring the feature to just stop VMs at a certain time is supported. In this
1. Select **OK** to save your changes. +
+## Create alerts
+
+Start/Stop VMs during off-hours doesn't include a predefined set of Automation job alerts. Review [Forward job data to Azure Monitor Logs](automation-manage-send-joblogs-log-analytics.md#azure-monitor-log-records) to learn about log data forwarded from the Automation account related to the runbook job results and how to create job failed alerts to support your DevOps or operational processes and procedures.
+ ## Next steps * To monitor the feature during operation, see [Query logs from Start/Stop VMs during off-hours](automation-solution-vm-management-logs.md).
automation Automation Solution Vm Management Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-enable.md
- Title: Enable Azure Automation Start/Stop VMs during off-hours
-description: This article tells how to enable the Start/Stop VMs during off-hours feature for your Azure VMs.
-- Previously updated : 11/29/2022----
-# Enable Start/Stop VMs during off-hours
-
-> [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
-The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
-
-Perform the steps in this topic in sequence to enable the Start/Stop VMs during off-hours feature for VMs using a new or existing Automation account and linked Log Analytics workspace. After completing the setup process, configure the variables to customize the feature.
-
-## Enable and configure
-
-1. Sign in to the Azure [portal](https://portal.azure.com).
-2. Search for and select **Automation Accounts**.
-3. On the **Automation Accounts** page, select your Automation account from the list.
-4. From the Automation account, select **Start/Stop VM** under **Related Resources**. From here, you can click **Learn more about and enable the solution**. If you already have the feature deployed, you can click **Manage the solution** and find it in the list.
-
- ![Enable from automation account](./media/automation-solution-vm-management/enable-from-automation-account.png)
-
- > [!NOTE]
- > You can also create the resource from anywhere in the Azure portal, by clicking **Create a resource**. In the Marketplace page, type a keyword such as **Start** or **Start/Stop**. As you begin typing, the list filters based on your input. Alternatively, you can type in one or more keywords from the full name of the feature and then press **Enter**. Select **Start/Stop VMs during off-hours** from the search results.
-
-5. On the Start/Stop VMs during off-hours page for the selected deployment, review the summary information and then click **Create**.
-
- ![Azure portal](media/automation-solution-vm-management/azure-portal-01.png)
-
- With the resource created, the Add Solution page appears. You're prompted to configure the feature before you can import it into your Automation account.
-
- ![VM management Add Solution page](media/automation-solution-vm-management/azure-portal-add-solution-01.png)
-
-6. On the **Add Solution** page, select **Workspace**. Select an existing Log Analytics workspace from the list. If there isn't an Automation account in the same supported region as the workspace, you can create a new Automation account in the next step.
-
- > [!NOTE]
- > When enabling features, only certain regions are supported for linking a Log Analytics workspace and an Automation account. For a list of the supported mapping pairs, see [Region mapping for Automation account and Log Analytics workspace](how-to/region-mappings.md).
-
-7. On the **Add Solution page** if there isn't an Automation account available in the supported region as the workspace, select **Automation account**. You can create a new Automation account to associate with it by selecting **Create an Automation account**, and on the **Add Automation account** page, provide the the name of the Automation account in the **Name** field.
-
- All other options are automatically populated, based on the Log Analytics workspace selected. You can't modify these options. An Azure Run As account is the default authentication method for the runbooks included with the feature.
-
- After you click **OK**, the configuration options are validated and the Automation account is created. You can track its progress under **Notifications** from the menu.
-
-8. On the Add Solution page, select **Configure parameters**. The **Parameters** page appears.
-
- ![Parameters page for solution](media/automation-solution-vm-management/azure-portal-add-solution-02.png)
-
-9. Specify a value for the **Target ResourceGroup Names** field. The field defines group names that contain VMs for the feature to manage. You can enter more than one name and separate the names using commas (values are not case-sensitive). Using a wildcard is supported if you want to target VMs in all resource groups in the subscription. The values are stored in the `External_Start_ResourceGroupNames` and `External_Stop_ResourceGroupNames` variables.
-
- > [!IMPORTANT]
- > The default value for **Target ResourceGroup Names** is a **&ast;**. This setting targets all VMs in a subscription. If you don't want the feature to target all the VMs in your subscription, you must provide a list of resource group names before selecting a schedule.
-
-10. Specify a value for the **VM Exclude List (string)** field. This value is the name of one or more virtual machines from the target resource group. You can enter more than one name and separate the names using commas (values are not case-sensitive). Using a wildcard is supported. This value is stored in the `External_ExcludeVMNames` variable.
-
-11. Use the **Schedule** field to select a schedule for VM management by the feature. Select a start date and time for your schedule to create a recurring daily schedule starting at the chosen time. Selecting a different region is not available. To configure the schedule to your specific time zone after configuring the feature, see [Modify the startup and shutdown schedules](automation-solution-vm-management-config.md#modify-the-startup-and-shutdown-schedules).
-
-12. To receive email notifications from an [action group](../azure-monitor/alerts/action-groups.md), accept the default value of **Yes** in the **Email notifications** field, and provide a valid email address. If you select **No** but decide at a later date that you want to receive email notifications, you can update the action group that is created with valid email addresses separated by commas. The following alert rules are created in the subscription:
-
- - `AutoStop_VM_Child`
- - `Scheduled_StartStop_Parent`
- - `Sequenced_StartStop_Parent`
-
-13. After you have configured the initial settings required for the feature, click **OK** to close the **Parameters** page.
-
-14. Click **Create**. After all settings are validated, the feature deploys to your subscription. This process can take several seconds to finish, and you can track its progress under **Notifications** from the menu.
-
- > [!NOTE]
- > If you have an Azure Cloud Solution Provider (Azure CSP) subscription, after deployment is complete, in your Automation account, go to **Variables** under **Shared Resources** and set the [External_EnableClassicVMs](automation-solution-vm-management.md#variables) variable to **False**. This stops the solution from looking for Classic VM resources.
-
-## Create alerts
-
-Start/Stop VMs during off-hours doesn't include a predefined set of Automation job alerts. Review [Forward job data to Azure Monitor Logs](automation-manage-send-joblogs-log-analytics.md#azure-monitor-log-records) to learn about log data forwarded from the Automation account related to the runbook job results and how to create job failed alerts to support your DevOps or operational processes and procedures.
-
-## Next steps
-
-* To set up the feature, see [Configure Stop/Start VMs during off-hours](automation-solution-vm-management-config.md).
-* To resolve feature errors, see [Troubleshoot Start/Stop VMs during off-hours issues](troubleshoot/start-stop-vm.md).
automation Automation Solution Vm Management Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md
Title: Remove Azure Automation Start/Stop VMs during off-hours overview
description: This article describes how to remove the Start/Stop VMs during off-hours feature and unlink an Automation account from the Log Analytics workspace. Previously updated : 11/29/2022 Last updated : 01/04/2023
# Remove Start/Stop VMs during off-hours from Automation account > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
-The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 will retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until it retires in CY23. Details about the retirement will be announced soon.
After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
-* Delete the resource group containing the Automation account and linked Azure Monitor Log Analytics workspace, each dedicated to support this feature.
-* Unlink the Log Analytics workspace from the Automation account and delete the Automation account dedicated for this feature.
-* Delete the feature from an Automation account and linked workspace that are supporting other management and monitoring objectives.
-
-Deleting this feature only removes the associated runbooks, it doesn't delete the schedules or variables that were created during deployment or any custom-defined ones created after.
> [!NOTE]
-> Before proceeding, verify there aren't any [Resource Manager locks](../azure-resource-manager/management/lock-resources.md) applied at the subscription, resource group, or resource which prevents accidental deletion or modification of critical resources. When you deploy the Start/Stop VMs during off-hours solution, it sets the lock level to **CanNotDelete** against several dependent resources in the Automation account (specifically its runbooks and variables). Any locks need to be removed before you can delete the Automation account.
+> Before proceeding, verify there aren't any [Resource Manager locks](../azure-resource-manager/management/lock-resources.md) applied at the subscription, resource group, or resource which prevents accidental deletion or modification of critical resources. When you deploy the Start/Stop VMs during off-hours solution, it sets the lock level to **Cannot Delete** against several dependent resources in the Automation account (specifically its runbooks and variables). Any locks need to be removed before you can delete the Automation account.
## Delete the dedicated resource group
To unlink from your Automation account, perform the following steps.
3. On the **Unlink workspace** page, select **Unlink workspace** and respond to prompts.
- ![Unlink workspace page](media/automation-solution-vm-management-remove/automation-unlink-workspace-blade.png)
+ ![Screenshot showing how to unlink a workspace page.](media/automation-solution-vm-management-remove/automation-unlink-workspace-blade.png)
While it attempts to unlink the Log Analytics workspace, you can track the progress under **Notifications** from the menu.
To delete Start/Stop VMs during off-hours from your Automation account, perform
5. On the **VMManagementSolution[Workspace]** page, select **Delete** from the menu.
- ![Delete VM management feature](media/automation-solution-vm-management/vm-management-solution-delete.png)
+ ![Screenshot showing the delete VM management feature.](media/automation-solution-vm-management/vm-management-solution-delete.png)
6. In the Delete Solution window, confirm that you want to delete the feature. 7. While the information is verified and the feature is deleted, you can track the progress under **Notifications**, chosen from the menu. You're returned to the Solutions page after the removal process.
-8. If you don't want to keep the [resources](automation-solution-vm-management.md#components) created by the feature or by you afterwards (such as, variables, schedules, etc.), you have to manually delete them from the account.
+8. If you don't want to keep the resources created by the feature or by you afterwards (such as variables, schedules, and so on), you have to manually delete them from the account.
++ ## Next steps
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
Title: Azure Automation Start/Stop VMs during off-hours overview
description: This article describes the Start/Stop VMs during off-hours feature, which starts or stops VMs on a schedule and proactively monitor them from Azure Monitor Logs. Previously updated : 11/29/2022 Last updated : 01/04/2023
# Start/Stop VMs during off-hours overview > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
-The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 will retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until it retires in CY23. Details about the retirement will be announced soon.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
The following are limitations with the current feature:
- It manages VMs in any region, but can only be used in the same subscription as your Azure Automation account. - It is available in Azure and Azure Government for any region that supports a Log Analytics workspace, an Azure Automation account, and alerts. Azure Government regions currently don't support email functionality.
-## Prerequisites
--- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./automation-security-overview.md#run-as-accounts). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently.--- An [Azure Monitor Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) that stores the runbook job logs and job stream results in a workspace to query and analyze. The Automation account and Log Analytics workspace need to be in the same subscription and supported region. The workspace needs to already exist, you cannot create a new workspace during deployment of this feature.-
-We recommend that you use a separate Automation account for working with VMs enabled for the Start/Stop VMs during off-hours feature. Azure module versions are frequently upgraded, and their parameters might change. The feature isn't upgraded on the same cadence and it might not work with newer versions of the cmdlets that it uses. Before importing the updated modules into your production Automation account(s), we recommend you import them into a test Automation account to verify there aren't any compatibility issues.
- ## Permissions You must have certain permissions to enable VMs for the Start/Stop VMs during off-hours feature. The permissions are different depending on whether the feature uses a pre-created Automation account and Log Analytics workspace or creates a new account and workspace.
To enable VMs for the Start/Stop VMs during off-hours feature using an existing
| Microsoft.Resources/subscriptions/resourceGroups/read | Resource Group | | Microsoft.Resources/deployments/* | Resource Group |
-### Permissions for new Automation account and new Log Analytics workspace
-
-You can enable VMs for the Start/Stop VMs during off-hours feature using a new Automation account and Log Analytics workspace. In this case, you need the permissions defined in the previous section and the permissions defined in this section. You also require the following roles:
--- Membership in the [Azure AD](../active-directory/roles/permissions-reference.md) Application Developer role. For more information on configuring Run As Accounts, see [Permissions to configure Run As accounts](automation-security-overview.md#permissions).-- Contributor on the subscription or the following permissions.-
-| Permission |Scope|
-| | |
-| Microsoft.Authorization/Operations/read | Subscription|
-| Microsoft.Authorization/permissions/read |Subscription|
-| Microsoft.Authorization/roleAssignments/read | Subscription |
-| Microsoft.Authorization/roleAssignments/write | Subscription |
-| Microsoft.Authorization/roleAssignments/delete | Subscription |
-| Microsoft.Automation/automationAccounts/connections/read | Resource Group |
-| Microsoft.Automation/automationAccounts/certificates/read | Resource Group |
-| Microsoft.Automation/automationAccounts/write | Resource Group |
-| Microsoft.OperationalInsights/workspaces/write | Resource Group |
-
-## Components
+## Components for version 1
The Start/Stop VMs during off-hours feature include preconfigured runbooks, schedules, and integration with Azure Monitor Logs. You can use these elements to tailor the startup and shutdown of your VMs to suit your business needs.
-### Runbooks
+### Runbooks for version 1
The following table lists the runbooks that the feature deploys to your Automation account. Do NOT make changes to the runbook code. Instead, write your own runbook for new functionality.
All parent runbooks include the `WhatIf` parameter. When set to True, the parame
|ScheduledStartStop_Parent | Action: Start or Stop <br>VMList <br> WhatIf: True or False | Starts or stops all VMs in the subscription. Edit the variables `External_Start_ResourceGroupNames` and `External_Stop_ResourceGroupNames` to only execute on these targeted resource groups. You can also exclude specific VMs by updating the `External_ExcludeVMNames` variable.| |SequencedStartStop_Parent | Action: Start or Stop <br> WhatIf: True or False<br>VMList| Creates tags named **sequencestart** and **sequencestop** on each VM for which you want to sequence start/stop activity. These tag names are case-sensitive. The value of the tag should be a list of positive integers, for example, `1,2,3`, that corresponds to the order in which you want to start or stop. <br>**Note**: VMs must be within resource groups defined in `External_Start_ResourceGroupNames`, `External_Stop_ResourceGroupNames`, and `External_ExcludeVMNames` variables. They must have the appropriate tags for actions to take effect.|
-### Variables
+
+### Variables for version 1
The following table lists the variables created in your Automation account. Only modify variables prefixed with `External`. Modifying variables prefixed with `Internal` causes undesirable effects. > [!NOTE] > Limitations on VM name and resource group are largely a result of variable size. See [Variable assets in Azure Automation](./shared-resources/variables.md).
-|Variable | Description|
-|||
-|External_AutoStop_Condition | The conditional operator required for configuring the condition before triggering an alert. Acceptable values are `GreaterThan`, `GreaterThanOrEqual`, `LessThan`, and `LessThanOrEqual`.|
-|External_AutoStop_Description | The alert to stop the VM if the CPU percentage exceeds the threshold.|
-|External_AutoStop_Frequency | The evaluation frequency for rule. This parameter accepts input in timespan format. Possible values are from 5 minutes to 6 hours. |
-|External_AutoStop_MetricName | The name of the performance metric for which the Azure Alert rule is to be configured.|
-|External_AutoStop_Severity | Severity of the metric alert, which can range from 0 to 4. |
-|External_AutoStop_Threshold | The threshold for the Azure Alert rule specified in the variable `External_AutoStop_MetricName`. Percentage values range from 1 to 100.|
-|External_AutoStop_TimeAggregationOperator | The time aggregation operator applied to the selected window size to evaluate the condition. Acceptable values are `Average`, `Minimum`, `Maximum`, `Total`, and `Last`.|
-|External_AutoStop_TimeWindow | The size of the window during which Azure analyzes selected metrics for triggering an alert. This parameter accepts input in timespan format. Possible values are from 5 minutes to 6 hours.|
-|External_EnableClassicVMs| Value specifying if classic VMs are targeted by the feature. The default value is True. Set this variable to False for Azure Cloud Solution Provider (CSP) subscriptions.|
-|External_ExcludeVMNames | Comma-separated list of VM names to exclude, limited to 140 VMs. If you add more than 140 VMs to the list, VMs specified for exclusion might be inadvertently started or stopped.|
-|External_Start_ResourceGroupNames | Comma-separated list of one or more resource groups that are targeted for start actions.|
-|External_Stop_ResourceGroupNames | Comma-separated list of one or more resource groups that are targeted for stop actions.|
-|External_WaitTimeForVMRetrySeconds |The wait time in seconds for the actions to be performed on the VMs for the **SequencedStartStop_Parent** runbook. This variable allows the runbook to wait for child operations for a specified number of seconds before proceeding with the next action. The maximum wait time is 10800, or three hours. The default value is 2100 seconds.|
-|Internal_AutomationAccountName | Specifies the name of the Automation account.|
-|Internal_AutoSnooze_ARM_WebhookURI | The webhook URI called for the AutoStop scenario for VMs.|
-|Internal_AutoSnooze_WebhookUri | The webhook URI called for the AutoStop scenario for classic VMs.|
-|Internal_AzureSubscriptionId | The Azure subscription ID.|
-|Internal_ResourceGroupName | The Automation account resource group name.|
- >[!NOTE] >For the variable `External_WaitTimeForVMRetryInSeconds`, the default value has been updated from 600 to 2100. Across all scenarios, the variables `External_Start_ResourceGroupNames`, `External_Stop_ResourceGroupNames`, and `External_ExcludeVMNames` are necessary for targeting VMs, except for the comma-separated VM lists for the **AutoStop_CreateAlert_Parent**, **SequencedStartStop_Parent**, and **ScheduledStartStop_Parent** runbooks. That is, your VMs must belong to target resource groups for start and stop actions to occur. The logic works similar to Azure Policy, in that you can target the subscription or resource group and have actions inherited by newly created VMs. This approach avoids having to maintain a separate schedule for every VM and manage starts and stops in scale.
-### Schedules
-
-The following table lists each of the default schedules created in your Automation account. You can modify them or create your own custom schedules. By default, all schedules are disabled except for the **Scheduled_StartVM** and **Scheduled_StopVM** schedules.
-
-Don't enable all schedules, because doing so might create overlapping schedule actions. It's best to determine which optimizations you want to do and modify them accordingly. See the example scenarios in the overview section for further explanation.
-
-|Schedule name | Frequency | Description|
-| | | |
-|Schedule_AutoStop_CreateAlert_Parent | Every 8 hours | Runs the **AutoStop_CreateAlert_Parent** runbook every 8 hours, which in turn stops the VM-based values in `External_Start_ResourceGroupNames`, `External_Stop_ResourceGroupNames`, and `External_ExcludeVMNames` variables. Alternatively, you can specify a comma-separated list of VMs by using the `VMList` parameter.|
-|Scheduled_StopVM | User-defined, daily | Runs the **ScheduledStopStart_Parent** runbook with a parameter of `Stop` every day at the specified time. Automatically stops all VMs that meet the rules defined by variable assets. Enable the related schedule **Scheduled-StartVM**.|
-|Scheduled_StartVM | User-defined, daily | Runs the **ScheduledStopStart_Parent** runbook with a parameter value of `Start` every day at the specified time. Automatically starts all VMs that meet the rules defined by variable assets. Enable the related schedule **Scheduled-StopVM**.|
-|Sequenced-StopVM | 1:00 AM (UTC), every Friday | Runs the **Sequenced_StopStop_Parent** runbook with a parameter value of `Stop` every Friday at the specified time. Sequentially (ascending) stops all VMs with a tag of **SequenceStop** defined by the appropriate variables. For more information on tag values and asset variables, see [Runbooks](#runbooks). Enable the related schedule, **Sequenced-StartVM**.|
-|Sequenced-StartVM | 1:00 PM (UTC), every Monday | Runs the **SequencedStopStart_Parent** runbook with a parameter value of `Start` every Monday at the specified time. Sequentially (descending) starts all VMs with a tag of **SequenceStart** defined by the appropriate variables. For more information on tag values and variable assets, see [Runbooks](#runbooks). Enable the related schedule, **Sequenced-StopVM**.|
-
-## Use the feature with classic VMs
-
-If you are using the Start/Stop VMs during off-hours feature for classic VMs, Automation processes all your VMs sequentially per cloud service. VMs are still processed in parallel across different cloud services.
-
-If you have more than 20 VMs per cloud service, here are some recommendations:
-
-* Create multiple schedules with the parent runbook **ScheduledStartStop_Parent** and specifying 20 VMs per schedule.
-* In the schedule properties, use the `VMList` parameter to specify VM names as a comma-separated list (no whitespaces).
-
-Otherwise, if the Automation job for this feature runs more than three hours, it's temporarily unloaded or stopped per the [fair share](automation-runbook-execution.md#fair-share) limit.
-
-Azure CSP subscriptions support only the Azure Resource Manager model. Non-Azure Resource Manager services are not available in the program. When the Start/Stop VMs during off-hours feature runs, you might receive errors since it has cmdlets to manage classic resources. To learn more about CSP, see [Available services in CSP subscriptions](/azure/cloud-solution-provider/overview/azure-csp-available-services). If you use a CSP subscription, you should set the [External_EnableClassicVMs](#variables) variable to False after deployment.
+### Schedules for version 1
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)]
-## View the feature
+## View the feature for version 1
Use one of the following mechanisms to access the enabled feature:
Selecting the feature displays the **Start-Stop-VM[workspace]** page. Here you c
You can perform further analysis of the job records by clicking the donut tile. The dashboard shows job history and predefined log search queries. Switch to the log analytics advanced portal to search based on your search queries.
-## Update the feature
-
-If you've deployed a previous version of Start/Stop VMs during off-hours, delete it from your account before deploying an updated release. Follow the steps to [remove the feature](automation-solution-vm-management-remove.md#delete-the-feature) and then follow the steps to [enable it](automation-solution-vm-management-enable.md).
- ## Next steps To enable the feature on VMs in your environment, see [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
You can delete an empty Hybrid Runbook Worker group from the portal.
## Automatic upgrade of extension
-Hybrid Worker extension supports [Automatic upgrade](/articles/virtual-machines/automatic-extension-upgrade.md) of minor versions by default. We recommend that you enable Automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change windows and can only be updated at specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade`property in ARM, Bicep template, PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable the Automatic upgrade.
+Hybrid Worker extension supports [Automatic upgrade](/azure/virtual-machines/automatic-extension-upgrade) of minor versions by default. We recommend that you enable Automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change window and the extension can only be updated at a specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade` property in the ARM template, Bicep template, or PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable the Automatic upgrade.
```powershell $extensionType = "HybridWorkerForLinux/HybridWorkerForWindows"
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/articles/azure-resource-manager/bicep/overview.md)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview)
```Bicep param automationAccount string
To install and use Hybrid Worker extension using REST API, follow these steps. T
**Manage Hybrid Worker Extension** -- To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest)-- To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw?view=azure-cli-latest)
+- To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg)
+- To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw)
-After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest#az-vm-extension-set).
+After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set).
#### [PowerShell](#tab/ps)
You can use the following PowerShell cmdlets to manage Hybrid Runbook Worker and
| PowerShell cmdlet | Description | | -- | -- |
-|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Gets Hybrid Runbook Worker group|
-|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Removes Hybrid Runbook Worker group|
-|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Updates Hybrid Worker group with Hybrid Worker credentials|
-|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Creates new Hybrid Runbook Worker group|
-|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker?view=azps-9.1.0) | Gets Hybrid Runbook Worker|
-|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker?view=azps-9.1.0) | Moves Hybrid Worker from one group to other|
-|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker?view=azps-9.1.0) | Creates new Hybrid Runbook Worker|
-|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker?view=azps-9.1.0)| Removes Hybrid Runbook Worker|
+|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup) | Gets Hybrid Runbook Worker group|
+|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup) | Removes Hybrid Runbook Worker group|
+|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup) | Updates Hybrid Worker group with Hybrid Worker credentials|
+|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup) | Creates new Hybrid Runbook Worker group|
+|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker) | Gets Hybrid Runbook Worker|
+|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker) | Moves Hybrid Worker from one group to other|
+|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker) | Creates new Hybrid Runbook Worker|
+|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker)| Removes Hybrid Runbook Worker|
After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker.
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md). - To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).-
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 12/29/2022 Last updated : 01/04/2023
# Supported regions for linked Log Analytics workspace > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 will retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until it retires in CY23. Details about the retirement announcement will be shared soon.
In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported to link them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled.
-The mappings discussed here applying only to linking the Log Analytics Workspace to an Automation account. They don't apply to the virtual machines (VMs) that are connected to the workspace that's linked to the Automation Account. VMs aren't limited to the regions supported by a given Log Analytics workspace. They can be in any region. Keep in mind that having the VMs in a different region may affect state, local, and country regulatory requirements, or your company's compliance requirements. Having VMs in a different region could also introduce data bandwidth charges.
+The mappings discussed here apply only to linking the Log Analytics Workspace to an Automation account. They don't apply to the virtual machines (VMs) that are connected to the workspace that's linked to the Automation Account. VMs aren't limited to the regions supported by a given Log Analytics workspace. They can be in any region. Keep in mind that having the VMs in a different region may affect state, local, and country/regional regulatory requirements, or your company's compliance requirements. Having VMs in a different region could also introduce data bandwidth charges.
Before connecting VMs to a workspace in a different region, you should review the requirements and potential costs to confirm and understand the legal and cost implications.
This article provides the supported mappings in order to successfully enable and
For more information, see [Log Analytics workspace and Automation account](../../azure-monitor/insights/solutions.md#log-analytics-workspace-and-automation-account).
-## Supported mappings
+## Supported mappings for version 1
> [!NOTE] > As shown in following table, only one mapping can exist between Log Analytics and Azure Automation.
The following table shows the supported mappings:
<sup>3</sup> In this region, only Update Management is supported, and other features like Change Tracking and Inventory aren't available at this time.
-## Unlink a workspace
-
-If you decide that you no longer want to integrate your Automation account with a Log Analytics workspace, you can unlink your account directly from the Azure portal. Before proceeding, you first need to [remove](move-account.md#remove-features) Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours if you're using them. If you don't remove them, you can't complete the unlinking operation.
-
-With the features removed, you can follow the steps to unlink your Automation account.
-
-> [!NOTE]
-> Some features, including earlier versions of the Azure SQL monitoring solution, might have created Automation assets that need to be removed prior to unlinking the workspace.
-
-1. From the Azure portal, open your Automation account. On the Automation account page, select **Linked workspace** under **Related Resources**.
-
-2. On the Unlink workspace page, select **Unlink workspace**. You receive a prompt verifying if you want to continue.
-
-3. While Azure Automation is unlinking the account from your Log Analytics workspace, you can track the progress under **Notifications** from the menu.
-
-4. If you used Update Management, optionally you might want to remove the following items that are no longer needed:
-
- * Update schedules: Each has a name that matches an update deployment that you created.
- * Hybrid worker groups created for the feature: Each has a name similar to `machine1.contoso.com_9ceb8108-26c9-4051-b6b3-227600d715c8`.
-
-5. If you used Start/Stop VMs during off-hours, optionally you can remove the following items that are no longer needed:
-
- * Start and stop VM runbook schedules
- * Start and stop VM runbooks
- * Variables
-
-Alternatively, you can unlink your workspace from your Automation account within the workspace.
-
-1. In the workspace, select **Automation Account** under **Related Resources**.
-2. On the Automation Account page, select **Unlink account**.
## Next steps * Learn about Update Management in [Update Management overview](../update-management/overview.md). * Learn about Change Tracking and Inventory in [Change Tracking and Inventory overview](../change-tracking/overview.md).
-* Learn about Start/Stop VMs during off-hours in [Start/Stop VMs during off-hours overview](../automation-solution-vm-management.md).
+* Learn about Start/Stop VMs during off-hours in [Start/Stop VMs during off-hours overview](../automation-solution-vm-management.md).
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
+
+ Title: Migrate existing agent-based hybrid workers to extension-based workers in Azure Automation
+description: This article provides information on how to migrate existing agent-based hybrid workers to extension-based workers.
++ Last updated : 12/29/2022+
+#Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
++
+# Migrate existing agent-based hybrid workers to extension-based hybrid workers
+
+This article describes the benefits of the Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers.
+
+There are two Hybrid Runbook Workers installation platforms supported by Azure Automation:
+- **Agent based hybrid runbook worker** (V1) - The Agent-based hybrid runbook worker depends on the [Log Analytics Agent](../azure-monitor/agents/log-analytics-agent.md).
+- **Extension based hybrid runbook worker** (V2) - The Extension-based hybrid runbook worker provides native integration of the hybrid runbook worker role through the Virtual machine (VM) extension framework.
+
+The process of executing runbooks on Hybrid Runbook Workers remains the same for both.
+
+## Benefits of Extension-based User Hybrid Runbook Workers over Agent-based workers
+
+The purpose of the Extension-based approach is to simplify the installation and management of the Hybrid Worker and remove the complexity of working with the Agent-based version. Here are some key benefits:
+
+- **Seamless onboarding** – The Agent-based approach for onboarding a Hybrid Runbook Worker depends on the Log Analytics Agent and is a multi-step, time-consuming, and error-prone process. The Extension-based approach offers more security and is no longer dependent on the Log Analytics Agent.
+
+- **Ease of Manageability** – It offers native integration with Azure Resource Manager (ARM) identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates.
+
+- **Azure Active Directory based authentication** – It uses the VM's system-assigned managed identity provided by Azure Active Directory, which centralizes control and management of identities and resource credentials.
+
+- **Unified experience** – It offers an identical experience for managing Azure and off-Azure Arc-enabled machines.
+
+- **Multiple onboarding channels** – You can choose to onboard and manage Extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API and Azure CLI.
+
+- **Default Automatic upgrade** – It offers Automatic upgrade of minor versions by default, significantly reducing the overhead of staying updated on the latest version. We recommend enabling Automatic upgrades to take advantage of any security or feature updates without the manual overhead. You can also opt out of automatic upgrades at any time, as shown in the sketch after this list. Any major version upgrades are currently not supported and should be managed manually.
+
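For example, a minimal PowerShell sketch of opting out of automatic upgrade on an existing Azure VM (resource names are placeholders, and the `$settings` hashtable mirrors the `AutomationAccountURL` setting used later in this article):

```powershell
# Placeholders: resource group, location, VM name, and the Automation hybrid service URL.
$settings = @{ "AutomationAccountURL" = "<AutomationHybridServiceUrl>" }

# Re-apply the Hybrid Worker extension with automatic upgrade disabled;
# set -EnableAutomaticUpgrade back to $true to re-enable it.
Set-AzVMExtension -ResourceGroupName "<VMResourceGroupName>" -Location "<VMLocation>" -VMName "<VMName>" `
    -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" `
    -ExtensionType "HybridWorkerForWindows" -TypeHandlerVersion "1.1" `
    -Settings $settings -EnableAutomaticUpgrade $false
```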
+>[!NOTE]
+> The Extension-based Hybrid Runbook Worker only supports the User Hybrid Runbook Worker type, and doesn't include the System Hybrid Runbook Worker required for the Update Management feature.
+
+## Prerequisites
+
+### Machine minimum requirements
+
+- Two cores
+- 4 GB of RAM
+- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.
+- The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the installation process through the Azure portal. A sketch of enabling it with PowerShell follows this list.
+
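If the system-assigned managed identity isn't already enabled, a minimal PowerShell sketch for an existing Azure VM might look like the following (resource names are placeholders):

```powershell
# Placeholders: resource group and VM name.
$vm = Get-AzVM -ResourceGroupName "<VMResourceGroupName>" -Name "<VMName>"

# Enable the system-assigned managed identity on the existing VM.
Update-AzVM -ResourceGroupName "<VMResourceGroupName>" -VM $vm -IdentityType SystemAssigned
```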
+### Supported operating systems
+
+| Windows | Linux (x64)|
+|||
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> &#9679; Windows Server 2012, 2012 R2 | &#9679; Debian GNU/Linux 10 and 11 <br> &#9679; Ubuntu 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7 and 8 |
+
+### Other Requirements
+
+| Windows | Linux (x64)|
+|||
+| Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. |
+| .NET Framework 4.6.2 or later. | |
+
+### Package requirements for Linux
+
+| Required package | Description | Minimum version |
+| | | - |
+| Glibc |GNU C Library | 2.5-12 |
+| Openssl | OpenSSL Libraries | 1.0 (TLS 1.1 and TLS 1.2 are supported) |
+| Curl | cURL web client | 7.15.5 |
+| Python-ctypes | Foreign function library for Python | Python 2.x or Python 3.x are required |
+| PAM | Pluggable Authentication Modules | |
+
+| Optional package | Description | Minimum version |
+| | | - |
+| PowerShell Core | To run PowerShell runbooks, PowerShell Core needs to be installed. For instructions, see [Installing PowerShell Core on Linux](/powershell/scripting/install/installing-powershell-core-on-linux) | 6.0.0 |
+
+### Permissions for Hybrid Worker credentials
+
+If the agent-based Hybrid Worker uses custom Hybrid Worker credentials, ensure that the following permissions are assigned to the custom user to prevent jobs from getting suspended on the extension-based Hybrid Worker. A sketch of granting these permissions with PowerShell follows the table.
+
+| **Resource type** | **Folder permissions** |
+| | |
+|Azure VM | C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
+|Arc-enabled Server | C:\ProgramData\AzureConnectedMachineAgent\Tokens (read)</br> C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
+
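As a minimal sketch, the folder permissions above could be granted with `icacls` from an elevated PowerShell session; the account `CONTOSO\hybridworkeruser` is a hypothetical custom Hybrid Worker credential:

```powershell
# Hypothetical custom Hybrid Worker credential; replace with your own account.
$account = "CONTOSO\hybridworkeruser"

# Azure VM and Arc-enabled server: read and execute on the Hybrid Worker extension folder.
icacls "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows" /grant "${account}:(OI)(CI)RX"

# Arc-enabled server only: read on the Tokens folder.
icacls "C:\ProgramData\AzureConnectedMachineAgent\Tokens" /grant "${account}:(OI)(CI)R"
```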
+> [!NOTE]
+> Hybrid Runbook Worker is currently not supported for Virtual Machine Scale Sets (VMSS).
+
+## Migrate an existing Agent based Hybrid Worker to Extension based Hybrid Worker
+
+To utilize the benefits of extension based Hybrid Workers, you must migrate all existing agent based User Hybrid Workers to extension based Workers. A hybrid worker machine can co-exist on both **Agent based (V1)** and **Extension based (V2)** platforms. The extension based installation doesn't affect the installation or management of an agent based Worker.
+
+To install Hybrid worker extension on an existing agent based hybrid worker, follow these steps:
+
+1. Under **Process Automation**, select **Hybrid worker groups**, and then select your existing hybrid worker group to go to the **Hybrid worker group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers** > **+ Add** to go to the **Add machines as hybrid worker** page.
+1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. If you don't see your agent-based Hybrid Worker listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.
+
+ :::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-inline.png" alt-text="Screenshot of adding machines as hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/add-machines-hybrid-worker-expanded.png":::
+
+1. Select **Add** to append the machine to the group.
+
+ The **Platform** column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident of the extension based Hybrid Worker experience and use, you can [remove](#remove-agent-based-hybrid-worker) the agent based Worker.
+
+ :::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-expanded.png":::
+
+For at-scale migration of multiple Agent based Hybrid Workers, you can also use other [channels](#manage-hybrid-worker-extension-using-bicep--arm-templates-rest-api-azure-cli-and-powershell) such as - Bicep, ARM templates, PowerShell cmdlets, REST API, and Azure CLI.
++
+## Manage Hybrid Worker extension using Bicep & ARM templates, REST API, Azure CLI, and PowerShell
+
+#### [Bicep template](#tab/bicep-template)
+
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview).
+
+```Bicep
+param automationAccount string
+param automationAccountLocation string
+param workerGroupName string
+
+@description('Name of the virtual machine.')
+param virtualMachineName string
+
+@description('Username for the Virtual Machine.')
+param adminUsername string
+
+@description('Password for the Virtual Machine.')
+@minLength(12)
+@secure()
+param adminPassword string
+
+@description('Location for the VM.')
+param vmLocation string = 'North Central US'
+
+@description('Size of the virtual machine.')
+param vmSize string = 'Standard_DS1_v2'
+
+@description('The Windows version for the VM. This will pick a fully patched image of this given Windows version.')
+@allowed([
+ '2008-R2-SP1'
+ '2012-Datacenter'
+ '2012-R2-Datacenter'
+ '2016-Nano-Server'
+ '2016-Datacenter-with-Containers'
+ '2016-Datacenter'
+ '2019-Datacenter'
+ '2019-Datacenter-Core'
+ '2019-Datacenter-Core-smalldisk'
+ '2019-Datacenter-Core-with-Containers'
+ '2019-Datacenter-Core-with-Containers-smalldisk'
+ '2019-Datacenter-smalldisk'
+ '2019-Datacenter-with-Containers'
+ '2019-Datacenter-with-Containers-smalldisk'
+])
+param osVersion string = '2019-Datacenter'
+
+@description('DNS name for the public IP')
+param dnsNameForPublicIP string
+
+var nicName_var = 'myVMNict'
+var addressPrefix = '10.0.0.0/16'
+var subnetName = 'Subnet'
+var subnetPrefix = '10.0.0.0/24'
+var subnetRef = resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetworkName_var, subnetName)
+var vmName_var = virtualMachineName
+var virtualNetworkName_var = 'MyVNETt'
+var publicIPAddressName_var = 'myPublicIPt'
+var networkSecurityGroupName_var = 'default-NSGt'
+var UniqueStringBasedOnTimeStamp = uniqueString(resourceGroup().id)
+
+resource publicIPAddressName 'Microsoft.Network/publicIPAddresses@2020-08-01' = {
+ name: publicIPAddressName_var
+ location: vmLocation
+ properties: {
+ publicIPAllocationMethod: 'Dynamic'
+ dnsSettings: {
+ domainNameLabel: dnsNameForPublicIP
+ }
+ }
+}
+
+resource networkSecurityGroupName 'Microsoft.Network/networkSecurityGroups@2020-08-01' = {
+ name: networkSecurityGroupName_var
+ location: vmLocation
+ properties: {
+ securityRules: [
+ {
+ name: 'default-allow-3389'
+ properties: {
+ priority: 1000
+ access: 'Allow'
+ direction: 'Inbound'
+ destinationPortRange: '3389'
+ protocol: 'Tcp'
+ sourceAddressPrefix: '*'
+ sourcePortRange: '*'
+ destinationAddressPrefix: '*'
+ }
+ }
+ ]
+ }
+}
+
+resource virtualNetworkName 'Microsoft.Network/virtualNetworks@2020-08-01' = {
+ name: virtualNetworkName_var
+ location: vmLocation
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ addressPrefix
+ ]
+ }
+ subnets: [
+ {
+ name: subnetName
+ properties: {
+ addressPrefix: subnetPrefix
+ networkSecurityGroup: {
+ id: networkSecurityGroupName.id
+ }
+ }
+ }
+ ]
+ }
+}
+
+resource nicName 'Microsoft.Network/networkInterfaces@2020-08-01' = {
+ name: nicName_var
+ location: vmLocation
+ properties: {
+ ipConfigurations: [
+ {
+ name: 'ipconfig1'
+ properties: {
+ privateIPAllocationMethod: 'Dynamic'
+ publicIPAddress: {
+ id: publicIPAddressName.id
+ }
+ subnet: {
+ id: subnetRef
+ }
+ }
+ }
+ ]
+ }
+ dependsOn: [
+
+ virtualNetworkName
+ ]
+}
+
+resource vmName 'Microsoft.Compute/virtualMachines@2020-12-01' = {
+ name: vmName_var
+ location: vmLocation
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ hardwareProfile: {
+ vmSize: vmSize
+ }
+ osProfile: {
+ computerName: vmName_var
+ adminUsername: adminUsername
+ adminPassword: adminPassword
+ }
+ storageProfile: {
+ imageReference: {
+ publisher: 'MicrosoftWindowsServer'
+ offer: 'WindowsServer'
+ sku: osVersion
+ version: 'latest'
+ }
+ osDisk: {
+ createOption: 'FromImage'
+ }
+ }
+ networkProfile: {
+ networkInterfaces: [
+ {
+ id: nicName.id
+ }
+ ]
+ }
+ }
+}
+
+resource automationAccount_resource 'Microsoft.Automation/automationAccounts@2021-06-22' = {
+ name: automationAccount
+ location: automationAccountLocation
+ properties: {
+ sku: {
+ name: 'Basic'
+ }
+ }
+}
+
+resource automationAccount_workerGroupName 'Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups@2022-02-22' = {
+ parent: automationAccount_resource
+ name: workerGroupName
+ dependsOn: [
+
+ vmName
+ ]
+}
+
+resource automationAccount_workerGroupName_testhw_UniqueStringBasedOnTimeStamp 'Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers@2021-06-22' = {
+ parent: automationAccount_workerGroupName
+ name: guid('testhw', UniqueStringBasedOnTimeStamp)
+ properties: {
+ vmResourceId: resourceId('Microsoft.Compute/virtualMachines', virtualMachineName)
+ }
+ dependsOn: [
+ vmName
+ ]
+}
+
+resource virtualMachineName_HybridWorkerExtension 'Microsoft.Compute/virtualMachines/extensions@2022-03-01' = {
+ name: '${virtualMachineName}/HybridWorkerExtension'
+ location: vmLocation
+ properties: {
+ publisher: 'Microsoft.Azure.Automation.HybridWorker'
+ type: 'HybridWorkerForWindows'
+ typeHandlerVersion: '1.1'
+ autoUpgradeMinorVersion: true
+ enableAutomaticUpgrade: true
+ settings: {
+ AutomationAccountURL: automationAccount_resource.properties.automationHybridServiceUrl
+ }
+ }
+ dependsOn: [
+ vmName
+ ]
+}
+
+output output1 string = automationAccount_resource.properties.automationHybridServiceUrl
+```
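As a sketch, the Bicep file above could be deployed to an existing resource group with PowerShell (assuming the Bicep CLI is available and the file is saved as `main.bicep`; all values are placeholders, and you're prompted for the secure `adminPassword` parameter):

```powershell
# Parameter names match the Bicep template above; values are placeholders.
New-AzResourceGroupDeployment -ResourceGroupName "<ResourceGroupName>" `
    -TemplateFile ./main.bicep `
    -automationAccount "<AutomationAccountName>" `
    -automationAccountLocation "<Region>" `
    -workerGroupName "<HybridWorkerGroupName>" `
    -virtualMachineName "<VMName>" `
    -adminUsername "<AdminUserName>" `
    -dnsNameForPublicIP "<UniqueDnsLabel>"
```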
+
+#### [ARM template](#tab/arm-template)
+
+You can use an Azure Resource Manager (ARM) template to create a new Azure Windows VM and connect it to an existing Automation account and Hybrid Worker Group. To learn more about ARM templates, see [What are ARM templates?](../azure-resource-manager/templates/overview.md)
+
+**Review the template**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "automationAccount": {
+ "type": "string"
+ },
+ "automationAccountLocation": {
+ "type": "string"
+ },
+ "workerGroupName": {
+ "type": "string"
+ },
+ "virtualMachineName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the virtual machine."
+ }
+ },
+ "adminUsername": {
+ "type": "string",
+ "metadata": {
+ "description": "Username for the Virtual Machine."
+ }
+ },
+ "adminPassword": {
+ "type": "securestring",
+ "minLength": 12,
+ "metadata": {
+ "description": "Password for the Virtual Machine."
+ }
+ },
+ "vmLocation": {
+ "type": "string",
+ "defaultValue": "North Central US",
+ "metadata": {
+ "description": "Location for the VM."
+ }
+ },
+ "vmSize": {
+ "type": "string",
+ "defaultValue": "Standard_DS1_v2",
+ "metadata": {
+ "description": "Size of the virtual machine."
+ }
+ },
+ "osVersion": {
+ "type": "string",
+ "defaultValue": "2019-Datacenter",
+ "allowedValues": [
+ "2008-R2-SP1",
+ "2012-Datacenter",
+ "2012-R2-Datacenter",
+ "2016-Nano-Server",
+ "2016-Datacenter-with-Containers",
+ "2016-Datacenter",
+ "2019-Datacenter",
+ "2019-Datacenter-Core",
+ "2019-Datacenter-Core-smalldisk",
+ "2019-Datacenter-Core-with-Containers",
+ "2019-Datacenter-Core-with-Containers-smalldisk",
+ "2019-Datacenter-smalldisk",
+ "2019-Datacenter-with-Containers",
+ "2019-Datacenter-with-Containers-smalldisk"
+ ],
+ "metadata": {
+ "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version."
+ }
+ },
+ "dnsNameForPublicIP": {
+ "type": "string",
+ "metadata": {
+ "description": "DNS name for the public IP"
+ }
+ },
+ "_CurrentDateTimeInTicks": {
+ "type": "string",
+ "defaultValue": "[utcNow('yyyy-MM-dd')]"
+ }
+ },
+ "variables": {
+ "nicName": "myVMNict",
+ "addressPrefix": "10.0.0.0/16",
+ "subnetName": "Subnet",
+ "subnetPrefix": "10.0.0.0/24",
+ "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
+ "vmName": "[parameters('virtualMachineName')]",
+ "virtualNetworkName": "MyVNETt",
+ "publicIPAddressName": "myPublicIPt",
+ "networkSecurityGroupName": "default-NSGt",
+ "UniqueStringBasedOnTimeStamp": "[uniqueString(deployment().name, parameters('_CurrentDateTimeInTicks'))]"
+ },
+ "resources": [
+ {
+ "apiVersion": "2020-08-01",
+ "type": "Microsoft.Network/publicIPAddresses",
+ "name": "[variables('publicIPAddressName')]",
+ "location": "[parameters('vmLocation')]",
+ "properties": {
+ "publicIPAllocationMethod": "Dynamic",
+ "dnsSettings": {
+ "domainNameLabel": "[parameters('dnsNameForPublicIP')]"
+ }
+ }
+ },
+ {
+ "comments": "Default Network Security Group for template",
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2020-08-01",
+ "name": "[variables('networkSecurityGroupName')]",
+ "location": "[parameters('vmLocation')]",
+ "properties": {
+ "securityRules": [
+ {
+ "name": "default-allow-3389",
+ "properties": {
+ "priority": 1000,
+ "access": "Allow",
+ "direction": "Inbound",
+ "destinationPortRange": "3389",
+ "protocol": "Tcp",
+ "sourceAddressPrefix": "*",
+ "sourcePortRange": "*",
+ "destinationAddressPrefix": "*"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "apiVersion": "2020-08-01",
+ "type": "Microsoft.Network/virtualNetworks",
+ "name": "[variables('virtualNetworkName')]",
+ "location": "[parameters('vmLocation')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+ ],
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[variables('addressPrefix')]"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "[variables('subnetName')]",
+ "properties": {
+ "addressPrefix": "[variables('subnetPrefix')]",
+ "networkSecurityGroup": {
+ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
+ }
+ }
+ }
+ ]
+ }
+ },
+ {
+ "apiVersion": "2020-08-01",
+ "type": "Microsoft.Network/networkInterfaces",
+ "name": "[variables('nicName')]",
+ "location": "[parameters('vmLocation')]",
+ "dependsOn": [
+ "[variables('publicIPAddressName')]",
+ "[variables('virtualNetworkName')]"
+ ],
+ "properties": {
+ "ipConfigurations": [
+ {
+ "name": "ipconfig1",
+ "properties": {
+ "privateIPAllocationMethod": "Dynamic",
+ "publicIPAddress": {
+ "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
+ },
+ "subnet": {
+ "id": "[variables('subnetRef')]"
+ }
+ }
+ }
+ ]
+ }
+ },
+ {
+ "apiVersion": "2020-12-01",
+ "type": "Microsoft.Compute/virtualMachines",
+ "name": "[variables('vmName')]",
+ "location": "[parameters('vmLocation')]",
+ "dependsOn": [
+ "[variables('nicName')]"
+ ],
+ "identity": {
+ "type": "SystemAssigned"
+ } ,
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "[parameters('vmSize')]"
+ },
+ "osProfile": {
+ "computerName": "[variables('vmName')]",
+ "adminUsername": "[parameters('adminUsername')]",
+ "adminPassword": "[parameters('adminPassword')]"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "[parameters('osVersion')]",
+ "version": "latest"
+ },
+ "osDisk": {
+ "createOption": "FromImage"
+ }
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
+ }
+ ]
+ }
+ }
+ },
+ {
+ "type": "Microsoft.Automation/automationAccounts",
+ "apiVersion": "2021-06-22",
+ "name": "[parameters('automationAccount')]",
+ "location": "[parameters('automationAccountLocation')]",
+ "properties": {
+ "sku": {
+ "name": "Basic"
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('workerGroupName')]",
+ "type": "hybridRunbookWorkerGroups",
+ "apiVersion": "2022-02-22",
+ "dependsOn": [
+ "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]",
+ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
+ ],
+ "resources" : [
+ {
+ "name": "[guid('testhw', variables('UniqueStringBasedOnTimeStamp'))]",
+ "type": "hybridRunbookWorkers",
+ "apiVersion": "2021-06-22",
+ "dependsOn": [
+ "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]",
+ "[resourceId('Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups', parameters('automationAccount'),parameters('workerGroupName'))]",
+ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
+ ],
+ "properties": {
+ "vmResourceId": "[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "[concat(parameters('virtualMachineName'),'/HybridWorkerExtension')]",
+ "apiVersion": "2022-03-01",
+ "location": "[parameters('vmLocation')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]",
+ "[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.Azure.Automation.HybridWorker",
+ "type": "HybridWorkerForWindows",
+ "typeHandlerVersion": "1.1",
+ "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true,
+ "settings": {
+ "AutomationAccountURL": "[reference(resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))).AutomationHybridServiceUrl]"
+ }
+ }
+ }
+ ],
+ "outputs": {
+ "output1": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))).AutomationHybridServiceUrl]"
+ }
+ }
+}
+```
+
+The following Azure resources are defined in the template:
+
+- [hybridRunbookWorkerGroups/hybridRunbookWorkers](/azure/templates/microsoft.automation/automationaccounts/hybridrunbookworkergroups/hybridrunbookworkers)
+- [Microsoft.Compute/virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions)
+
+**Review parameters**
+
+Review the parameters used in this template.
+
+| Property | Description |
+| | |
+| automationAccount | The name of the existing Automation account. |
+| automationAccountLocation | The region of the existing Automation account. |
+| workerGroupName | The name of the existing Hybrid Worker Group. |
+| virtualMachineName | The name for the VM to be created. The default value is `simple-vm`. |
+| adminUsername | The VM admin user name. |
+| adminPassword | The VM admin password. |
+| vmLocation | The region for the new VM. The default value is `North Central US`. |
+| vmSize | The size for the new VM. The default value is `Standard_DS1_v2`. |
+| osVersion | The OS for the new Windows VM. The default value is `2019-Datacenter`. |
+| dnsNameForPublicIP | The DNS name for the public IP. |
+
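As a sketch, the template above could be deployed to an existing resource group with PowerShell (assuming it's saved as `azuredeploy.json`; all values are placeholders):

```powershell
# The adminPassword template parameter is a securestring, so read it as a SecureString.
$adminPassword = Read-Host -Prompt "VM admin password" -AsSecureString

# Parameter names match the ARM template above; values are placeholders.
New-AzResourceGroupDeployment -ResourceGroupName "<ResourceGroupName>" `
    -TemplateFile ./azuredeploy.json `
    -automationAccount "<AutomationAccountName>" `
    -automationAccountLocation "<Region>" `
    -workerGroupName "<HybridWorkerGroupName>" `
    -virtualMachineName "<VMName>" `
    -adminUsername "<AdminUserName>" `
    -adminPassword $adminPassword `
    -dnsNameForPublicIP "<UniqueDnsLabel>"
```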
+
+#### [REST API](#tab/rest-api)
+
+**Prerequisites**
+
+You need an Azure VM or an Arc-enabled server. You can follow the steps in [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) to create an Arc-enabled machine.
+
+**Install and use Hybrid Worker extension**
+
+To install and use Hybrid Worker extension using REST API, follow these steps. The West Central US region is considered in this example.
+
+1. Create a Hybrid Worker Group by making this API call.
+
+ ```http
+ PUT https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}?api-version=2021-06-22
+
+ ```
+
+ The request body should contain the following information:
+
+ ```http
+ {
+ }
+ ```
+
+ The response of the _PUT_ call confirms whether the Hybrid Worker group was created. To reconfirm, you can make another GET call on the Hybrid Worker group as follows:
+
+ ```http
+ GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}?api-version=2021-06-22
+
+ ```
+
+1. Connect a VM to the Hybrid Worker group created above by making the following API call. Before making the call, generate a new GUID to use as the _hybridRunbookWorkerId_.
+
+ ```http
+ PUT https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}/hybridRunbookWorkers/{hybridRunbookWorkerId}?api-version=2021-06-22
+
+ ```
+
+ The request body should contain the following information:
+
+ ```json
+ {
+ "properties": {"vmResourceId": "{VmResourceId}"}
+ }
+ ```
+
+ The response of the PUT call confirms whether the Hybrid Worker was created. To reconfirm, you can make another GET call on the Hybrid Worker as follows.
+
+ ```http
+ GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}/hybridRunbookWorkers/{hybridRunbookWorkerId}?api-version=2021-06-22
+
+ ```
+
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+
+1. Get the automation account details using this API call.
+
+ ```http
+ GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}?api-version=2021-06-22
+
+ ```
+
+ The API call returns the value for the key `AutomationHybridServiceUrl`. Use this URL in the next step to enable the extension on the VM.
+
+1. Install the Hybrid Worker Extension on Azure VM by using the following API call.
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/HybridWorkerExtension?api-version=2021-11-01
+
+ ```
+
+ The request body should contain the following information:
+
+ ```json
+ {
+ "location": "<VMLocation>",
+ "properties": {
+ "publisher": "Microsoft.Azure.Automation.HybridWorker",
+ "type": "<HybridWorkerForWindows/HybridWorkerForLinux>",
+ "typeHandlerVersion": <version>,
+ "settings": {
+ "AutomationAccountURL" = "<AutomationHybridServiceUrl>"
+ }
+ }
+ }
+
+ ```
+
+ For Arc-enabled VMs, use the following API call to enable the extension:
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HybridCompute/machines/{machineName}/extensions/{extensionName}?api-version=2021-05-20
+
+ ```
+
+ The request body should contain the following information:
+
+ ```json
+ {
+ "location": "<VMLocation>",
+ "properties": {
+ "publisher": "Microsoft.Azure.Automation.HybridWorker",
+ "type": "<HybridWorkerForWindows/HybridWorkerForLinux>",
+ "typeHandlerVersion": <version>,
+ "settings": {
+ "AutomationAccountURL" = "<AutomationHybridServiceUrl>"
+ }
+ }
+ }
+ ```
+ The response of the *PUT* call confirms whether the extension was successfully installed on the targeted VM. You can also go to the VM in the Azure portal and check the status of installed extensions on the **Extensions** tab.
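The REST calls above can also be issued from PowerShell with `Invoke-AzRestMethod`. A minimal sketch for the first step (creating the Hybrid Worker group), with placeholder names:

```powershell
# Placeholders: subscription ID, resource group, Automation account, and worker group name.
$path = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>" +
        "/providers/Microsoft.Automation/automationAccounts/<automationAccountName>" +
        "/hybridRunbookWorkerGroups/<hybridRunbookWorkerGroupName>?api-version=2021-06-22"

# PUT with an empty JSON body creates the Hybrid Worker group; a GET on the same path confirms it.
Invoke-AzRestMethod -Method PUT -Path $path -Payload '{}'
Invoke-AzRestMethod -Method GET -Path $path
```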
+
+#### [Azure CLI](#tab/cli)
+
+**Manage Hybrid Worker Extension**
+
+- To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest)
+- To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw?view=azure-cli-latest)
+
+After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest#az-vm-extension-set).
++
+#### [PowerShell](#tab/ps)
+
+You can use the following PowerShell cmdlets to manage Hybrid Runbook Worker and Hybrid Runbook Worker groups:
+
+| PowerShell cmdlet | Description |
+| -- | -- |
+|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Gets Hybrid Runbook Worker group|
+|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Removes Hybrid Runbook Worker group|
+|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Updates Hybrid Worker group with Hybrid Worker credentials|
+|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Creates new Hybrid Runbook Worker group|
+|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker?view=azps-9.1.0) | Gets Hybrid Runbook Worker|
+|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker?view=azps-9.1.0) | Moves Hybrid Worker from one group to other|
+|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker?view=azps-9.1.0) | Creates new Hybrid Runbook Worker|
+|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker?view=azps-9.1.0)| Removes Hybrid Runbook Worker|
+
+After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker.
+
+**Azure VMs**
+
+```powershell
+Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false
+```
+**Azure Arc-enabled VMs**
+
+```powershell
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade
+```
++
+## Remove agent-based Hybrid Worker
+
+#### [Windows Hybrid Worker](#tab/win-hrw)
+
+1. In the Azure portal, go to your Automation account.
+
+1. Under **Account Settings**, select **Keys** and note the values for **URL** and **Primary Access Key**.
+
+1. Open a PowerShell session in Administrator mode and run the following command with your URL and primary access key values. Use the `Verbose` parameter for a detailed log of the removal process. To remove stale machines from your Hybrid Worker group, use the optional `machineName` parameter.
+
+```powershell-interactive
+Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName>
+```
+> [!NOTE]
+> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+
+#### [Linux Hybrid Worker](#tab/lin-hrw)
+
+You can use the command `ls /var/opt/microsoft/omsagent` on the Hybrid Runbook Worker to get the workspace ID; the agent creates a folder there that is named with the workspace ID.
+
+```bash
+sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessKey>" --groupname="Example" --workspaceid="<workspaceId>"
+```
+
+> [!NOTE]
+> - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br>
+> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+++
+## Next steps
+
+- To learn more about Hybrid Runbook Worker, see [Automation Hybrid Runbook Worker overview](automation-hybrid-runbook-worker.md).
+- To deploy Extension-based Hybrid Worker, see [Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation](extension-based-hybrid-runbook-worker-install.md).
+- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
automation Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/certificates.md
Title: Manage certificates in Azure Automation
description: This article tells how to work with certificates for access by runbooks and DSC configurations. Previously updated : 12/22/2020 Last updated : 01/04/2023
When you create a new certificate, you upload a .cer or .pfx file to Automation.
1. From your Automation account, on the left-hand pane select **Certificates** under **Shared Resource**. 1. On the **Certificates** page, select **Add a certificate**. 1. In the **Name** field, type a name for the certificate.
-1. To browse for a **.cer** or **.pfx** file, under **Upload a certificate file**, choose **Select a file**. If you select a **.pfx** file, specify a password and indicate if it can be exported.
+1. To browse for a **.cer** or **.pfx** file, under **Upload a certificate file**, choose **Select a file**. If you select a **.pfx** file, specify a password and indicate if it can be exported. If you are using the Azure Automation portal to upload certificates, it might fail for partner (CSP) accounts. We recommend that you use [PowerShell cmdlets](#powershell-cmdlets-to-access-certificates) as a workaround to overcome this issue.
1. Select **Create** to save the new certificate asset. ### Create a new certificate with PowerShell
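As a sketch of the PowerShell workaround mentioned above, you might create the certificate asset with the Az module as follows (resource names and the .pfx path are placeholders):

```powershell
# Read the .pfx password as a SecureString.
$password = Read-Host -Prompt "Certificate password" -AsSecureString

# Placeholders: Automation account, resource group, certificate asset name, and .pfx path.
New-AzAutomationCertificate -AutomationAccountName "<AutomationAccountName>" `
    -ResourceGroupName "<ResourceGroupName>" `
    -Name "<CertificateName>" `
    -Path "C:\certs\example.pfx" `
    -Password $password `
    -Exportable
```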
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
# Troubleshoot Start/Stop VMs during off-hours issues > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
This article provides information on troubleshooting and resolving issues that arise when you deploy the Azure Automation Start/Stop VMs during off-hours feature on your VMs.
Review the following fixes for potential resolutions:
To learn more about errors when you register providers, see [Resolve errors for resource provider registration](../../azure-resource-manager/templates/error-register-resource-provider.md). * If you have a lock on your Log Analytics workspace, go to your workspace in the Azure portal and remove any locks on the resource.
-* If these resolutions don't solve your issue, follow the instructions under [Update the feature](../automation-solution-vm-management.md#update-the-feature) to redeploy Start/Stop VMs during off-hours.
## <a name="all-vms-fail-to-startstop"></a>Scenario: All VMs fail to start or stop
Many times errors can be caused by using an old and outdated version of the feat
### Resolution
-To resolve many errors, remove and [update Start/Stop VMs during off-hours](../automation-solution-vm-management.md#update-the-feature). You also can check the [job streams](../automation-runbook-execution.md#job-statuses) to look for any errors.
+You can check the [job streams](../automation-runbook-execution.md#job-statuses) to look for any errors.
## Next steps
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
Title: Azure Automation Update Management Supported Clients
description: This article describes the supported Windows and Linux operating systems with Azure Automation Update Management. Previously updated : 10/12/2021 Last updated : 01/04/2023
The following table lists the supported operating systems for update assessments
All operating systems are assumed to be x64. x86 is not supported for any operating system. > [!NOTE]
-> - Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> - Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
> - Update Management does not support CIS hardened images. # [Windows operating system](#tab/os-win)
All operating systems are assumed to be x64. x86 is not supported for any operat
# [Linux operating system](#tab/os-linux) > [!NOTE]
-> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
|Operating system |Notes | |||
By default, Windows VMs that are deployed from Azure Marketplace are set to rece
- The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-linux-hrw-install.md#prerequisites). Because Update Management uses Automation runbooks to initiate assessment and update of your machines, review the [version of Python required](../automation-linux-hrw-install.md#supported-runbook-types) for your supported Linux distro. > [!NOTE]
-> Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings-for-version-1).
For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, to monitor the machines use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) instead of Azure Monitor for VMs.
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md
In addition to the details that are provided during Update Management deployment, you can search the logs stored in your Log Analytics workspace. To search the logs from your Automation account, select **Update management** and open the Log Analytics workspace associated with your deployment.
-You can also customize the log queries or use them from different clients. See [Log Analytics search API documentation](https://dev.loganalytics.io/).
+You can also customize the log queries or use them from different clients. See [Log Analytics search API documentation](/rest/api/loganalytics/).
## Query update records
Update
## Next steps * For details of Azure Monitor logs, see [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md).
-* For help with alerts, see [Configure alerts](configure-alerts.md).
+* For help with alerts, see [Configure alerts](configure-alerts.md).
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Title: Archive for What's new in Azure Automation
-description: The What's new release notes in the Overview section of this content set contains six months of activity. Thereafter, the items are removed from the main article and put into this article.
+description: The What's new release notes in the Overview section of this content set contain six months of activity. Thereafter, the items are removed from the main article and put into this article.
Last updated 10/27/2021
Automation account and State Configuration availability in Brazil South East. Fo
**Type:** New feature
-Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+Azure Automation region mapping updated to support Update Management feature in South Central US region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-version-1) for updates to the documentation to reflect this change.
## September 2020
The New-OnPremiseHybridWorker runbook has been updated to support Az modules. Fo
**Type:** New feature
-Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings) for updates to the documentation to reflect this change.
+Azure Automation region mapping updated to support Update Management feature in China East 2 region. See [Supported region mapping](how-to/region-mappings.md#supported-mappings-for-version-1) for updates to the documentation to reflect this change.
## May 2020
Azure Service Management (ASM) REST APIs for Azure Automation will be retired an
## Next steps
-If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Start/Stop VMs during off-hours (v1) will deprecate on May 21, 2022. Customers s
**Type:** New feature
-Region mapping has been updated to support Update Management and Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings).
+Region mapping has been updated to support Update Management and Change Tracking in Norway East, UAE North, North Central US, Brazil South, and Korea Central. For more information, see [Supported mappings](./how-to/region-mappings.md#supported-mappings-for-version-1).
### Support for system-assigned Managed Identities
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
-|DataON AZS-6224|1.23.8|v1.12.0_2022-10-11|16.0.537.5223|)
+|DataON AZS-6224|1.23.8|v1.12.0_2022-10-11|16.0.537.5223|
### Dell
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--| |HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)
-|HPE Apollo 4200 Gen10 Plus|1.22.6|1.11.0_2022-09-13|12.3 (Ubuntu 12.3-1)|
+|HPE Apollo 4200 Gen10 Plus (directly connected mode) |1.7.18 <sup>*</sup>|1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
+|HPE Apollo 4200 Gen10 Plus (indirectly connected mode) |1.22.6 <sup>*</sup>|v1.10.0_2022-08-09 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
+
+<sup>*</sup>Azure Kubernetes Service (AKS) on Azure Stack HCI
### Kublr
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023 #
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 11/03/2022 Last updated : 01/06/2023 # What is Azure Arc resource bridge (preview)?
-Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/overview) and VMware.
+Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/) preview).
-The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster that requires no user management. This virtual appliance delivers the following benefits:
+Arc resource bridge is a packaged virtual machine that hosts a *management* Kubernetes cluster and requires no user management. The virtual machine is deployed on the on-premises infrastructure, and an ARM resource of Arc resource bridge is created in Azure. The two resources are then connected, allowing VM self-service and management from Azure. The on-premises resource bridge uses guest management to tag local resources, making them available in Azure.
+
+Arc resource bridge delivers the following benefits:
* Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster. * Fully supported by Microsoft, including updates to core components.
Azure Arc resource bridge (preview) hosts other components such as [custom locat
* The platform layer that includes the custom location and cluster extension. * The solution layer for each service supported by Arc resource bridge (that is, the different type of VMs). Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
To summarize, the Azure resources are projections of the resources running in yo
Through Azure Arc resource bridge (preview), you can accomplish the following for each private cloud infrastructure from Azure:
+### Azure Stack HCI
+
+You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.
+ ### VMware vSphere By registering resource pools, networks, and VM templates, you can represent a subset of your vCenter resources in Azure to enable self-service. Integration with Azure allows you to manage access to your vCenter resources in Azure to maintain a secure environment. You can also perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere:
By registering resource pools, networks, and VM templates, you can represent a s
* Enable guest management * Install extensions
-### Azure Stack HCI
-
-You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.
-
-### System Center Virtual Machine Manager (SCVMM)
+### System Center Virtual Machine Manager (SCVMM)
You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge (preview) in the VMM environment. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and perform various operations on them:
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-cache-for-redis Cache Best Practices Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md
description: Learn about client libraries for Azure Cache for Redis.
Previously updated : 07/07/2022 Last updated : 01/04/2022 -+ # Client libraries
clusterServersConfig:
tcpNoDelay: true ```
+For an article demonstrating how to use Redisson's support for JCache as the store for HTTP session state in IBM Liberty on Azure, see [Use Java EE JCache with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/how-to-deploy-java-liberty-jcache).
+ ## How to use client libraries Besides the reference documentation, you can find tutorials showing how to get started with Azure Cache for Redis using different languages and cache clients.
azure-cache-for-redis Cache Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-java-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Java'
description: In this quickstart, you'll create a new Java app that uses Azure Cache for Redis Previously updated : 03/21/2021 Last updated : 01/04/2022 ms.devlang: java-+
In this quickstart, you learned how to use Azure Cache for Redis from a Java app
- [Development](cache-best-practices-development.md) - [Connection resilience](cache-best-practices-connection.md)
+- [Azure Cache for Redis with Jakarta EE](/azure/developer/java/ee/how-to-deploy-java-liberty-jcache)
+- [Azure Cache for Redis with Spring](/azure/developer/java/spring-framework/configure-spring-boot-initializer-java-app-with-redis-cache)
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-functions Analyze Telemetry Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/analyze-telemetry-data.md
traces
The runtime provides the `customDimensions.LogLevel` and `customDimensions.Category` fields. You can provide additional fields in logs that you write in your function code. For an example in C#, see [Structured logging](functions-dotnet-class-library.md#structured-logging) in the .NET class library developer guide.
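For example, assuming the default field names described above, a query along these lines (a sketch, not part of the article) returns only warning-and-above traces together with their categories:

```kusto
traces
| where tostring(customDimensions.LogLevel) in ("Warning", "Error", "Critical")
| project timestamp, Category = tostring(customDimensions.Category), LogLevel = tostring(customDimensions.LogLevel), message
| order by timestamp desc
```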
+## Query function invocations
+
+Every function invocation is assigned a unique ID. `InvocationId` is included in the custom dimension and can be used to correlate all the logs from a particular function execution.
+
+```kusto
+traces
+| project customDimensions["InvocationId"], message
+```
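
To narrow the results to a single execution, filter on one invocation ID; the GUID below is a placeholder:

```kusto
traces
| where tostring(customDimensions["InvocationId"]) == "00000000-0000-0000-0000-000000000000"
| project timestamp, message
| order by timestamp asc
```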
+
+## Telemetry correlation
+
+Logs from different functions can be correlated using `operation_Id`. Use the following query to return all the logs for a specific logical operation.
+
+```kusto
+traces
+| where operation_Id == '45fa5c4f8097239efe14a2388f8b4e29'
+| project timestamp, customDimensions["InvocationId"], message
+| order by timestamp
+```
+
+## Sampling percentage
+
+Sampling configuration can be used to reduce the volume of telemetry. Use the following query to determine whether sampling is in effect. If `RetainedPercentage` for any type is less than 100, then that type of telemetry is being sampled.
+
+```kusto
+union requests,dependencies,pageViews,browserTimings,exceptions,traces
+| where timestamp > ago(1d)
+| summarize RetainedPercentage = 100/avg(itemCount) by bin(timestamp, 1h), itemType
+```
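
If sampling needs to be adjusted rather than just observed, adaptive sampling for Functions is configured in *host.json*; the following is a minimal sketch with illustrative values:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}
```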
## Query scale controller logs _This feature is in preview._
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Title: Azure Functions error handling and retry guidance
description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors, including information on retry policies. Previously updated : 08/03/2022 Last updated : 01/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
There are two kinds of retries available for your functions: built-in retry beha
| Trigger/binding | Retry source | Configuration | | - | - | -- |
-| Azure Cosmos DB | n/a | Not configurable |
+| Azure Cosmos DB | [Retry policies](#retry-policies) | Function-level |
| Blob Storage | [Binding extension](functions-bindings-storage-blob-trigger.md#poison-blobs) | [host.json](functions-bindings-storage-queue.md#host-json) | | Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription | | Event Hubs | [Retry policies](#retry-policies) | Function-level |
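
For the triggers in this table that use function-level retry policies, the policy is declared on the function itself. The following C# (in-process) sketch shows a fixed-delay policy; the trigger, connection name, and values are illustrative rather than taken from the article:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessEvent
{
    [FunctionName("ProcessEvent")]
    [FixedDelayRetry(maxRetryCount: 5, delayInterval: "00:00:10")]
    public static void Run(
        [EventHubTrigger("events", Connection = "EventHubConnection")] string message,
        ILogger log)
    {
        // If this function throws, the runtime retries it up to five times,
        // waiting 10 seconds between attempts, before giving up.
        log.LogInformation($"Processing: {message}");
    }
}
```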
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md
Title: 'Tutorial: Trigger Azure Functions on blob containers using an event subscription'
-description: In this tutorial, you learn how to use an Event Grid event subscription to create a low-latency, event-driven trigger on an Azure Blob Storage container.
+description: This tutorial shows how to create a low-latency, event-driven trigger on an Azure Blob Storage container using an Event Grid event subscription.
Last updated 3/1/2021
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Tutorial: Trigger Azure Functions on blob containers using an event subscription
-Earlier versions of the Blob Storage trigger for Azure Functions polled the container for updates, which often resulted in delayed execution. By using the latest version of the extension, you can reduce latency by instead triggering on an event subscription to the same blob container. The event subscription uses Event Grid to forward changes in the blob container as events for your function to consume. This article demonstrates how to use Visual Studio Code to locally develop a function that runs based events raised when a blob is added to a container. You'll locally verify the function before deploying your project to Azure.
+If you're using earlier versions of the Blob Storage trigger with Azure Functions, you often get delayed executions because the trigger polls the blob container for updates. You can reduce latency by triggering your function using an event subscription to the same container. The event subscription forwards changes in the container as events that your function consumes by using Event Grid. You can implement this capability in Visual Studio Code with the latest Azure Functions extension.
+
+This article shows how to create a function that runs based on events raised when a blob is added to a container. You'll use Visual Studio Code for local development and to check that the function works locally before deploying your project to Azure.
> [!div class="checklist"] > * Create a general storage v2 account in Azure Storage.
Earlier versions of the Blob Storage trigger for Azure Functions polled the cont
## Prerequisites [!INCLUDE [functions-requirements-visual-studio-code-csharp](../../includes/functions-requirements-visual-studio-code-csharp.md)] ::: zone pivot="programming-language-javascript" [!INCLUDE [functions-requirements-visual-studio-code-node](../../includes/functions-requirements-visual-studio-code-node.md)] [!INCLUDE [functions-requirements-visual-studio-code-powershell](../../includes/functions-requirements-visual-studio-code-powershell.md)] [!INCLUDE [functions-requirements-visual-studio-code-python](../../includes/functions-requirements-visual-studio-code-python.md)] [!INCLUDE [functions-requirements-visual-studio-code-java](../../includes/functions-requirements-visual-studio-code-java.md)] ::: zone-end + The [ngrok](https://ngrok.com/) utility, which provides a way for Azure to call into your locally running function.
-+ The [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code.
++ [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code, version 5.x or later. > [!NOTE]
-> The Storage Extension for Visual Studio Code is currently in preview.
+> The Azure Storage extension for Visual Studio Code is currently in preview.
## Create a storage account
-Using an event subscription to Azure Storage requires you to use a general-purpose v2 storage account. With the Azure Storage extension installed, you can create this kind of storage account by default from your Visual Studio Code project.
+To use an event subscription with Azure Storage, you'll need a general-purpose v2 storage account. By default, you can create this storage account from your Visual Studio Code project when you have the Azure Storage extension installed.
-1. In Visual Studio Code, open the command palette (press F1), type `Azure Storage: Create Storage Account...`, and then provide the following information at the prompts:
+1. In Visual Studio Code, open the command palette (press F1), enter `Azure Storage: Create Storage Account...`. At the prompts, provide the following information:
- |Prompt|Selection|
- |--|--|
- |**Enter the name of the new storage account**| Type a globally unique name. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. We'll use the same name for the resource group and the function app name, to make it easier. |
- |**Select a location for new resources**| For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.|
+ |Prompt|Action|
+ |--|--|
+ |**Enter the name of the new storage account**| Provide a globally unique name. Storage account names must be 3 to 24 characters long and can contain only lowercase letters and numbers. For easier identification, we'll use the same name for the resource group and the function app name. |
+ |**Select a location for new resources**| For better performance, choose a [region near you](https://azure.microsoft.com/regions/). |
- The extension creates a new general-purpose v2 storage account with the name you provided. The same name is also used for the resource group in which the storage account is created.
+ The extension creates a general-purpose v2 storage account with the name you provided. The same name is also used for the resource group that contains the storage account.
-1. After the storage account is created, open the command palette (press F1) and type `Azure Storage: Create Blob Container...`, and then provide the following information at the prompts:
+1. After you create the storage account, open the command palette (press F1), and enter `Azure Storage: Create Blob Container...`. At the prompts, provide the following information:
- |Prompt|Selection|
- |--|--|
- |**Select a resource**| Choose the name of the storage account you created. |
- |**Enter a name for the new blob container**| Type `samples-workitems`, which is the container name referenced in your code project.|
+ |Prompt|Action|
+ |--|--|
+ |**Select a resource**| Select the storage account that you created. |
+ |**Enter a name for the new blob container**| Enter `samples-workitems`, which is the container name referenced in your code project. |
-Now that you have the blob container, you can create both the function that triggers on this container and the event subscription that delivers events to your function.
+Now that you created the blob container, you can create both the function that triggers on this container and the event subscription that delivers events to your function.
## Create a Blob triggered function
-When you use Visual Studio Code to create a Blob Storage triggered function, you also create a new project. You'll then need to modify the function to consume an event subscription as the source instead of the regular polled container.
+When you create a Blob Storage-triggered function using Visual Studio Code, you also create a new project. You'll need to edit the function to consume an event subscription as the source, rather than use the regular polled container.
-1. Open your function app in Visual Studio Code.
+1. In Visual Studio Code, open your function app.
-1. Open the command palette (press F1) and type `Azure Functions: Create Function...` and select **Create new project**.
+1. Open the command palette (press F1), enter `Azure Functions: Create Function...`, and select **Create new project**.
-1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+1. For your project workspace, select the directory location. Make sure that you either create a new folder or choose an empty folder for the project workspace.
-1. Provide the following information at the prompts:
+ Don't choose a project folder that's already part of a workspace.
- ::: zone pivot="programming-language-csharp"
- |Prompt|Selection|
- |--|--|
- |**Select a language**|Choose `C#`.|
- |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. |
- |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
- |**Provide a function name**|Type `BlobTriggerEventGrid`.|
- |**Provide a namespace** | Type `My.Functions`. |
- |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
- |**Select a storage account**|Choose the storage account you created from the list. |
+1. At the prompts, provide the following information:
+
+ ::: zone pivot="programming-language-csharp"
+ |Prompt|Action|
+ |--|--|
+ |**Select a language**| Select `C#`. |
+ |**Select a .NET runtime**| Select `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. |
+ |**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. |
+ |**Provide a function name**| Enter `BlobTriggerEventGrid`. |
+ |**Provide a namespace** | Enter `My.Functions`. |
+ |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |
+ |**Select a storage account**| Select the storage account you created from the list. |
|**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
- ::: zone-end
- ::: zone pivot="programming-language-python"
- |Prompt|Selection|
+ |**Select how you would like to open your project**| Select `Add to workspace`. |
+ ::: zone-end
+ ::: zone pivot="programming-language-python"
+ |Prompt|Action|
|--|--|
- |**Select a language**|Choose `Python`.|
- |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
- |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
- |**Provide a function name**|Type `BlobTriggerEventGrid`.|
- |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
- |**Select a storage account**|Choose the storage account you created from the list. |
+ |**Select a language**| Select `Python`. |
+ |**Select a Python interpreter to create a virtual environment**| Select your preferred Python interpreter. If an option isn't shown, enter the full path to your Python binary. |
+ |**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. |
+ |**Provide a function name**| Enter `BlobTriggerEventGrid`. |
+ |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |
+ |**Select a storage account**| Select the storage account you created from the list. |
|**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
- ::: zone-end
- ::: zone pivot="programming-language-java"
- |Prompt|Selection|
- |--|--|
- |**Select a language**|Choose `Java`.|
- |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
- | **Provide a group ID** | Choose `com.function`. |
- | **Provide an artifact ID** | Choose `BlobTriggerEventGrid`. |
- | **Provide a version** | Choose `1.0-SNAPSHOT`. |
- | **Provide a package name** | Choose `com.function`. |
+ |**Select how you would like to open your project**| Select `Add to workspace`. |
+ ::: zone-end
+ ::: zone pivot="programming-language-java"
+ |Prompt|Action|
+ |--|--|
+ |**Select a language**| Select `Java`. |
+ |**Select a version of Java**| Select `Java 11` or `Java 8`, the Java version on which your functions run in Azure and that you've locally verified. |
+ | **Provide a group ID** | Select `com.function`. |
+ | **Provide an artifact ID** | Select `BlobTriggerEventGrid`. |
+ | **Provide a version** | Select `1.0-SNAPSHOT`. |
+ | **Provide a package name** | Select `com.function`. |
| **Provide an app name** | Accept the generated name starting with `BlobTriggerEventGrid`. |
- | **Select the build tool for Java project** | Choose `Maven`. |
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
- ::: zone-end
- ::: zone pivot="programming-language-javascript"
- |Prompt|Selection|
- |--|--|
- |**Select a language for your function project**|Choose `JavaScript`.|
- |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
- |**Provide a function name**|Type `BlobTriggerEventGrid`.|
- |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
- |**Select a storage account**|Choose the storage account you created from the list. |
+ | **Select the build tool for Java project** | Select `Maven`. |
+ |**Select how you would like to open your project**| Select `Add to workspace`. |
+ ::: zone-end
+ ::: zone pivot="programming-language-javascript"
+ |Prompt|Action|
+ |--|--|
+ |**Select a language for your function project**| Select `JavaScript`. |
+ |**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. |
+ |**Provide a function name**| Enter `BlobTriggerEventGrid`. |
+ |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |
+ |**Select a storage account**| Select the storage account you created. |
|**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
- ::: zone-end
- ::: zone pivot="programming-language-powershell"
- |Prompt|Selection|
- |--|--|
- |**Select a language for your function project**|Choose `PowerShell`.|
- |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.|
- |**Provide a function name**|Type `BlobTriggerEventGrid`.|
- |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.|
- |**Select a storage account**|Choose the storage account you created from the list. |
+ |**Select how you would like to open your project**| Select `Add to workspace`. |
+ ::: zone-end
+ ::: zone pivot="programming-language-powershell"
+ |Prompt|Action|
+ |--|--|
+ |**Select a language for your function project**| Select `PowerShell`. |
+ |**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. |
+ |**Provide a function name**| Enter `BlobTriggerEventGrid`. |
+ |**Select setting from "local.settings.json"**| Select `Create new local app setting`. |
+ |**Select a storage account**| Select the storage account you created. |
|**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. |
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
- ::: zone-end
+ |**Select how you would like to open your project**| Select `Add to workspace`. |
+ ::: zone-end
-1. When prompted, choose **Select storage account** and then **Add to workspace**.
+1. After the prompt appears, select **Select storage account** > **Add to workspace**.
-To simplify things, this tutorial reuses the same storage account with your function app. In production, you might want to use a separate storage account for your function app. For more information, see [Storage considerations for Azure Functions](storage-considerations.md).
+For simplicity, this tutorial reuses the same storage account with your function app. However, in production, you might want to use a separate storage account with your function app. For more information, see [Storage considerations for Azure Functions](storage-considerations.md).
-## Upgrade the Blob Storage extension
+## Upgrade the Storage extension
-To be able to use the Event Grid-based Blog Storage trigger, your function needs to be using version 5.x of the Blob Storage extension.
+To use the Event Grid-based Blob Storage trigger, your function requires at least version 5.x of the Storage extension.
-To upgrade your project to use the latest extension, run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window.
+To upgrade your project to the required extension version, run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window:
<!# [In-process](#tab/in-process) --> ```bash
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version
``` --> ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java"
-1. Open the host.json project file and inspect the `extensionBundle` element.
+1. Open the host.json project file, and inspect the `extensionBundle` element.
1. If `extensionBundle.version` isn't at least `3.3.0`, replace `extensionBundle` with the following version:
- ```json
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
- ```
+ ```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.0, 4.0.0)"
+ }
+ ```
::: zone-end ## Update the function to use events ::: zone pivot="programming-language-csharp"
-Open the BlobTriggerEventGrid.cs file and, add `Source = BlobTriggerSource.EventGrid` to the parameters for the blob trigger attribute, as shown in the following example:
+In the BlobTriggerEventGrid.cs file, add `Source = BlobTriggerSource.EventGrid` to the parameters for the Blob trigger attribute, for example:
```csharp [FunctionName("BlobTriggerCSharp")]
public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTri
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes"); } ```
-After the function is created add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example:
+After you create the function, in the function.json configuration file, add `"source": "EventGrid"` to the `myBlob` binding, for example:
```json {
After the function is created add `"source": "EventGrid"` to the `myBlob` bindin
] } ```
-1. Replace contents of the generated `Function.java` file with the following code and rename the file to `BlobTriggerEventGrid.java`:
-
- ```java
- package com.function;
-
- import com.microsoft.azure.functions.annotation.*;
- import com.microsoft.azure.functions.*;
-
- /**
- * Azure Functions with Azure Blob trigger.
- */
- public class BlobTriggerEventGrid {
- /**
- * This function will be invoked when a new or updated blob is detected at the specified path. The blob contents are provided as input to this function.
- */
- @FunctionName("BlobTriggerEventGrid")
- @StorageAccount("glengatesteventgridblob_STORAGE")
- public void run(
- @BlobTrigger(name = "content", path = "samples-workitems/{name}", dataType = "binary", source = "EventGrid" ) byte[] content,
- @BindingName("name") String name,
- final ExecutionContext context
- ) {
- context.getLogger().info("Java Blob trigger function processed a blob. Name: " + name + "\n Size: " + content.length + " Bytes");
- }
+1. In the generated `Function.java` file, replace contents with the following code, and rename the file to `BlobTriggerEventGrid.java`:
+
+ ```java
+ package com.function;
+
+ import com.microsoft.azure.functions.annotation.*;
+ import com.microsoft.azure.functions.*;
+
+ /**
+ * Azure Functions with Azure Blob trigger.
+ */
+ public class BlobTriggerEventGrid {
+ /**
+ * This function will be invoked when a new or updated blob is detected at the specified path. The blob contents are provided as input to this function.
+ */
+ @FunctionName("BlobTriggerEventGrid")
+ @StorageAccount("glengatesteventgridblob_STORAGE")
+ public void run(
+ @BlobTrigger(name = "content", path = "samples-workitems/{name}", dataType = "binary", source = "EventGrid" ) byte[] content,
+ @BindingName("name") String name,
+ final ExecutionContext context
+ ) {
+ context.getLogger().info("Java Blob trigger function processed a blob. Name: " + name + "\n Size: " + content.length + " Bytes");
+ }
} ```
-2. Remove the associated unit test file, which is no longer relevant to the new trigger type.
-After the function is created, add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example:
-
+1. Remove the associated unit test file, which no longer applies to the new trigger type.
+After you create the function, in the function.json configuration file, add `"source": "EventGrid"` to the `myBlob` binding, for example:
+ ```json {
- "bindings": [
- {
- "name": "myblob",
- "type": "blobTrigger",
- "direction": "in",
- "path": "samples-workitems/{name}",
- "source": "EventGrid",
- "connection": "<NAMED_STORAGE_CONNECTION>"
- }
- ]
+ "bindings": [
+ {
+ "name": "myblob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "source": "EventGrid",
+ "connection": "<NAMED_STORAGE_CONNECTION>"
+ }
+ ]
}
- ```
+```
::: zone-end ## Start local debugging
With the entire topology now running Azure, it's time to verify that everything
## Next steps - [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md)-- [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md)
+- [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md)
azure-functions Machine Learning Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/machine-learning-pytorch.md
Title: Deploy a PyTorch model as an Azure Functions application
description: Use a pre-trained ResNet 18 deep neural network from PyTorch with Azure Functions to assign 1 of 1000 ImageNet labels to an image. Previously updated : 02/28/2020 Last updated : 01/05/2023 # Tutorial: Deploy a pre-trained image classification model to Azure Functions with PyTorch
-In this article, you learn how to use Python, PyTorch, and Azure Functions to load a pre-trained model for classifying an image based on its contents. Because you do all work locally and create no Azure resources in the cloud, there is no cost to complete this tutorial.
+In this article, you learn how to use Python, PyTorch, and Azure Functions to load a pre-trained model for classifying an image based on its contents. Because you do all work locally and create no Azure resources in the cloud, there's no cost to complete this tutorial.
> [!div class="checklist"] > * Initialize a local environment for developing Azure Functions in Python.
To modify the `classify` function to classify an image based on its contents, yo
1. Verify that the *classify* folder contains files named *predict.py* and *labels.txt*. If not, check that you ran the command in the *start* folder.
-1. Open *start/requirements.txt* in a text editor and add the dependencies required by the helper code, which should look like the following:
+1. Open *start/requirements.txt* in a text editor and add the dependencies required by the helper code, which should look like:
```txt azure-functions requests -f https://download.pytorch.org/whl/torch_stable.html
- torch==1.5.0+cpu
- torchvision==0.6.0+cpu
+ torch==1.13.0+cpu
+ torchvision==0.14.0+cpu
```
+ > [!Tip]
+ > The versions of torch and torchvision must match values listed in the version table of the [PyTorch vision repo](https://github.com/pytorch/vision).
+ 1. Save *requirements.txt*, then run the following command from the *start* folder to install the dependencies.
Installation may take a few minutes, during which time you can proceed with modi
> > In a production application, change `*` to the web page's specific origin for added security.
-1. Save your changes, then assuming that dependencies have finished installing, start the local function host again with `func start`. Be sure to run the host in the *start* folder with the virtual environment activated. Otherwise the host will start, but you will see errors when invoking the function.
+1. Save your changes, then assuming that dependencies have finished installing, start the local function host again with `func start`. Be sure to run the host in the *start* folder with the virtual environment activated. Otherwise the host will start, but you'll see errors when invoking the function.
``` func start
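
As background on what the tutorial's *predict.py* helper does, the following standalone sketch (not the tutorial's code; the file name is a placeholder and the preprocessing values are the usual ImageNet defaults) classifies an image with torchvision's pretrained ResNet-18:

```python
# Standalone sketch: classify one image with a pretrained ResNet-18.
# Requires torch, torchvision, and Pillow; "example.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)

# Index of the highest-scoring class; map it to a label using the labels.txt file.
print(int(logits.argmax(dim=1)))
```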
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 09/20/2022 Last updated : 01/09/2023 # Compare Azure Government and global Azure
The following features have known limitations in Azure Government:
- Limitations with multi-factor authentication: - Trusted IPs isn't supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and shouldn't be required based on the user's current IP address.
+### [Microsoft Authentication Library (MSAL)](../active-directory/develop/msal-overview.md)
+
+The Microsoft Authentication Library (MSAL) enables developers to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs. For feature variations and limitations, see [National clouds and MSAL](../active-directory/develop/msal-national-cloud.md).
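
As a hedged illustration of the main difference when targeting Azure Government, the authority host changes from the public cloud endpoint to `login.microsoftonline.us`; the client ID, tenant, and scope in this MSAL Python sketch are placeholders:

```python
# Sketch only: interactive sign-in against the Azure Government cloud with MSAL for Python.
# The client ID, tenant ID, and scope below are placeholders, not real values.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.us/<tenant-id>",  # Azure Government authority
)

# Opens a browser for sign-in and returns a dictionary containing the token (or an error).
result = app.acquire_token_interactive(scopes=["https://graph.microsoft.us/.default"])
print(result.get("access_token", result))
```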
+ ## Management and governance This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure and other Microsoft cloud services compliance scope description: This article tracks FedRAMP and DoD compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.++ recommendations: false Previously updated : 11/04/2022 Last updated : 01/09/2023 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | | [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; | | [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | | [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | | [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | | [Batch](../../batch/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Blueprints](../../governance/blueprints/index.yml) | &#x2705; | &#x2705; | | [Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive | [Cognitive
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Cognitive | [Cognitive
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | | [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dataverse](/powerapps/maker/data-platform/) (incl. [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake)) | &#x2705; | &#x2705; | | [DDoS Protection](../../ddos-protection/index.yml) | &#x2705; | &#x2705; | | [Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; | | [DNS](../../dns/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | | [Dynamics 365 Commerce](/dynamics365/commerce/)| &#x2705; | &#x2705; | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Event Hubs](../../event-hubs/index.yml) | &#x2705; | &#x2705; | | [ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; | | [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | | [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | | [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; | | [Key Vault](../../key-vault/index.yml) | &#x2705; | &#x2705; | | [Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Lighthouse](../../lighthouse/index.yml) | &#x2705; | &#x2705; | | [Load Balancer](../../load-balancer/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; | | [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Defender for Identity](/defender-for-identity/) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Graph](/graph/) | &#x2705; | &#x2705; | | [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Sentinel](../../sentinel/index.yml) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | | [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | | [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | | [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; | | [Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; | | [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) | &#x2705; | &#x2705; | | [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; | | [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; | | [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Time Series Insights](../../time-series-insights/index.yml) | &#x2705; | &#x2705; | | [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | | [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | | [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | | [VM Image Builder](../../virtual-machines/image-builder-overview.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: September 2022*
+*Last updated: January 2023*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; | | | |
| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; | | | |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Blueforce Development Corporation](https://www.blueforcedev.com/)| |[Booz Allen Hamilton](https://www.boozallen.com/)| |[Bridge Partners LLC](https://www.bridgepartnersllc.com)|
-|[C2 Technology Solutions](https://www.c2techsol.com)|
+|[C2 Technology Solutions](https://c2techsol.com/)|
|[CACI Inc - Federal](https://www.caci.com/)| |[Cambria Solutions, Inc.](https://www.cambriasolutions.com/)| |[Capgemini Government Solutions LLC](https://www.capgemini.com/us-en/service/capgemini-government-solutions/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[CBTS](https://www.cbts.com/)| |[CDO Technologies Inc.](https://www.cdotech.com/contact/)| |[CDW-G, LLC](https://www.cdwg.com)|
-|[Centurylink](https://www.centurylink.com/public-sector/federal-government.html)|
+|[Centurylink](https://www.centurylink.com/)|
|[cFocus Software Incorporated](https://cfocussoftware.com)| |[CGI Federal, Inc.](https://www.cgi.com/en/us-federal)| |[CGI Technologies and Solutions Inc.](https://www.cgi.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Protected Trust](https://www.proarch.com/)| |[Protera Technologies](https://www.protera.com)| |[Pueo Business Solutions, LLC](https://www.pueo.com/)|
-|[Quad M Tech](https://www.quadmtech.com/)|
|[Quality Technology Services LLC](https://www.qtsdatacenters.com/)| |[Quest Media & Supplies Inc.](https://www.questsys.com/)| |[Quisitive](https://quisitive.com)|
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Maps Creator provides the following
* [Feature State service][FeatureState]. Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems.
-* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](http://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
+* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](https://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Applications can use the Render V2-Get Map Tile API to request tilesets. The til
### Web Feature Service API
-You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r4.html). You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
+You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](https://docs.opengeospatial.org/DRAFTS/17-069r4.html). You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
### Alias API
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
The following image demonstrates how the colors are chosen for the above express
### Step expression
-A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](http://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
+A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](https://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
The `interpolate` expression has the following formats:
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
The following image demonstrates how the colors are chosen for the above express
##### Step expression
-A step expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](http://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
+A step expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](https://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
Step expressions return the output value of the stop just before the input value, or the from value if the input is less than the first stop.
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
var layer = new atlas.layer.BubbleLayer(datasource, null, {
``` The following image demonstrates how the colors are chosen for the above expression.
-
+ ![Interpolate expression example](media/how-to-expressions/interpolate-expression-example.png) ### Step expression
-A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](http://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
+A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](https://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
The following pseudocode defines the structure of the `step` expression.
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
After you complete the prerequisites, you should have a simple web application c
To implement dynamic styling, a feature - such as a meeting or conference room - must be referenced by its feature `id`. You use the feature `id` to update the dynamic property or *state* of that feature. To view the features defined in a dataset, you can use one of the following methods:
-* WFS API (Web Feature service). You can use the [WFS API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r4.html). The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
+* WFS API (Web Feature service). You can use the [WFS API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](https://docs.opengeospatial.org/DRAFTS/17-069r4.html). The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
* Implement customized code that a user can use to select features on a map using your web application. We use this option in this article.
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
The following are some of the key differences between the Bing Maps and Azure Ma
> The Position class has a static helper function for importing coordinates that are in `latitude, longitude` format. The [atlas.data.Position.fromLatLng](/javascript/api/azure-maps-control/atlas.data.position) function can often replace the `new Microsoft.Maps.Location` function in Bing Maps code.
* Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in data sources and is connected to rendering layers that Azure Maps code uses to render the data. This approach provides an enhanced performance benefit. Additionally, many layers support data-driven styling where business logic can be added to layer style options that will change how individual shapes are rendered within a layer based on properties defined in the shape.
-* Azure Maps provides a bunch of useful spatial math functions in the `atlas.math` namespace, however these differ from those in the Bing Maps spatial math module. The primary difference is that Azure Maps doesn't provide built-in functions for binary operations such as union and intersection, however, since Azure Maps is based on GeoJSON that is an open standard, there are many open-source libraries available. One popular option that works well with Azure Maps and provides a ton of spatial math capabilities is [turf js](http://turfjs.org/).
+* Azure Maps provides a bunch of useful spatial math functions in the `atlas.math` namespace, however these differ from those in the Bing Maps spatial math module. The primary difference is that Azure Maps doesn't provide built-in functions for binary operations such as union and intersection, however, since Azure Maps is based on GeoJSON that is an open standard, there are many open-source libraries available. One popular option that works well with Azure Maps and provides a ton of spatial math capabilities is [turf js](https://turfjs.org/).
See also the [Azure Maps Glossary](./glossary.md) for an in-depth list of terminology associated with Azure Maps.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
This approach however will only snap to the road segments that are loaded within
**Using the Azure Maps vector tiles directly to snap coordinates**
-The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the sectors at zoom level 15, so you will want to retrieve tiles from there. You can then use the [quadtree tile pyramid math](./zoom-levels-and-tile-grid.md) to determine which tiles are needed and convert the tiles to geometries. From there, a spatial math library, such as [turf js](http://turfjs.org/) or [NetTopologySuite](https://github.com/NetTopologySuite/NetTopologySuite), can be used to calculate the closest line segments.
+The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the sectors at zoom level 15, so you will want to retrieve tiles from there. You can then use the [quadtree tile pyramid math](./zoom-levels-and-tile-grid.md) to determine which tiles are needed and convert the tiles to geometries. From there, a spatial math library, such as [turf js](https://turfjs.org/) or [NetTopologySuite](https://github.com/NetTopologySuite/NetTopologySuite), can be used to calculate the closest line segments.
## Retrieve a map image (Static Map)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 1/3/2023 Last updated : 1/5/2023
Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents:
## Install the agent and configure data collection
-Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), where you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. The rules are independent of the workspace and the virtual machine, which means you can define a rule once and reuse it across machines and environments.
+Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), where you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. You can define a rule to send data from multiple machines to multiple destinations across regions and tenants.
+
+> [!NOTE]
+> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
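To make the "define once, reuse across machines" idea concrete, the following is a minimal, non-authoritative sketch that associates an already-created data collection rule with a virtual machine through the Azure Resource Manager REST API. The resource IDs, association name, and API version are illustrative placeholders, not values taken from this article.

```powershell
# Sketch: associate an existing data collection rule (DCR) with a target VM.
# All IDs and the API version below are placeholders; adjust them to your environment.
$vmId  = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$dcrId = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"

# The association is an extension resource created on the scope of the target machine.
$body = @{ properties = @{ dataCollectionRuleId = $dcrId } } | ConvertTo-Json -Depth 5

Invoke-AzRestMethod `
    -Path "$vmId/providers/Microsoft.Insights/dataCollectionRuleAssociations/my-dcr-association?api-version=2021-09-01-preview" `
    -Method PUT `
    -Payload $body
```

The same `$dcrId` can be associated with any number of machines, which is the reuse the paragraph above describes.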
**To collect data using Azure Monitor Agent:**
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
| Windows event logs (including sysmon events) | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
- | Text logs and Windows IIS logs | Log Analytics workspace - custom tables | [Collect text logs with Azure Monitor Agent](data-collection-text-log.md) |
+ | Text logs and Windows IIS logs | Log Analytics workspace - custom table(s) created manually | [Collect text logs with Azure Monitor Agent](data-collection-text-log.md) |
<sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
In addition to the generally available data collection listed above, Azure Monit
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](/azure/network-watcher/azure-monitor-agent-with-connection-monitor) |
## Supported regions
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 12/19/2022 Last updated : 1/5/2023
We strongly recommended to update to the latest version at all times, or opt in
## Version details

| Release Date | Release notes | Windows | Linux |
|:|:|:|:|
-| Oct 2022 | **Windows** <ul><li>Increased default retry timeout for data upload from 4 to 8 hours</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li></ul> | 1.10.0.0 | 1.24.2 |
+| Oct 2022 | **Windows** <ul><li>Increased default retry timeout for data upload from 4 to 8 hours</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 |
| Sep 2022 | Reliability improvements | 1.9.0.0 | None | | August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to default offline cache size of 10gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 | | July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The [data collection rule](../essentials/data-collection-rule-overview.md) defin
- How Azure Monitor transforms events during ingestion. - The destination Log Analytics workspace and table to which Azure Monitor sends the data.
-Create the data collection rule in the *same region* as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
+
+> [!NOTE]
+> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
To create the data collection rule in the Azure portal:
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To complete this procedure, you need:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- Create [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor Agent sends to which destinations, as described in the next section
+- Associate the data collection rule to specific virtual machines.
## Create a data collection rule
-Create the data collection rule in the *same region* as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
+> [!NOTE]
+> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
### [Portal](#tab/portal) 1. On the **Monitor** menu, select **Data Collection Rules**.
Create the data collection rule in the *same region* as your Log Analytics works
1. Select **Add data source** and then select **Review + create** to review the details of the data collection rule and association with the set of virtual machines. 1. Select **Create** to create the data collection rule.
-> [!NOTE]
-> It might take up to 5 minutes for data to be sent to the destinations after you create the data collection rule and associations.
- ### [API](#tab/api) 1. Create a DCR file by using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
For sample templates, see [Azure Resource Manager template samples for data coll
+> [!NOTE]
+> It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule.
+
## Filter events using XPath queries

You're charged for any data you collect in a Log Analytics workspace. Therefore, you should only collect the event data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
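In a data collection rule, each Windows event log XPath query takes the form `LogName!XPathQuery`. Before adding a query to a rule, you can sanity-check it locally against the Windows event log. The log name and event ID below are examples only, not values taken from this article:

```powershell
# Test an XPath filter locally before using it in a data collection rule.
# In a DCR, the equivalent entry would be written as: Security!*[System[(EventID=4624)]]
Get-WinEvent -LogName 'Security' -FilterXPath '*[System[(EventID=4624)]]' -MaxEvents 1
```

If the command returns an event, the filter is syntactically valid; if it throws an error, adjust the XPath before adding it to the rule.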
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Custom table](../logs/create-custom-table.md) to send your logs to. - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. - A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file. - The log file must be stored on the local drive of the machine on which Azure Monitor Agent is running. - Each entry in the log file must be delineated with an end of line. - The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or renaming where a file is moved and a new file with the same name is opened.
+## Create a custom table
+
+This step creates a new custom table; a custom table is any table whose name ends in \_CL. Currently, a direct REST call to the table management endpoint is used to create a table. The script at the end of this section is the input to that REST call.
+
+The table created by the script has two columns, TimeGenerated (datetime) and RawData (string), which is the default schema for a custom text log. If you know your final schema, you can add columns in the script before creating the table. If not, you can always add columns later in the Log Analytics table UI.
+
+The easiest way to make the REST call is from an Azure Cloud Shell PowerShell session. To open the shell, go to the Azure portal, select the Cloud Shell button, and select PowerShell. If this is your first time using Azure Cloud Shell, you'll need to walk through the one-time configuration wizard.
+
+
+Copy and paste the following script into PowerShell to create the table in your workspace. Make sure to replace the {subscription}, {resourcegroup}, {WorkspaceName}, and {TableName} placeholders in the script, and make sure that there are no extra blanks at the beginning or end of the parameters.
+
```powershell
+$tableParams = @'
+{
+ "properties": {
+ "schema": {
+ "name": "{TableName}_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "DateTime"
+ },
+ {
+ "name": "RawData",
+ "type": "String"
+ }
+ ]
+ }
+ }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/{TableName}_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
+
+Press Enter to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created, go to your workspace and select **Tables** in the left pane. You should see your table in the list.
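If you'd rather confirm the new table from the same Cloud Shell session instead of the portal, a GET against the same endpoint (with the same placeholders as in the script above) should return the table definition. This is only a convenience check, not a required step:

```powershell
# Read back the table definition to confirm it was created.
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/{TableName}_CL?api-version=2021-12-01-preview" -Method GET
```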
+
+
## Create data collection rule to collect text logs
The data collection rule defines:
The data collection rule defines:
- How Azure Monitor transforms events during ingestion. - The destination Log Analytics workspace and table to which Azure Monitor sends the data.
-Create the data collection rule in the *same region* as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
> [!NOTE]
-> It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule.
+> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+ ### [Portal](#tab/portal) To create the data collection rule in the Azure portal:
To create the data collection rule in the Azure portal:
+> [!NOTE]
+> It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule.
## Troubleshoot Use the following steps to troubleshoot collection of text logs.
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
If your IT security policies do not allow computers on your network to connect t
Before starting, review the following requirements. >[!Note]
->From 1 February 2023, System Center Operations Manager version lower than [2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents) will stop sending data to Log Analytics workspace. Ensure your agents are on SCOM Agent version 10.19.10177.0 ([2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents) or later) or 10.22.10056.0 ([2022 RTM](/system-center/scom/release-build-versions?view=sc-om-2022#agents)) and SCOM Management Group version is SCOM 2022 & 2019 UR3 or later version.
+>From 1 February 2023, System Center Operations Manager version lower than [2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) will stop sending data to Log Analytics workspace. Ensure your agents are on SCOM Agent version 10.19.10177.0 ([2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents&preserve-view=true) or later) or 10.22.10056.0 ([2022 RTM](/system-center/scom/release-build-versions?view=sc-om-2022#agents&preserve-view=true)) and SCOM Management Group version is SCOM 2022 & 2019 UR3 or later version.
* Azure Monitor supports the following: * System Center Operations Manager 2022
Before starting, review the following requirements.
>[!NOTE] >Recent changes to Azure APIs will prevent customers from being able to successfully configure integration between their management group and Azure Monitor for the first time. For customers who have already integrated their management group with the service, you are not impacted unless you need to reconfigure your existing connection. >A new management pack has been released for the following versions of Operations
-> - For System Center Operations Manager 2019, this management pack is included with the source media and installed during setup of a new management group or during an upgrade.
->- Operations Manager 1801 management pack is also applicable for Operations Manager 1807.
->- For System Center Operations Manager 1801, download the management pack from [here](https://www.microsoft.com/download/details.aspx?id=57173).
->- For System Center 2016 - Operations Manager, download the management pack from [here](https://www.microsoft.com/download/details.aspx?id=57172).
+> - For System Center Operations Manager 2019 and newer, this management pack is included with the source media and installed during setup of a new management group or during an upgrade.
+>- For System Center Operations Manager 1801/1807, download the management pack from [here](https://www.microsoft.com/download/details.aspx?id=57173).
+>- For System Center Operations Manager 2016, download the management pack from [here](https://www.microsoft.com/download/details.aspx?id=57172).
>- For System Center Operations Manager 2012 R2, download the management pack from [here](https://www.microsoft.com/download/details.aspx?id=57171).
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
When you define the function action, the function's HTTP trigger endpoint and ac
You may have a limited number of function actions per action group.
+ > [!NOTE]
+ >
+ > The function must have access to the storage account. If not, no keys will be available and the function URI will not be accessible.
+ ### ITSM An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md).
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
To identify weekly seasonality, the Dynamic Thresholds model requires at least t
## Dynamic Thresholds is showing values that are not within the range of expected values
-When a metric exhibits large fluctuation, Dynamic Thresholds builds a wider model around the metric values. This model can result in a lower border below zero when the metric only has positive values, or in an upper border above 100% when the metric can't exceed 100%. This scenario can happen when:
+When a metric value exhibits large fluctuations, dynamic thresholds may build a wide model around the metric values, which can result in a lower or higher boundary than expected. This scenario can happen when:
- The sensitivity is set to low. - The metric exhibits an irregular behavior with high variance, which appears as spikes or dips in the data.
-When the lower bound has a negative value, it's plausible for the metric to reach a zero value given the metric's irregular behavior. Consider choosing a higher sensitivity or a larger **Aggregation granularity (Period)** to make the model less sensitive. Or, use the **Ignore data before** option to exclude a recent irregularity from the historical data used to build the model.
+Consider choosing a higher sensitivity or selecting a larger **Aggregation granularity (Period)** to make the model boundaries tighter and closer to the metric's typical values. You can also use the **Ignore data before** option to exclude a recent irregularity from the historical data used to build the model.
## The Dynamic Thresholds alert rule is too noisy or fires too much
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Smart detection notifications are enabled by default. They are sent to users tha
Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
-## FAQ
+## Frequently asked questions
* *So, Microsoft staff look at my data?* * No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md).
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Several other community-supported Application Insights SDKs exist. However, Azur
## Troubleshooting
-### FAQ
+### Frequently asked questions
Review [frequently asked questions](../faq.yml). ### Microsoft Q&A questions forum
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section will guide you through manually adding Application Insights to a te
<Add Type="Microsoft.ApplicationInsights.Extensibility.AutocollectedMetricsExtractor, Microsoft.ApplicationInsights" /> <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel"> <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
- <ExcludedTypes>Trace</ExcludedTypes>
+ <ExcludedTypes>Event</ExcludedTypes>
</Add> <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel"> <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Using various authentication systems can be cumbersome and risky because it's di
The following are prerequisites to enable Azure AD authenticated ingestion.
+- Must be in public cloud
- Familiarity with: - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md). - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
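The prerequisites above point to managed identities and service principals. As a non-authoritative illustration of how such an identity is typically authorized for Azure AD ingestion, the following sketch assigns the Monitoring Metrics Publisher role on an Application Insights resource; the object ID and resource ID are placeholders, not values from this article:

```powershell
# Sketch: allow an identity (managed identity or service principal) to send telemetry
# to an Application Insights resource by granting it the Monitoring Metrics Publisher role.
New-AzRoleAssignment `
    -ObjectId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Monitoring Metrics Publisher" `
    -Scope "/subscriptions/<sub>/resourceGroups/<rg>/providers/microsoft.insights/components/<app-name>"
```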
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
# What is auto-instrumentation for Azure Monitor Application Insights?
-Auto-instrumentation collects [Application Insights](app-insights-overview.md) [telemetry](data-model.md).
+Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model.md) (metrics, requests and dependencies) available in your [Application Insights resource](create-workspace-resource.md).
> [!div class="checklist"] > - No code changes required
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs
-description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model.
+description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model.
Last updated 11/15/2022
# Migrate to workspace-based Application Insights resources
-This article walks through migrating a classic Application Insights resource to a workspace-based resource.
+This article walks you through migrating a classic Application Insights resource to a workspace-based resource.
Workspace-based resources: > [!div class="checklist"]
-> - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md)
-> - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md)
-> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location
-> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources
-> - Eliminate the need for cross-app/workspace queries
-> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml)
-> - Do not require changing instrumentation keys after migration from a Classic resource
+> - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md).
+> - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
+> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources.
+> - Eliminate the need for cross-app/workspace queries.
+> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml).
+> - Don't require changing instrumentation keys after migration from a classic resource.
## New capabilities
-Workspace-based Application Insights allow you to take advantage of the latest capabilities of Azure Monitor and Log Analytics:
+Workspace-based Application Insights resources allow you to take advantage of the latest capabilities of Azure Monitor and Log Analytics:
* [Customer-managed keys](../logs/customer-managed-keys.md) provide encryption at rest for your data with encryption keys that only you have access to. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link the Azure platform as a service (PaaS) to your virtual network by using private endpoints.
Workspace-based Application Insights allow you to take advantage of the latest c
- Encryption-at-rest policy. - Lifetime management policy. - Network access for all data associated with Application Insights Profiler and Snapshot Debugger.
-* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. Otherwise, pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights.
+* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. Otherwise, billing for pay-as-you-go data ingestion and data retention in Log Analytics is similar to the billing in Application Insights.
* Data is ingested faster via Log Analytics streaming ingestion. > [!NOTE]
-> After you migrate to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources might be stored in a common Log Analytics workspace. You'll still be able to pull data from a specific Application Insights resource, as described in the section [Understand log queries](#understand-log-queries).
+> After you migrate to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources might be stored in a common Log Analytics workspace. You can still pull data from a specific Application Insights resource, as described in the section [Understand log queries](#understand-log-queries).
## Migration process
-When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
+When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate changes the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table).
-*The migration process is permanent and can't be reversed*. After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
+*The migration process is permanent and can't be reversed.* After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource, see the [Workspace-based resource creation guide](create-workspace-resource.md).
If you don't need to migrate an existing resource, and instead want to create a
- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting: - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode).- - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md). - **Continuous export** isn't supported for workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs. > [!CAUTION]
- > * Diagnostic settings use a different export format/schema than continuous export. Migrating will break any existing integrations with Azure Stream Analytics.
+ > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics.
> * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export). -- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
+- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.
> [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period.
- > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until the data exceeds the retention period.
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#set-retention-and-archive-policy-by-table).
+ > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period.
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. - Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
To migrate a classic Application Insights resource to a workspace-based resource
1. Select **Migrate to Workspace-based**.
- ![Screenshot that shows the Migrate to Workspace-based resource button.](./media/convert-classic-resource/migrate.png)
+ ![Screenshot that shows the Migrate to Workspace-based button.](./media/convert-classic-resource/migrate.png)
-1. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
+1. Select the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
> [!NOTE]
- > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster than that. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables.
+ > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables.
![Screenshot that shows the Migration wizard UI with the option to select target workspace.](./media/convert-classic-resource/migration.png)
- After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane:
+ After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane.
- ![Screenshot that shows the Workspace Name](./media/create-workspace-resource/workspace-name.png)
+ ![Screenshot that shows the Workspace name.](./media/create-workspace-resource/workspace-name.png)
Selecting the blue link text takes you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. > [!TIP]
-> After you migrate to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
+> After you migrate to a workspace-based Application Insights resource, use the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
## Understand log queries
-We still provide full backward compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
+We provide full backward compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first go to your Log Analytics workspace.
-To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
+To ensure the queries run successfully, validate that the query's fields align with the [new schema fields](#appmetrics).
-If you have multiple Application Insights resources that store telemetry in one Log Analytics workspace, but you want to query data from one specific Application Insights resource, you have two options:
+You might have multiple Application Insights resources that store telemetry in one Log Analytics workspace, but you want to query data from one specific Application Insights resource. You have two options:
-- **Option 1:** Go to the desired Application Insights resource and select the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.-- **Option 2:** Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and select the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables.
+- Go to your Application Insights resource and select the **Logs** tab. All queries from this tab automatically pull data from the selected Application Insights resource.
+- Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and select the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables.
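If you prefer to run that `_ResourceId` filter from a script rather than the portal, here's a minimal sketch that assumes the Az.OperationalInsights module; the workspace GUID, resource ID, and the `AppRequests` table are illustrative placeholders:

```powershell
# Query the workspace and keep only rows emitted by one Application Insights resource.
$workspaceId = "00000000-0000-0000-0000-000000000000"   # workspace customer ID (GUID)
$resourceId  = "/subscriptions/<sub>/resourcegroups/<rg>/providers/microsoft.insights/components/<app-name>"

$query = @"
AppRequests
| where _ResourceId =~ '$resourceId'
| take 10
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```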
-Notice that if you query directly from the Log Analytics workspace, you'll only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
+If you query directly from the Log Analytics workspace, you'll only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
> [!NOTE]
-> If you rename your Application Insights resource after you migrate to the workspace-based model, the Application Insights **Logs** tab will no longer show the telemetry collected before renaming. You can see all old and new data on the **Logs** tab of the associated Log Analytics resource.
+> If you rename your Application Insights resource after you migrate to the workspace-based model, the Application Insights **Logs** tab no longer shows the telemetry collected before renaming. You can see all old and new data on the **Logs** tab of the associated Log Analytics resource.
## Programmatic resource migration
For the full Azure CLI documentation for this command, see the [Azure CLI docume
### Azure PowerShell
-The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace based. To create a workspace-based resource with PowerShell, you can use the following Azure Resource Manager templates and deploy with PowerShell.
+The `Update-AzApplicationInsights` PowerShell command doesn't currently support migrating a classic Application Insights resource to workspace based. To create a workspace-based resource with PowerShell, use the following Azure Resource Manager templates and deploy them with PowerShell.
### Azure Resource Manager templates
This section provides templates.
## Modify the associated workspace
-After a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics workspace.
+After you create a workspace-based Application Insights resource, you can modify the associated Log Analytics workspace.
From within the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**.
Yes, they'll continue to work.
### Will migration affect AppInsights API accessing data?
-No. Migration won't affect existing API access to data. After migration, you'll be able to access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes).
+No. Migration won't affect existing API access to data. After migration, you can access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes).
### Will there be any impact on Live Metrics or other monitoring experiences?
No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-
Continuous export doesn't support workspace-based resources.
-You'll need to switch to [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+Switch to [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
## Troubleshooting
This section offers troubleshooting tips for common issues.
**Error message:** "The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
-For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
+For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience remains blocked.
If you can't change the access control mode for security reasons for your current target workspace, create a new Log Analytics workspace to use for the migration.
The legacy **Continuous export** functionality isn't supported for workspace-bas
![Screenshot that shows the Continuous export Disable button.](./media/convert-classic-resource/disable.png)
- - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings won't be saved, select **OK** for this prompt because it doesn't pertain to disabling or enabling continuous export.
+ - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings won't be saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export.
- - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
+ - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
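The same diagnostic setting can be created from PowerShell. The sketch below uses hypothetical resource IDs and assumes the Az.Monitor module (3.x or later); it archives all log categories to a storage account, which is the closest equivalent to what continuous export provided.

```powershell
# Hypothetical resource IDs; a sketch assuming the Az.Monitor module (3.x or later).
$appInsightsId = "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/components/my-appinsights"
$storageId     = "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/myarchiveaccount"

# Export every log category group exposed by the Application Insights resource.
$logs = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"

New-AzDiagnosticSetting -Name "replace-continuous-export" `
    -ResourceId $appInsightsId `
    -StorageAccountId $storageId `
    -Log $logs
```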
### Retention settings
The legacy **Continuous export** functionality isn't supported for workspace-bas
You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
-You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored after you migrate your Application Insights resource.
+You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** in the Log Analytics UI. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.
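If you do want to align retention before you migrate, the workspace default can also be set from PowerShell. A minimal sketch with hypothetical names, assuming the Az.OperationalInsights module:

```powershell
# Hypothetical names; sets the workspace default retention to 90 days to match
# the classic Application Insights default.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace" -RetentionInDays 90
```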
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration by allowing you to analyze data across multiple solutions more easily, and to use the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration. You can analyze data across multiple solutions more easily and use the capabilities of workspaces.
### Classic data structure
-The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data isn't stored in a Log Analytics workspace. It uses the same query language. You create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
> [!NOTE]
-> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
+> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), first go to your Log Analytics workspace. During the preview, selecting **Logs** in the Application Insights pane gives you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](../logs/media/data-platform-logs/logs-structure-ai.png)](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox)
The structure of a Log Analytics workspace is described in [Log Analytics worksp
### Table schemas
-The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
+The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries that use legacy tables.
-Most of the columns have the same name with different capitalization. Since KQL is case sensitive, you'll need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required when you query from within the context of the Log Analytics workspace experience.
+Most of the columns have the same name with different capitalization. KQL is case sensitive, so you need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it's a workspace-based resource. The new property names are required when you query from within the context of the Log Analytics workspace experience.
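As an illustration, the sketch below contrasts a classic query with its workspace-based equivalent and runs the latter against the workspace. The table and column names (`requests` to `AppRequests`, `duration` to `DurationMs`, and so on) follow the mappings in the tables that follow; the resource names are hypothetical, and the Az.OperationalInsights module is assumed.

```powershell
# Classic Application Insights query (still valid in the resource's Logs pane):
#   requests
#   | where timestamp > ago(24h)
#   | summarize avg(duration) by resultCode

# Workspace-based equivalent, using the new table and column names:
$query = @"
AppRequests
| where TimeGenerated > ago(24h)
| summarize avg(DurationMs) by ResultCode
"@

$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace"
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query).Results
```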
#### AppAvailabilityResults
Legacy table: customMetrics
|valueSum|real|ValueSum|real| > [!NOTE]
-> Older versions of Application Insights SDKs used to report standard deviation (`valueStdDev`) in the metrics pre-aggregation. Because adoption in metrics analysis was light, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection endpoint, it gets dropped during ingestion and isn't sent to the Log Analytics workspace. If you're interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
+> Older versions of the Application Insights SDKs reported standard deviation (`valueStdDev`) in the metrics pre-aggregation. Because adoption in metrics analysis was light, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection endpoint, it's dropped during ingestion and isn't sent to the Log Analytics workspace. If you want to use standard deviation in your analysis, use queries against Application Insights raw events.
#### AppPageViews
Legacy table: traces
## Next steps * [Explore metrics](../essentials/metrics-charts.md)
-* [Write Log Analytics queries](../logs/log-query-overview.md)
+* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Title: Azure Application Insights Telemetry Data Model - Telemetry Context | Microsoft Docs
-description: Application Insights telemetry context data model
+ Title: 'Application Insights telemetry data model: Telemetry context | Microsoft Docs'
+description: Learn about the Application Insights telemetry context data model.
Last updated 05/15/2017
# Telemetry context: Application Insights data model
-Every telemetry item may have a strongly typed context field. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information.
-
+Every telemetry item might have a strongly typed context field. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information.
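To make the context fields concrete, the sketch below posts one custom event to the classic ingestion endpoint with several context tags filled in. The tag names come from the public Application Insights schema; the instrumentation key and the values are placeholders, and production code would normally rely on an SDK rather than raw REST calls.

```powershell
# Placeholder values; a sketch of the envelope 'tags' collection that carries telemetry context.
$envelope = @{
    name = "Microsoft.ApplicationInsights.Event"
    time = (Get-Date).ToUniversalTime().ToString("o")
    iKey = "00000000-0000-0000-0000-000000000000"
    tags = @{
        "ai.application.ver"    = "2.1.0"                        # Application version
        "ai.operation.id"       = [guid]::NewGuid().ToString()   # Operation ID
        "ai.operation.name"     = "GET Home/Index"               # Operation name
        "ai.session.id"         = "session-001"                  # Session ID
        "ai.user.id"            = "anon-user-123"                # Anonymous user ID
        "ai.cloud.role"         = "frontend"                     # Cloud role
        "ai.cloud.roleInstance" = "frontend-vm-0"                # Cloud role instance
    }
    data = @{
        baseType = "EventData"
        baseData = @{ ver = 2; name = "ContextDemo" }            # Custom event payload
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "https://dc.services.visualstudio.com/v2/track" -Method Post -Body $envelope -ContentType "application/json"
```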
## Application version
-Information in the application context fields is always about the application that is sending the telemetry. Application version is used to analyze trend changes in the application behavior and its correlation to the deployments.
-
-Max length: 1024
+Information in the application context fields is always about the application that's sending the telemetry. The application version is used to analyze trend changes in the application behavior and its correlation to the deployments.
+Maximum length: 1,024
## Client IP address
-The IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user that initiated the operation in the service. Application Insights extract the geo-location information from the client IP and then truncate it. So client IP by itself can't be used as end-user identifiable information.
-
-Max length: 46
+This field is the IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user who initiated the operation in the service. Application Insights extracts the geo-location information from the client IP and then truncates it. The client IP by itself can't be used as user identifiable information.
+Maximum length: 46
## Device type
-Originally this field was used to indicate the type of the device the end user of the application is using. Today used primarily to distinguish JavaScript telemetry with the device type 'Browser' from server-side telemetry with the device type 'PC'.
-
-Max length: 64
+Originally, this field was used to indicate the type of the device the user of the application is using. Today it's used primarily to distinguish JavaScript telemetry with the device type `Browser` from server-side telemetry with the device type `PC`.
+Maximum length: 64
## Operation ID
-A unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. See [telemetry correlation](./correlation.md) for details. The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
-
-Max length: 128
+This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
+Maximum length: 128
## Parent operation ID
-The unique identifier of the telemetry item's immediate parent. See [telemetry correlation](./correlation.md) for details.
-
-Max length: 128
+This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](./correlation.md).
+Maximum length: 128
## Operation name
-The name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. Operation name is used for finding all the telemetry items for a group of operations (for example 'GET Home/Index'). This context property is used to answer questions like "what are the typical exceptions thrown on this page."
-
-Max length: 1024
+This field is the name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like "What are the typical exceptions thrown on this page?"
+Maximum length: 1,024
## Synthetic source of the operation
-Name of synthetic source. Some telemetry from the application may represent synthetic traffic. It may be web crawler indexing the web site, site availability tests, or traces from diagnostic libraries like Application Insights SDK itself.
-
-Max length: 1024
+This field is the name of the synthetic source. Some telemetry from the application might represent synthetic traffic. It might be the web crawler indexing the website, site availability tests, or traces from diagnostic libraries like the Application Insights SDK itself.
+Maximum length: 1,024
## Session ID
-Session ID - the instance of the user's interaction with the app. Information in the session context fields is always about the end user. When telemetry is sent from a service, the session context is about the user that initiated the operation in the service.
-
-Max length: 64
+Session ID is the instance of the user's interaction with the app. Information in the session context fields is always about the user. When telemetry is sent from a service, the session context is about the user who initiated the operation in the service.
+Maximum length: 64
## Anonymous user ID
-Anonymous user ID. (User.Id) Represents the end user of the application. When telemetry is sent from a service, the user context is about the user that initiated the operation in the service.
+Anonymous user ID (User.Id) represents the user of the application. When telemetry is sent from a service, the user context is about the user who initiated the operation in the service.
-[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. Sampling algorithm attempts to either sample in or out all the correlated telemetry. Anonymous user ID is used for sampling score generation. So anonymous user ID should be a random enough value.
+[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random enough value.
> [!NOTE]
-> The count of anonymous user IDs is not the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user id is allocated. This calculation may result in counting the same physical users multiple times.
+> The count of anonymous user IDs isn't the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user ID is allocated. This calculation might result in counting the same physical users multiple times.
User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
-Using anonymous user ID to store user name is a misuse of the field. Use Authenticated user ID.
-
-Max length: 128
+Using an anonymous user ID to store a username is a misuse of the field. Use an authenticated user ID.
+Maximum length: 128
## Authenticated user ID
-Authenticated user ID. The opposite of anonymous user ID, this field represents the user with a friendly name. This ID is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
+An authenticated user ID is the opposite of an anonymous user ID. This field represents the user with a friendly name. This ID is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
-Use the Application Insights SDK to initialize the Authenticated User ID with a value identifying the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
+Use the Application Insights SDK to initialize the authenticated user ID with a value that identifies the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
-Max length: 1024
-
+Maximum length: 1,024
## Account ID
-The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when user ID and authenticated user ID aren't sufficient. For example, a subscription ID for Azure portal or the blog name for a blogging platform.
-
-Max length: 1024
+The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform.
+Maximum length: 1,024
## Cloud role
-Name of the role the application is a part of. Maps directly to the role name in Azure. Can also be used to distinguish micro services, which are part of a single application.
-
-Max length: 256
+This field is the name of the role of which the application is a part. It maps directly to the role name in Azure. It can also be used to distinguish microservices that are part of a single application.
+Maximum length: 256
## Cloud role instance
-Name of the instance where the application is running. Computer name for on-premises, instance name for Azure.
-
-Max length: 256
+This field is the name of the instance where the application is running. For example, it's the computer name for on-premises or the instance name for Azure.
+Maximum length: 256
## Internal: SDK version
-SDK version. See [this article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md) for information.
-
-Max length: 64
+For more information, see this [SDK version article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md).
+Maximum length: 64
## Internal: Node name This field represents the node name used for billing purposes. Use it to override the standard detection of nodes.
-Max length: 256
-
+Maximum length: 256
## Next steps - Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- See [data model](data-model.md) for Application Insights types and data model.-- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet).-
+- See [Application Insights telemetry data model](data-model.md) for Application Insights types and data model.
+- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet).
azure-monitor Deprecated Java 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/deprecated-java-2x.md
# Application Insights for Java 2.x > [!CAUTION]
-> This article applies to Application Insights Java 2.x, which is no longer recommended.
+> This article applies to Application Insights Java 2.x, which is [no longer recommended](https://azure.microsoft.com/updates/application-insights-java-2x-retirement/).
> > Documentation for the latest version can be found at [Application Insights Java 3.x](./java-in-process-agent.md).
If your project is already set up to use Gradle for build, merge the following c
-#### Questions
+#### Frequently asked questions
* What's the relationship between the `-web-auto`, `-web`, and `-core` components? * `applicationinsights-web-auto` gives you metrics that track HTTP servlet request counts and response times by automatically registering the Application Insights servlet filter at runtime.
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
Title: Web app performance monitoring - Azure Application Insights
-description: How Application Insights fits into the DevOps cycle
+ Title: Web app performance monitoring - Application Insights
+description: How Application Insights fits into the DevOps cycle.
Last updated 12/21/2018 # Deep diagnostics for web apps and services with Application Insights+
+This article explains how Application Insights fits into the DevOps cycle.
+ ## Why do I need Application Insights?
-Application Insights monitors your running web app. It tells you about failures and performance issues, and helps you analyze how customers use your app. It works for apps running on many platforms (ASP.NET, Java EE, Node.js, ...) and is hosted either in the Cloud or on-premises.
+Application Insights monitors your running web app. It tells you about failures and performance issues and helps you analyze how customers use your app. It works for apps running on platforms like ASP.NET, Java EE, and Node.js, whether they're hosted in the cloud or on-premises.
+
+![Image that shows aspects of the complexity of delivering web apps.](./media/devops/010.png)
+
+It's essential to monitor a modern application while it's running. You want to detect failures before your customers do. You also want to discover and fix performance issues that slow things down or cause an inconvenience to your users. When the system is performing to your satisfaction, you also want to know what the users are doing with it. For example, are they using the latest feature? Are they succeeding with it?
+
+Modern web applications are developed in a cycle of continuous delivery:
+
+- Release a new feature or improvement.
+- Observe how well it works for users.
+- Plan the next increment of development based on that knowledge.
-![Aspects of the complexity of delivering web apps](./media/devops/010.png)
+A key part of this cycle is the observation phase. Application Insights provides the tools to monitor a web application for performance and usage.
-It's essential to monitor a modern application while it is running. Most importantly, you want to detect failures before most of your customers do. You also want to discover and fix performance issues that, while not catastrophic, perhaps slow things down or cause some inconvenience to your users. And when the system is performing to your satisfaction, you want to know what the users are doing with it: Are they using the latest feature? Are they succeeding with it?
+The most important aspect of this process is diagnostics and diagnosis. If the application fails, business is lost. The prime role of a monitoring framework is to:
-Modern web applications are developed in a cycle of continuous delivery: release a new feature or improvement; observe how well it works for the users; plan the next increment of development based on that knowledge. A key part of this cycle is the observation phase. Application Insights provides the tools to monitor a web application for performance and usage.
+- Detect failures reliably.
+- Notify you immediately.
+- Present you with the information needed to diagnose the problem.
-The most important aspect of this process is diagnostics and diagnosis. If the application fails, then business is being lost. The prime role of a monitoring framework is therefore to detect failures reliably, notify you immediately, and to present you with the information needed to diagnose the problem. This is exactly what Application Insights does.
+Application Insights performs these tasks.
### Where do bugs come from?
-Failures in web systems typically arise from configuration issues or bad interactions between their many components. The first task when tackling a live site incident is therefore to identify the locus of the problem: which component or relationship is the cause?
+Failures in web systems typically arise from configuration issues or bad interactions between their many components. The first task when you tackle a live site incident is to identify the locus of the problem. Which component or relationship is the cause?
-Some of us, those with gray hair, can remember a simpler era in which a computer program ran in one computer. The developers would test it thoroughly before shipping it; and having shipped it, would rarely see or think about it again. The users would have to put up with the residual bugs for many years.
+In a simpler era, a computer program ran in one computer. Developers tested it thoroughly before shipping it, and after shipping, they rarely saw or thought about it again. Users had to put up with any residual bugs for many years.
-Things are so very different now. Your app has a plethora of different devices to run on, and it can be difficult to guarantee the exact same behavior on each one. Hosting apps in the cloud means bugs can be fixed fast, but it also means continuous competition and the expectation of new features at frequent intervals.
+The process is vastly different now. Your app has a multitude of different devices to run on, and it can be difficult to guarantee the exact same behavior on each one. Hosting apps in the cloud means bugs can be fixed fast. But it also means there's continuous competition and the expectation of new features at frequent intervals.
-In these conditions, the only way to keep a firm control on the bug count is automated unit testing. It would be impossible to manually re-test everything on every delivery. Unit test is now a commonplace part of the build process. Tools such as the Xamarin Test Cloud help by providing automated UI testing on multiple browser versions. These testing regimes allow us to hope that the rate of bugs found inside an app can be kept to a minimum.
+In these conditions, the only way to keep firm control on the bug count is automated unit testing. It's impossible to manually retest everything on every delivery. Unit testing is now a commonplace part of the build process. Tools such as Xamarin Test Cloud help by providing automated UI testing on multiple browser versions. These testing regimes allow us to hope that the rate of bugs found inside an app can be kept to a minimum.
-Typical web applications have many live components. In addition to the client (in a browser or device app) and the web server, there's likely to be substantial backend processing. Perhaps the backend is a pipeline of components, or a looser collection of collaborating pieces. And many of them won't be in your control - they are external services on which you depend.
+Typical web applications have many live components. Along with the client (in a browser or device app) and the web server, there's likely to be substantial back-end processing. Perhaps the back end is a pipeline of components or a loose collection of collaborating pieces. Many of them won't be in your control. They're external services on which you depend.
-In configurations like these, it can be difficult and uneconomical to test for, or foresee, every possible failure mode, other than in the live system itself.
+In configurations like these, it can be difficult and uneconomical to test for, or foresee, every possible failure mode, other than in the live system itself.
-### Questions ...
-Some questions we ask when we're developing a web system:
+### Questions
+Here are some questions to ask when you're developing a web system:
-* Is my app crashing?
-* What exactly happened? - If it failed a request, I want to know how it got there. We need a trace of events...
-* Is my app fast enough? How long does it take to respond to typical requests?
+* Is your app crashing?
+* What exactly happened? If it failed a request, you want to know how it got there. You need a trace of events.
+* Is your app fast enough? How long does it take to respond to typical requests?
* Can the server handle the load? When the rate of requests rises, does the response time hold steady?
-* How responsive are my dependencies - the REST APIs, databases and other components that my app calls. In particular, if the system is slow, is it my component, or am I getting slow responses from someone else?
-* Is my app Up or Down? Can it be seen from around the world? Let me know if it stops....
-* What is the root cause? Was the failure in my component or a dependency? Is it a communication issue?
-* How many users are impacted? If I have more than one issue to tackle, which is the most important?
+* How responsive are your dependencies, such as the REST APIs, databases, and other components that your app calls? In particular, if the system is slow, is it your component, or are you getting slow responses from someone else?
+* Is your app up or down? Can it be seen from around the world? You need to know if it stops.
+* What's the root cause? Was the failure in your component or a dependency? Is it a communication issue?
+* How many users are affected? If you have more than one issue to tackle, which is the most important?
## What is Application Insights?
-![Basic workflow of Application Insights](./media/devops/020.png)
+![Image that shows a basic workflow of Application Insights.](./media/devops/020.png)
-1. Application Insights instruments your app and sends telemetry about it while the app is running. Either you can build the Application Insights SDK into the app, or you can apply instrumentation at runtime. The former method is more flexible, as you can add your own telemetry to the regular modules.
-2. The telemetry is sent to the Application Insights portal, where it is stored and processed. (Although Application Insights is hosted in Microsoft Azure, it can monitor any web apps - not just Azure apps.)
-3. The telemetry is presented to you in the form of charts and tables of events.
+1. Application Insights instruments your app and sends telemetry about it while the app is running. Either you can build the Application Insights SDK into the app or you can apply instrumentation at runtime. The former method is more flexible because you can add your own telemetry to the regular modules.
+1. The telemetry is sent to the Application Insights portal, where it's stored and processed. Although Application Insights is hosted in Azure, it can monitor any web apps, not just Azure apps.
+1. The telemetry is presented to you in the form of charts and tables of events.
-There are two main types of telemetry: aggregated and raw instances.
+There are two main types of telemetry: aggregated and raw instances.
-* Instance data includes, for example, a report of a request that has been received by your web app. You can find for and inspect the details of a request using the Search tool in the Application Insights portal. The instance would include data such as how long your app took to respond to the request, as well as the requested URL, approximate location of the client, and other data.
-* Aggregated data includes counts of events per unit time, so that you can compare the rate of requests with the response times. It also includes averages of metrics such as request response times.
+* Instance data might include a report of a request that's been received by your web app. You can find and inspect the details of a request by using the Search tool in the Application Insights portal. The instance might include data like how long your app took to respond to the request, the requested URL, and the approximate location of the client.
+* Aggregated data includes counts of events per unit time so that you can compare the rate of requests with the response times. It also includes averages of metrics like request response times.
The main categories of data are:
-* Requests to your app (usually HTTP requests), with data on URL, response time, and success or failure.
-* Dependencies - REST and SQL calls made by your app, also with URI, response times and success
+* Requests to your app (usually HTTP requests) with data on URL, response time, and success or failure.
+* Dependencies like REST and SQL calls made by your app, also with URI, response times, and success.
* Exceptions, including stack traces.
-* Page view data, which come from the users' browsers.
-* Metrics such as performance counters, as well as metrics you write yourself.
-* Custom events that you can use to track business events
+* Page view data, which comes from users' browsers.
+* Metrics like performance counters and metrics you write yourself.
+* Custom events that you can use to track business events.
* Log traces used for debugging.
-## Case Study: Real Madrid F.C.
-The web service of [Real Madrid Football Club](https://www.realmadrid.com/) serves about 450 million fans around the world. Fans access it both through web browsers and the Club's mobile apps. Fans cannot only book tickets, but also access information and video clips on results, players and upcoming games. They can search with filters such as numbers of goals scored. There are also links to social media. The user experience is highly personalized, and is designed as a two-way communication to engage fans.
+## Case study: Real Madrid F.C.
+The web service of [Real Madrid Football Club](https://www.realmadrid.com/) serves about 450 million fans around the world. Fans access it through web browsers and the club's mobile apps. Fans can book tickets and also access information and video clips on results, players, and upcoming games. They can search with filters like numbers of goals scored. There are also links to social media. The user experience is highly personalized and is designed as a two-way communication to engage fans.
-The solution [is a system of services and applications on Microsoft Azure](https://www.microsoft.com/inculture/sports/real-madrid/). Scalability is a key requirement: traffic is variable and can reach very high volumes during and around matches.
+The solution [is a system of services and applications on Azure](https://www.microsoft.com/inculture/sports/real-madrid/). Scalability is a key requirement. Traffic is variable and can reach high volumes during and around matches.
-For Real Madrid, it's vital to monitor the system's performance. Azure Application Insights provides a comprehensive view across the system, ensuring a reliable and high level of service.
+For Real Madrid, it's vital to monitor the system's performance. Application Insights provides a comprehensive view across the system to ensure a reliable and high level of service.
-The Club also gets in-depth understanding of its fans: where they are (only 3% are in Spain), what interest they have in players, historical results, and upcoming games, and how they respond to match outcomes.
+The club also gets an in-depth understanding of its fans, such as where they are (only 3% are in Spain), what interest they have in players, historical results, and upcoming games, and how they respond to match outcomes.
-Most of this telemetry data is automatically collected with no added code, which simplified the solution and reduced operational complexity. For Real Madrid, Application Insights deals with 3.8 billion telemetry points each month.
+Most of this telemetry data is automatically collected with no added code, which simplifies the solution and reduces operational complexity. For Real Madrid, Application Insights deals with 3.8 billion telemetry points each month.
-Real Madrid uses the Power BI module to view their telemetry.
+Real Madrid uses the Power BI module to view its telemetry.
-![Power BI view of Application Insights telemetry](./media/devops/080.png)
+![Screenshot that shows a Power BI view of Application Insights telemetry.](./media/devops/080.png)
## Smart detection
-[Proactive diagnostics](../alerts/proactive-diagnostics.md) is a recent feature. Without any special configuration by you, Application Insights automatically detects and alerts you about unusual rises in failure rates in your app. It's smart enough to ignore a background of occasional failures, and also rises that are simply proportionate to a rise in requests. So for example, if there's a failure in one of the services you depend on, or if the new build you just deployed isn't working so well, then you'll know about it as soon as you look at your email. (And there are webhooks so that you can trigger other apps.)
+[Proactive diagnostics](../alerts/proactive-diagnostics.md) is a recent feature. Without any special configuration by you, Application Insights automatically detects and alerts you about unusual rises in failure rates in your app. It's smart enough to ignore a background of occasional failures and also rises that are simply proportionate to an increase in requests.
-Another aspect of this feature performs a daily in-depth analysis of your telemetry, looking for unusual patterns of performance that are hard to discover. For example, it can find slow performance associated with a particular geographical area, or with a particular browser version.
+For example, there might be a failure in one of the services you depend on. Or perhaps the new build you deployed isn't working well. You'll know about it as soon as you look at your email. There are also webhooks so that you can trigger other apps.
-In both cases, the alert not only tells you the symptoms it's discovered, but also gives you data you need to help diagnose the problem, such as relevant exception reports.
+Another aspect of this feature performs a daily in-depth analysis of your telemetry, looking for unusual patterns of performance that are hard to discover. For example, it can find slow performance associated with a particular geographical area or with a specific browser version.
-![Email from proactive diagnostics](./media/devops/030.png)
+In both cases, the alert tells you the symptoms it's discovered. It also gives you the data you need to help diagnose the problem, such as relevant exception reports.
-Customer Samtec said: "During a recent feature cutover, we found an under-scaled database that was hitting its resource limits and causing timeouts. Proactive detection alerts came through literally as we were triaging the issue, very near real time as advertised. This alert coupled with the Azure platform alerts helped us almost instantly fix the issue. Total downtime < 10 minutes."
+![Screenshot that shows email from proactive diagnostics.](./media/devops/030.png)
+
+Customer Samtec said, "During a recent feature cutover, we found an under-scaled database that was hitting its resource limits and causing timeouts. Proactive detection alerts came through literally as we were triaging the issue, very near real time as advertised. This alert coupled with the Azure platform alerts helped us almost instantly fix the issue. Total downtime <10 minutes."
## Live Metrics Stream
-Deploying the latest build can be an anxious experience. If there are any problems, you want to know about them right away, so that you can back out if necessary. Live Metrics Stream gives you key metrics with a latency of about one second.
+Deploying the latest build can be an anxious experience. If there are any problems, you want to know about them right away so that you can back out if necessary. Live Metrics Stream gives you key metrics with a latency of about one second.
-![Live metrics](./media/devops/0040.png)
+![Screenshot that shows live metrics.](./media/devops/0040.png)
-And lets you immediately inspect a sample of any failures or exceptions.
+It lets you immediately inspect a sample of any failures or exceptions.
-![Live failure events](./media/devops/002-live-stream-failures.png)
+![Screenshot that shows live failure events.](./media/devops/002-live-stream-failures.png)
## Application Map
-Application Map automatically discovers your application topology, laying the performance information on top of it, to let you easily identify performance bottlenecks and problematic flows across your distributed environment. It allows you to discover application dependencies on Azure Services. You can triage the problem by understanding if it is code-related or dependency related and from a single place drill into related diagnostics experience. For example, your application may be failing due to performance degradation in SQL tier. With application map, you can see it immediately and drill into the SQL Index Advisor or Query Insights experience.
+Application Map automatically discovers your application topology. It lays the performance information on top of the map to let you easily identify performance bottlenecks and problematic flows across your distributed environment. With Application Map, you can discover application dependencies on Azure services.
+
+You can triage a problem by understanding if it's code related or dependency related. From a single place, you can drill into the related diagnostics experience. For example, your application might be failing because of performance degradation in a SQL tier. With Application Map, you can see it immediately and drill into the SQL Index Advisor or Query Insights experience.
-![Application Map](./media/devops/0050.png)
+![Screenshot that shows an application map.](./media/devops/0050.png)
-## Application Insights Analytics
-With [Analytics](../logs/log-query-overview.md), you can write arbitrary queries in a powerful SQL-like language. Diagnosing across the entire app stack becomes easy as various perspectives get connected and you can ask the right questions to correlate Service Performance with Business Metrics and Customer Experience.
+## Application Insights Log Analytics
+With [Log Analytics](../logs/log-query-overview.md), you can write arbitrary queries in a powerful SQL-like language. Diagnosing across the entire app stack becomes easy as various perspectives get connected. Then you can ask the right questions to correlate service performance with business metrics and customer experience.
-You can query all your telemetry instance and metric raw data stored in the portal. The language includes filter, join, aggregation, and other operations. You can calculate fields and perform statistical analysis. There are both tabular and graphical visualizations.
+You can query all your telemetry instance and metric raw data stored in the portal. The language includes filter, join, aggregation, and other operations. You can calculate fields and perform statistical analysis. Tabular and graphical visualizations are available.
-![Analytics query and results chart](./media/devops/0025.png)
+![Screenshot that shows an analytics query and results chart.](./media/devops/0025.png)
For example, it's easy to:
-* Segment your applicationΓÇÖs request performance data by customer tiers to understand their experience.
+* Segment your application's request performance data by customer tiers to understand their experience.
* Search for specific error codes or custom event names during live site investigations. * Drill down into the app usage of specific customers to understand how features are acquired and adopted. * Track sessions and response times for specific users to enable support and operations teams to provide instant customer support. * Determine frequently used app features to answer feature prioritization questions.
-Customer DNN said: "Application Insights has provided us with the missing part of the equation for being able to combine, sort, query, and filter data as needed. Allowing our team to use their own ingenuity and experience to find data with a powerful query language has allowed us to find insights and solve problems we didn't even know we had. A lot of interesting answers come from the questions starting with *'I wonder if...'.*"
+Customer DNN said, "Application Insights has provided us with the missing part of the equation for being able to combine, sort, query, and filter data as needed. Allowing our team to use their own ingenuity and experience to find data with a powerful query language has allowed us to find insights and solve problems we didn't even know we had. A lot of interesting answers come from the questions starting with *'I wonder if...'.*"
## Development tools integration
-### Configuring Application Insights
-Visual Studio and Eclipse have tools to configure the correct SDK packages for the project you are developing. There's a menu command to add Application Insights.
-If you happen to be using a trace logging framework such as Log4N, NLog, or System.Diagnostics.Trace, then you get the option to send the logs to Application Insights along with the other telemetry, so that you can easily correlate the traces with requests, dependency calls, and exceptions.
+Application Insights integrates with development tools.
+
+### Configure Application Insights
+Visual Studio and Eclipse have tools to configure the correct SDK packages for the project you're developing. There's a menu command to add Application Insights.
+
+If you use a trace logging framework, such as Log4Net, NLog, or System.Diagnostics.Trace, you get the option to send the logs to Application Insights along with the other telemetry so that you can easily correlate the traces with requests, dependency calls, and exceptions.
### Search telemetry in Visual Studio
-While developing and debugging a feature, you can view and search the telemetry directly in Visual Studio, using the same search facilities as in the web portal.
+As you develop and debug a feature, you can view and search the telemetry directly in Visual Studio. You can use the same search facilities as in the web portal.
-And when Application Insights logs an exception, you can view the data point in Visual Studio and jump straight to the relevant code.
+When Application Insights logs an exception, you can view the data point in Visual Studio and jump straight to the relevant code.
-![Visual Studio search](./media/devops/060.png)
+![Screenshot that shows a Visual Studio search.](./media/devops/060.png)
-During debugging, you have the option to keep the telemetry in your development machine, viewing it in Visual Studio but without sending it to the portal. This local option avoids mixing debugging with production telemetry.
+During debugging, you can keep the telemetry in your development machine. You can view it in Visual Studio without sending it to the portal. This local option avoids mixing debugging with production telemetry.
### Work items When an alert is raised, Application Insights can automatically create a work item in your work tracking system.
-## But what about...?
-* [Privacy and storage](./data-retention-privacy.md) - Your telemetry is kept on Azure secure servers.
-* Performance - the impact is very low. Telemetry is batched.
-* [Pricing](../logs/cost-logs.md#application-insights-billing) - You can get started for free, and that continues while you're in low volume.
+## Other considerations
+
+Find out more about Application Insights:
+* [Privacy and storage](./data-retention-privacy.md): Your telemetry is kept on Azure secure servers.
+* **Performance**: The impact is low because telemetry is batched.
+* [Pricing](../logs/cost-logs.md#application-insights-billing): You can get started for free, and that continues while you're in low volume.
## Next steps Getting started with Application Insights is easy. The main options are:
-* [IIS servers](./status-monitor-v2-overview.md)
-* Instrument your project during development. You can do this for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, and [Node.js](./nodejs.md) and a host of [other types](./app-insights-overview.md#supported-languages).
-* Instrument [any web page](./javascript.md) by adding a short code snippet.
-
+* Use [IIS servers](./status-monitor-v2-overview.md).
+* Instrument your project during development. You can do it for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, [Node.js](./nodejs.md), and a host of [other types](./app-insights-overview.md#supported-languages).
+* Instrument [any webpage](./javascript.md) by adding a short code snippet.
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
In addition to the out-of-the-box telemetry sent by Application Insights SDK, yo
Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
-## <a name="questions"></a>Q & A
+## <a name="questions"></a>Frequently asked questions
Find answers to common questions.
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
For export samples, see:
On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that's been exported from Application Insights.
-## Q & A
+## Frequently asked questions
This section provides answers to common questions.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Follow the [migration steps](#migration) in this article to resolve this alert.
If you hardcode an instrumentation key in your application code, that programming might take precedence before environment variables.
-## FAQ
+## Frequently asked questions
This section provides answers to common questions.
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
Title: Azure Application Insights Agent - getting started | Microsoft Docs
-description: A quickstart guide for Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
+ Title: 'Application Insights Agent: Get started | Microsoft Docs'
+description: This quickstart guide for Application Insights Agent shows how to monitor website performance without redeploying the website. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
Last updated 01/22/2021
# Get started with Azure Monitor Application Insights Agent for on-premises servers
-This article contains the quickstart commands expected to work for most environments.
-The instructions depend on the PowerShell Gallery to distribute updates.
-These commands support the PowerShell `-Proxy` parameter.
+This article contains the quickstart commands that are expected to work for most environments. The instructions depend on PowerShell Gallery to distribute updates. These commands support the PowerShell `-Proxy` parameter.
For an explanation of these commands, customization instructions, and information about troubleshooting, see the [detailed instructions](status-monitor-v2-detailed-instructions.md).
If you don't have an Azure subscription, create a [free account](https://azure.m
## Download and install via PowerShell Gallery
-### Install prerequisites
+Use PowerShell Gallery for download and installation.
-- To enable monitoring you will require a connection string. A connection string is displayed on the Overview pane of your Application Insights resource. For more information, see page [Connection Strings](./sdk-connection-string.md?tabs=net#find-your-connection-string).
+### Installation prerequisites
+
+To enable monitoring, you must have a connection string. A connection string is displayed on the **Overview** pane of your Application Insights resource. For more information, see [Connection strings](./sdk-connection-string.md?tabs=net#find-your-connection-string).
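If you already manage resources from PowerShell, you can also read the connection string programmatically. A minimal sketch with hypothetical names, assuming the Az.ApplicationInsights module:

```powershell
# Hypothetical names; recent versions of Az.ApplicationInsights expose ConnectionString on the component.
$component = Get-AzApplicationInsights -ResourceGroupName "my-rg" -Name "my-appinsights"
$component.ConnectionString
```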
> [!NOTE] > As of April 2020, PowerShell Gallery has deprecated TLS 1.1 and 1.0. >
-> For additional prerequisites that you might need, see [PowerShell Gallery TLS Support](https://devblogs.microsoft.com/powershell/powershell-gallery-tls-support).
+> For more prerequisites that you might need, see [PowerShell Gallery TLS support](https://devblogs.microsoft.com/powershell/powershell-gallery-tls-support).
>
-Run PowerShell as Admin.
+Run PowerShell as an admin.
+ ```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted Install-Module -Name PowerShellGet -Force ``` + Close PowerShell. ### Install Application Insights Agent
-Run PowerShell as Admin.
+Run PowerShell as an admin.
+ ```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense ``` > [!NOTE]
-> `AllowPrerelease` switch in `Install-Module` cmdlet allows installation of beta release.
+> The `AllowPrerelease` switch in the `Install-Module` cmdlet allows installation of the beta release.
>
-> For additional information, see [Install-Module](/powershell/module/powershellget/install-module#parameters).
+> For more information, see [Install-Module](/powershell/module/powershellget/install-module#parameters).
> ### Enable monitoring
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/' ```
-
-
+ ## Download and install manually (offline option)+
+You can also download and install manually.
+ ### Download the module Manually download the latest version of the module from [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.ApplicationMonitor). ### Unzip and install Application Insights Agent+ ```powershell $pathToNupkg = "C:\Users\t\Desktop\Az.ApplicationMonitor.0.3.0-alpha.nupkg" $pathToZip = ([io.path]::ChangeExtension($pathToNupkg, "zip"))
$pathToNupkg | rename-item -newname $pathToZip
$pathInstalledModule = "$Env:ProgramFiles\WindowsPowerShell\Modules\Az.ApplicationMonitor" Expand-Archive -LiteralPath $pathToZip -DestinationPath $pathInstalledModule ```+ ### Enable monitoring ```powershell Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/' ``` --- ## Next steps View your telemetry: - [Explore metrics](../essentials/metrics-charts.md) to monitor performance and usage. - [Search events and logs](./diagnostic-search.md) to diagnose problems.-- [Use Analytics](../logs/log-query-overview.md) for more advanced queries.
+- [Use Log Analytics](../logs/log-query-overview.md) for more advanced queries.
- [Create dashboards](./overview-dashboard.md). Add more telemetry: - [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.-- [Add web client telemetry](./javascript.md) to see exceptions from web page code and to enable trace calls.-- [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
+- [Add web client telemetry](./javascript.md) to see exceptions from webpage code and to enable trace calls.
+- [Add the Application Insights SDK to your code](./asp-net.md) so that you can insert trace and log calls.
Do more with Application Insights Agent:

-- Review the [detailed instructions](status-monitor-v2-detailed-instructions.md) for an explanation of the commands found here.
-- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
+- Review the [detailed instructions](status-monitor-v2-detailed-instructions.md) for an explanation of the commands in this article.
+- [Troubleshoot](status-monitor-v2-troubleshoot.md) Application Insights Agent.
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent is located in the [PowerShell Gallery](https://www.po
- [Set-ApplicationInsightsMonitoringConfig](./status-monitor-v2-api-reference.md#set-applicationinsightsmonitoringconfig) - [Start-ApplicationInsightsMonitoringTrace](./status-monitor-v2-api-reference.md#start-applicationinsightsmonitoringtrace)
-## FAQ
+## Frequently asked questions
This section provides answers to common questions.
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com.
![Screenshot that shows Profiler integration.](media/transaction-diagnostics/profilerTraces.png)
-## FAQ
+## Frequently asked questions
This section provides answers to common questions.
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-users.md
Title: Understand your customers in Application Insights | Microsoft Docs
-description: Tutorial on using Application Insights to understand how customers are using your application.
+ Title: Understand your customers in Application Insights | Microsoft Docs
+description: Tutorial on how to use Application Insights to understand how customers are using your application.
Last updated 07/30/2021
-# Use Azure Application Insights to understand how customers are using your application
+# Use Application Insights to understand how customers use your application
- Application Insights collects usage information to help you understand how your users interact with your application. This tutorial walks you through the different resources that are available to analyze this information. You'll learn how to:
+ Application Insights collects usage information to help you understand how your users interact with your application. This tutorial walks you through the different resources that are available to analyze this information.
-> [!div class="checklist"]
-> * Analyze details about users accessing your application
-> * Use session information to analyze how customers use your application
-> * Define funnels that let you compare your desired user activity to their actual activity
-> * Create a workbook to consolidate visualizations and queries into a single document
-> * Group similar users to analyze them together
-> * Learn which users are returning to your application
-> * Inspect how users navigate through your application
+You'll learn how to:
+> [!div class="checklist"]
+> * Analyze details about users who access your application.
+> * Use session information to analyze how customers use your application.
+> * Define funnels that let you compare your desired user activity to their actual activity.
+> * Create a workbook to consolidate visualizations and queries into a single document.
+> * Group similar users to analyze them together.
+> * Learn which users are returning to your application.
+> * Inspect how users move through your application.
## Prerequisites

To complete this tutorial:

- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:
- - ASP.NET and web development
- - Azure development
+ - ASP.NET and web development.
+ - Azure development.
- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).
-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
-- [Send telemetry from your application](../app/usage-overview.md#send-telemetry-from-your-app) for adding custom events/page views
+- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
+- [Send telemetry from your application](../app/usage-overview.md#send-telemetry-from-your-app) for adding custom events/page views.
- Send [user context](./usage-overview.md) to track what a user does over time and fully utilize the usage features.
-## Log in to Azure
-Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+## Sign in to Azure
+Sign in to the [Azure portal](https://portal.azure.com).
## Get information about your users
-The **Users** panel allows you to understand important details about your users in a variety of ways. You can use this panel to understand such information as where your users are connecting from, details of their client, and what areas of your application they're accessing.
-
-1. In your Application Insights resource under *Usage*, select **Users** in the menu.
-2. The default view shows the number of unique users that have connected to your application over the past 24 hours. You can change the time window and set various other criteria to filter this information.
+The **Users** pane helps you to understand important details about your users in various ways. You can use this pane to understand information like where your users are connecting from, details of their client, and what areas of your application they're accessing.
-3. Click the **During** dropdown and change the time window to 7 days. This increases the data included in the different charts in the panel.
+1. In your Application Insights resource, under **Usage**, select **Users**.
+1. The default view shows the number of unique users that have connected to your application over the past 24 hours. You can change the time window and set various other criteria to filter this information.
-4. Click the **Split by** dropdown to add a breakdown by a user property to the graph. Select **Country or region**. The graph includes the same data but allows you to view a breakdown of the number of users for each country/region.
+1. Select the **During** dropdown list and change the time window to **7 days**. This setting increases the data included in the different charts in the pane.
- :::image type="content" source="./media/tutorial-users/user-1.png" alt-text="Screenshot of the User tab's query builder." lightbox="./media/tutorial-users/user-1.png":::
+1. Select the **Split by** dropdown list to add a breakdown by a user property to the graph. Select **Country or region**. The graph includes the same data, but you can use it to view a breakdown of the number of users for each country/region.
-5. Position the cursor over different bars in the chart and note that the count for each country/region reflects only the time window represented by that bar.
-6. Select **View More Insights** for more information.
+ :::image type="content" source="./media/tutorial-users/user-1.png" alt-text="Screenshot that shows the User tab's query builder." lightbox="./media/tutorial-users/user-1.png":::
- :::image type="content" source="./media/tutorial-users/user-2.png" alt-text="Screenshot of the User tab of view more insights." lightbox="./media/tutorial-users/user-2.png":::
+1. Position the cursor over different bars in the chart and note that the count for each country/region reflects only the time window represented by that bar.
+1. Select **View More Insights** for more information.
+ :::image type="content" source="./media/tutorial-users/user-2.png" alt-text="Screenshot that shows the User tab of view more insights." lightbox="./media/tutorial-users/user-2.png":::
## Analyze user sessions
-The **Sessions** panel is similar to the **Users** panel. Where **Users** helps you understand details about the users accessing your application, **Sessions** helps you understand how those users used your application.
+The **Sessions** pane is similar to the **Users** pane. **Users** helps you understand details about the users who access your application. **Sessions** helps you understand how those users used your application.
-1. User *Usage*, select **Sessions**.
-2. Have a look at the graph and note that you have the same options to filter and break down the data as in the **Users** panel.
+1. Under **Usage**, select **Sessions**.
+1. Look at the graph and note that you have the same options to filter and break down the data as in the **Users** pane.
- :::image type="content" source="./media/tutorial-users/sessions.png" alt-text="Screenshot of the Sessions tab with a bar chart displayed." lightbox="./media/tutorial-users/sessions.png":::
+ :::image type="content" source="./media/tutorial-users/sessions.png" alt-text="Screenshot that shows the Sessions tab with a bar chart displayed." lightbox="./media/tutorial-users/sessions.png":::
-4. To view the sessions timeline, select **View More Insights** then under active sessions select **View session timeline** on one of the timelines. Session Timeline shows every action in the sessions. This can help you identify information such as the sessions with a large number of exceptions.
+1. To view the sessions timeline, select **View More Insights**. Under **Active Sessions**, select **View session timeline** on one of the timelines. The **Session Timeline** pane shows every action in the sessions. This information can help you identify examples like sessions with a large number of exceptions.
- :::image type="content" source="./media/tutorial-users/timeline.png" alt-text="Screenshot of the Sessions tab with a timeline selected." lightbox="./media/tutorial-users/timeline.png":::
+ :::image type="content" source="./media/tutorial-users/timeline.png" alt-text="Screenshot that shows the Sessions tab with a timeline selected." lightbox="./media/tutorial-users/timeline.png":::
## Group together similar users
-A **Cohort** is a set of users grouped on similar characteristics. You can use cohorts to filter data in other panels allowing you to analyze particular groups of users. For example, you might want to analyze only users who completed a purchase.
+A cohort is a set of users grouped by similar characteristics. You can use cohorts to filter data in other panes so that you can analyze particular groups of users. For example, you might want to analyze only users who completed a purchase.
-1. Select **Create a Cohort** at the top of one of the usage tabs ( Users, Sessions, Events and so on).
+1. On the **Users**, **Sessions**, or **Events** tab, select **Create a Cohort**.
-1. Select a template from the gallery.
+1. Select a template from the gallery.
- :::image type="content" source="./media/tutorial-users/cohort.png" alt-text="Screenshot of the template gallery for cohorts." lightbox="./media/tutorial-users/cohort.png":::
-1. Edit your Cohort then select **save**.
-1. To see your Cohort select it from the **Show** dropdown menu.
-
- :::image type="content" source="./media/tutorial-users/cohort-2.png" alt-text="Screenshot of the Show dropdown, showing a cohort." lightbox="./media/tutorial-users/cohort-2.png":::
+ :::image type="content" source="./media/tutorial-users/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/tutorial-users/cohort.png":::
+1. Edit your cohort and select **Save**.
+1. To see your cohort, select it from the **Show** dropdown list.
+ :::image type="content" source="./media/tutorial-users/cohort-2.png" alt-text="Screenshot that shows the Show dropdown, showing a cohort." lightbox="./media/tutorial-users/cohort-2.png":::
## Compare desired activity to reality
-While the previous panels are focused on what users of your application did, **Funnels** focus on what you want users to do. A funnel represents a set of steps in your application and the percentage of users who move between steps. For example, you could create a funnel that measures the percentage of users who connect to your application who search product. You can then see the percentage of users who add that product to a shopping cart, and then the percentage of those who complete a purchase.
+The previous panes are focused on what users of your application did. The **Funnels** pane focuses on what you want users to do. A funnel represents a set of steps in your application and the percentage of users who move between steps.
+
+For example, you could create a funnel that measures the percentage of users who connect to your application and search for a product. You can then see the percentage of users who add that product to a shopping cart. You can also see the percentage of customers who complete a purchase.
-1. Select **Funnels** in the menu and then select **Edit**.
+1. Select **Funnels** > **Edit**.
-3. Create a funnel with at least two steps by selecting an action for each step. The list of actions is built from usage data collected by Application Insights.
+1. Create a funnel with at least two steps by selecting an action for each step. The list of actions is built from usage data collected by Application Insights.
- :::image type="content" source="./media/tutorial-users/funnel.png" alt-text="Screenshot of the Funnel tab and selecting steps on the edit tab." lightbox="./media/tutorial-users/funnel.png":::
+ :::image type="content" source="./media/tutorial-users/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the edit tab." lightbox="./media/tutorial-users/funnel.png":::
-4. Select the **View** tab to see the results. The window to the right shows the most common events before the first activity and after the last activity to help you understand user tendencies around the particular sequence.
+1. Select the **View** tab to see the results. The window to the right shows the most common events before the first activity and after the last activity to help you understand user tendencies around the particular sequence.
- :::image type="content" source="./media/tutorial-users/funnel-2.png" alt-text="Screenshot of the funnel tab on view." lightbox="./media/tutorial-users/funnel-2.png":::
+ :::image type="content" source="./media/tutorial-users/funnel-2.png" alt-text="Screenshot that shows the funnel tab on view." lightbox="./media/tutorial-users/funnel-2.png":::
-4. To save the funnel, select **Save**.
+1. To save the funnel, select **Save**.
## Learn which customers return
-**Retention** helps you understand which users are coming back to your application.
+Retention helps you understand which users are coming back to your application.
+
+1. Select **Retention** > **Retention Analysis Workbook**.
+1. By default, the analyzed information includes users who performed an action and then returned to perform another action. You can change this filter to include, for example, only those users who returned after they completed a purchase.
-1. Select **Retention** in the menu, then *Retention Analysis Workbook.
-2. By default, the analyzed information includes users who performed any action and then returned to perform any action. You can change this filter to any include, for example, only those users who returned after completing a purchase.
+ :::image type="content" source="./media/tutorial-users/retention.png" alt-text="Screenshot that shows a graph for users that match the criteria set for a retention filter." lightbox="./media/tutorial-users/retention.png":::
- :::image type="content" source="./media/tutorial-users/retention.png" alt-text="Screenshot showing a graph for users that match the criteria set for a retention filter." lightbox="./media/tutorial-users/retention.png":::
+1. The returning users that match the criteria are shown in graphical and table form for different time durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one time period to the next might raise a concern.
-3. The returning users that match the criteria are shown in graphical and table form for different time durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one time period to the next might raise a concern.
+ :::image type="content" source="./media/tutorial-users/retention-2.png" alt-text="Screenshot that shows the retention workbook with the User returned after # of weeks chart." lightbox="./media/tutorial-users/retention-2.png":::
- :::image type="content" source="./media/tutorial-users/retention-2.png" alt-text="Screenshot of the retention workbook, showing user return after # of weeks chart." lightbox="./media/tutorial-users/retention-2.png":::
+## Analyze user movements
+A user flow visualizes how users move between the pages and features of your application. The flow helps you answer questions like where users typically move from a particular page, how they usually exit your application, and if there are any actions that are regularly repeated.
-## Analyze user navigation
-A **User flow** visualizes how users navigate between the pages and features of your application. This helps you answer questions such as where users typically move from a particular page, how they typically exit your application, and if there are any actions that are regularly repeated.
+1. Select **User flows** on the menu.
+1. Select **New** to create a new user flow. Select **Edit** to edit its details.
+1. Increase **Time Range** to **7 days** and then select an initial event. The flow will track user sessions that start with that event.
-1. Select **User flows** in the menu.
-2. Click **New** to create a new user flow and then select **Edit** to edit its details.
-3. Increase the **Time Range** to 7 days and then select an initial event. The flow will track user sessions that start with that event.
+ :::image type="content" source="./media/tutorial-users/flowsedit.png" alt-text="Screenshot that shows how to create a new user flow." lightbox="./media/tutorial-users/flowsedit.png":::
- :::image type="content" source="./media/tutorial-users/flowsedit.png" alt-text="Screenshot showing how to create a new user flow." lightbox="./media/tutorial-users/flowsedit.png":::
-
-4. The user flow is displayed, and you can see the different user paths and their session counts. Blue lines indicate an action that the user performed after the current action. A red line indicates the end of the user session.
+1. The user flow is displayed, and you can see the different user paths and their session counts. Blue lines indicate an action that the user performed after the current action. A red line indicates the end of the user session.
- :::image type="content" source="./media/tutorial-users/flows.png" alt-text="Screenshot showing the display of user paths and session counts for a user flow." lightbox="./media/tutorial-users/flows.png":::
+ :::image type="content" source="./media/tutorial-users/flows.png" alt-text="Screenshot that shows the display of user paths and session counts for a user flow." lightbox="./media/tutorial-users/flows.png":::
-5. To remove an event from the flow, select the **x** in the corner of the action and then select **Create Graph**. The graph is redrawn with any instances of that event removed. Select **Edit** to see that the event is now added to **Excluded events**.
+1. To remove an event from the flow, select the **X** in the upper-right corner of the action. Then select **Create Graph**. The graph is redrawn with any instances of that event removed. Select **Edit** to see that the event is now added to **Excluded events**.
- :::image type="content" source="./media/tutorial-users/flowsexclude.png" alt-text="Screenshot showing the list of excluded events for a user flow." lightbox="./media/tutorial-users/flowsexclude.png":::
+ :::image type="content" source="./media/tutorial-users/flowsexclude.png" alt-text="Screenshot that shows the list of excluded events for a user flow." lightbox="./media/tutorial-users/flowsexclude.png":::
## Consolidate usage data
-**Workbooks** combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
+Workbooks combine data visualizations, Log Analytics queries, and text into interactive documents. You can use workbooks to:
+- Group together common usage information.
+- Consolidate information from a particular incident.
+- Report back to your team on your application's usage.
-1. Select **Workbooks** in the menu.
-2. Select **New** to create a new workbook.
-3. A query is already provided that includes all usage data in the last day displayed as a bar chart. You can use this query, manually edit it, or select **Samples** to select from other useful queries.
+1. Select **Workbooks** on the menu.
+1. Select **New** to create a new workbook.
+1. A provided query displays all usage data from the last day as a bar chart. You can use this query, edit it manually, or select **Samples** to choose from other useful queries. (A sample usage query that you can run from the command line appears after these steps.)
- :::image type="content" source="./media/tutorial-users/sample-queries.png" alt-text="Screenshot showing the sample button and list of sample queries that you can use." lightbox="./media/tutorial-users/sample-queries.png":::
+ :::image type="content" source="./media/tutorial-users/sample-queries.png" alt-text="Screenshot that shows the sample button and list of sample queries that you can use." lightbox="./media/tutorial-users/sample-queries.png":::
-4. Select **Done editing**.
-5. Select **Edit** in the top pane to edit the text at the top of the workbook. This is formatted using markdown.
+1. Select **Done editing**.
+1. Select **Edit** in the top pane to edit the text at the top of the workbook. Formatting is done by using Markdown.
-6. Select **Add users** to add a graph with user information. Edit the details of the graph if you want and then select **Done editing** to save it.
+1. Select **Add users** to add a graph with user information. Edit the details of the graph if you want. Then select **Done editing** to save it.
-To learn more about workbooks, visit [the workbooks overview](../visualize/workbooks-overview.md).
+To learn more about workbooks, see the [workbooks overview](../visualize/workbooks-overview.md).
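
If you want to explore the same usage data outside the workbook, a roughly similar query can be run from the Azure CLI. The following is a minimal sketch, not the workbook's exact default query. It assumes the `application-insights` CLI extension is installed and uses placeholder resource names; check `az monitor app-insights query --help` for the exact parameters in your extension version.

```azurecli
# Page views and custom events from the last day, bucketed by hour. This is
# roughly what the default workbook chart summarizes. "my-app-insights" and
# "my-rg" are placeholder names for your resource and resource group.
az extension add --name application-insights
az monitor app-insights query \
  --app my-app-insights \
  --resource-group my-rg \
  --analytics-query "union pageViews, customEvents | where timestamp > ago(1d) | summarize count() by bin(timestamp, 1h)"
```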
## Next steps
-Now that you've learned how to analyze your users, advance to the next tutorial to learn how to create custom dashboards that combine this information with other useful data about your application.
+You've learned how to analyze your users. In the next tutorial, you'll learn how to create custom dashboards that combine this information with other useful data about your application.
> [!div class="nextstepaction"] > [Create custom dashboards](./tutorial-app-dashboards.md)
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
Keep in mind that **Session Ended** nodes are based only on telemetry collected
Look for a page view or custom event that is repeated by many users across subsequent steps in the visualization. This usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce it. For example, consider adding bulk edit functionality if you find users performing repetitive actions on each row of a table element.
-## Common questions
+## Frequently asked questions
### Does the initial event represent the first time the event appears in a session, or any time it appears in a session?
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
Set up a custom task by using the following parameters.
-## FAQs
+## Frequently asked questions
### How do I view the data at different grains (daily, monthly, weekly)?

You can select the **Date Grain** filter to change the grain, as shown below.
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
PS G:\works\kusto_onboard\test_arm_template> new-azurermresourcegroupdeployment
For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md).
-## Common questions
+## Frequently asked questions
-This section answers common questions.
+This section answers frequently asked questions.
### Why is CPU percentage over 100 percent on predictive charts? The predictive chart shows the cumulative load for all machines in the scale set. If you have 5 VMs in a scale set, the maximum cumulative load for all VMs will be 500%, that is, five times the 100% maximum CPU load of each VM.
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription. Previously updated : 09/28/2022 Last updated : 01/09/2023
This article describes how to set up Container insights to monitor a managed Kub
## Prerequisites
-If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the *Microsoft.ContainerService* resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
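
If the provider isn't registered yet, one way to register it is from the Azure CLI. This is a minimal sketch; the subscription ID is a placeholder.

```azurecli
# Register the Microsoft.ContainerService resource provider in the subscription
# that contains the Log Analytics workspace. The subscription ID is a placeholder.
az account set --subscription "00000000-0000-0000-0000-000000000000"
az provider register --namespace Microsoft.ContainerService

# Registration runs asynchronously; check its state before continuing.
az provider show --namespace Microsoft.ContainerService --query registrationState
```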
## New AKS cluster
You can enable monitoring for an AKS cluster when it's created by using any of t
- **Azure CLI**: Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md). - **Azure Policy**: Follow the steps in [Enable AKS monitoring add-on by using Azure Policy](container-insights-enable-aks-policy.md).-- **Terraform**: If you're [deploying a new AKS cluster by using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you don't choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution). Complete the profile by including the [addon_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specifying **oms_agent**.
+- **Terraform**: If you're [deploying a new AKS cluster by using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you don't choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution). Complete the profile by including the **oms_agent** profile.
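
For the Azure CLI option in the preceding list, enabling monitoring at cluster creation might look like the following minimal sketch. The resource names and workspace resource ID are placeholders; see the linked quickstart for the full set of options.

```azurecli
# Create an AKS cluster with the Container insights (monitoring) add-on enabled.
# Resource group, cluster name, and workspace resource ID are placeholders.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-addons monitoring \
  --workspace-resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --generate-ssh-keys
```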
## Existing AKS cluster
provisioningState : Succeeded
## [Terraform](#tab/terraform)
-To enable monitoring by using Terraform:
+1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster). The syntax depends on the version of the [Terraform AzureRM provider](/azure/developer/terraform/provider-version-history-azurerm) that you use.
-1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster).
+ * If the Terraform AzureRM provider version is 3.0 or higher, add the following:
- ```
- addon_profile {
+ ```
oms_agent {
- enabled = true
- log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
- }
- }
- ```
-
-1. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) by following the steps in the Terraform documentation.
-1. Enable collection of custom metrics by using the guidance at [Enable custom metrics](container-insights-custom-metrics.md).
+ log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
+ }
+ ```
+
+ * If the Terraform AzureRM provider is less than version 3.0, add the following:
+
+ ```
+ addon_profile {
+ oms_agent {
+ enabled = true
+ log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
+ }
+ }
+ ```
+
+2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) by following the steps in the Terraform documentation.
+
+3. Enable collection of custom metrics by following the guidance at [Enable custom metrics](container-insights-custom-metrics.md).
## [Azure portal](#tab/portal-azure-monitor)
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
You might also decide not to split when you want a condition on multiple resourc
You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [resource graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook. ## Create a log query alert rule
-[This example of a log query alert](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert) provides a complete walkthrough of creating a log query alert rule. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article.
+To create a log query alert rule by using the portal, see [this example of a log query alert](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert), which provides a complete walkthrough. You can use the same process to create alert rules for AKS clusters by using queries similar to the ones in this article.
+
+To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article.
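
For example, after you author an alert rule template based on one of those samples, you could deploy it with the Azure CLI. The following is a minimal sketch; the resource group and file names are placeholders for your own template and parameter files.

```azurecli
# Deploy an ARM template that defines a scheduled query (log) alert rule.
# The resource group and file names below are placeholders.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file log-query-alert.json \
  --parameters @log-query-alert.parameters.json
```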
## Resource utilization
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Title: Send metrics to the Azure Monitor metric database using REST API description: Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API-+ Previously updated : 09/24/2018- Last updated : 01/04/2023+ # Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API
-This article shows you how to send custom metrics for Azure resources to the Azure Monitor metrics store via a REST API. After the metrics are in Azure Monitor, you can do all the things with them that you do with standard metrics. Examples are charting, alerting, and routing them to other external tools.
+This article shows you how to send custom metrics for Azure resources to the Azure Monitor metrics store via a REST API. When the metrics are in Azure Monitor, you can do everything with them that you do with standard metrics, such as charting, alerting, and routing them to other external tools.
>[!NOTE]
->The REST API only permits sending custom metrics for Azure resources. To send metrics for resources in different environments or on-premises, you can use [Application Insights](../app/api-custom-events-metrics.md).
+>The REST API only permits sending custom metrics for Azure resources.
+To send metrics for resources in other environments or on-premises, use [Application Insights](../app/api-custom-events-metrics.md).
+## Create and authorize a service principal to emit metrics
-## Create and authorize a service principal to emit metrics
+A service principal is an application whose tokens can be used to authenticate and grant access to specific Azure resources through Azure Active Directory. The application can be a user app, a service, or an automation tool.
-Create a service principal in your Azure Active Directory tenant by using the instructions found at [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
+1. [Register an application with Azure Active Directory](../logs/api/register-app-for-token.md) to create a service principal.
-Note the following while you go through this process:
+1. Save the tenant ID, new client ID, and client secret value for your app to use when requesting a token.
-- You can enter any URL for the sign-in URL. -- Create a new client secret for this app. -- Save the key and the client ID for use in later steps. -
-Give the app created as part of step 1, Monitoring Metrics Publisher, permissions to the resource you wish to emit metrics against. If you plan to use the app to emit custom metrics against many resources, you can grant these permissions at the resource group or subscription level.
+1. Give the app that you created in the previous step **Monitoring Metrics Publisher** permissions on the resource you want to emit metrics against. If you plan to use the app to emit custom metrics against many resources, you can grant these permissions at the resource group or subscription level. An equivalent Azure CLI command appears after these steps.
+
+ On your resource's overview page, select **Access Control (IAM)**.
+1. Select **Add**, then **Add role assignment** from the dropdown.
+ :::image type="content" source="./media/metrics-store-custom-rest-api/access-contol-add-role-assignment.png" alt-text="A screenshot showing the Access control(IAM) page for a virtual machine.":::
+1. Search for *Monitoring Metrics* in the search field.
+1. Select **Monitoring Metrics Publisher** from the list.
+1. Select **Members**.
+ :::image type="content" source="./media/metrics-store-custom-rest-api/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page.":::
+
+1. Search for your app in the **Select** field.
+1. Select your app from the list.
+1. Click **Select**.
+1. Select **Review + assign**.
+ :::image type="content" source="./media/metrics-store-custom-rest-api/select-members.png" alt-text="A screenshot showing the members tab of the role assignment page.":::
## Get an authorization token
-Open a command prompt and run the following command:
+
+Send the following request from a command prompt or by using a client like Postman.
```shell
-curl -X POST https://login.microsoftonline.com/<yourtenantid>/oauth2/token -F "grant_type=client_credentials" -F "client_id=<insert clientId from earlier step>" -F "client_secret=<insert client secret from earlier step>" -F "resource=https://monitoring.azure.com/"
+curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
+-H 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'client_id=<your apps client ID>' \
+--data-urlencode 'client_secret=<your apps client secret>' \
+--data-urlencode 'resource=https://monitoring.azure.com'
+```
+
+The response body appears as follows:
+
+```JSON
+{
+ "token_type": "Bearer",
+ "expires_in": "86399",
+ "ext_expires_in": "86399",
+ "expires_on": "1672826207",
+ "not_before": "1672739507",
+ "resource": "https://monitoring.azure.com",
+ "access_token": "eyJ0eXAiOiJKV1Qi....gpHWoRzeDdVQd2OE3dNsLIvUIxQ"
+}
```
-Save the access token from the response.
-![Access token](./media/metrics-store-custom-rest-api/accesstoken.png)
+Save the access token from the response for use in the following HTTP requests.
-## Emit the metric via the REST API
+## Send a metric via the REST API
-1. Paste the following JSON into a file, and save it asΓÇ»**custommetric.json** on your local computer. Update the time parameter in the JSON file:
+1. Paste the following JSON into a file, and save it as **custommetric.json** on your local computer. Update the time parameter so that it is within the last 20 minutes. You can't put a metric into the store that's over 20 minutes old. The metric store is optimized for alerting and real-time charting.
- ```json
+ ```JSON
{
- "time": "2018-09-13T16:34:20",
+ "time": "2023-01-03T11:00:20",
"data": { "baseData": { "metric": "QueueDepth",
Save the access token from the response.
} } }
- ```
+ ```
-1. In your command prompt window, post the metric data:
- - **azureRegion**. Must match the deployment region of the resource you're emitting metrics for.
- - **resourceID**. Resource ID of the Azure resource you're tracking the metric against.
- - **AccessToken**. Paste the token that you acquired previously.
+1. Submit the following HTTP POST request using the following variables:
+ - **location**: Deployment region of the resource you're emitting metrics for.
+ - **resourceId**: Resource ID of the Azure resource you're tracking the metric against.
+ - **accessToken**: The authorization token acquired from the previous step.
+
+ ```Shell
+ curl -X POST 'https://<location>.monitoring.azure.com/<resourceId>/metrics' \
+ -H 'Content-Type: application/json' \
+ -H 'Authorization: Bearer <accessToken>' \
+ -d @custommetric.json
+ ```
- ```Shell
- curl -X POST https://<azureRegion>.monitoring.azure.com/<resourceId>/metrics -H "Content-Type: application/json" -H "Authorization: Bearer <AccessToken>" -d @custommetric.json
- ```
1. Change the timestamp and values in the JSON file.
-1. Repeat the previous two steps a few times, so you have data for several minutes.
-
-## Troubleshooting
-If you receive an error message with some part of the process, consider the following troubleshooting information:
+1. Repeat the previous two steps several times to create data for several minutes.
-1. You can't issue metrics against a subscription or resource group as your Azure resource.
-1. You can't put a metric into the store that's over 20 minutes old. The metric store is optimized for alerting and real-time charting.
-2. The number of dimension names should match the values and vice versa. Check the values.
-2. You might be emitting metrics against a region that doesnΓÇÖt support custom metrics. See [supported regions](./metrics-custom-overview.md#supported-regions).
+## Troubleshooting
+If you receive an error message with some part of the process, consider the following troubleshooting information:
+- If you can't issue metrics against a subscription, resource group, or resource, check that your application or service principal has the **Monitoring Metrics Publisher** role assigned in **Access control (IAM)**.
+- Check that the number of dimension names matches the number of values.
+- Check that you aren't emitting metrics against a region that doesn't support custom metrics. See [supported regions](./metrics-custom-overview.md#supported-regions).
-## View your metrics
+## View your metrics
-1. Sign in to the Azure portal.
+1. Sign in to the Azure portal.
-1. In the left-hand menu, select **Monitor**.
+1. In the left-hand menu, select **Monitor**.
-1. On the **Monitor** page, select **Metrics**.
+1. On the **Monitor** page, select **Metrics**.
- ![Select Metrics](./media/metrics-store-custom-rest-api/metrics.png)
+ ![Select Metrics](./media/metrics-store-custom-rest-api/metrics.png)
-1. Change the aggregation period to **Last 30 minutes**.
+1. Change the aggregation period to **Last hour**.
-1. In the **resource** drop-down menu, select the resource you emitted the metric against.
+1. In the **Scope** drop-down menu, select the resource that you sent the metric for.
-1. In the **namespaces** drop-down menu, select **QueueProcessing**.
+1. In the **Metric namespace** drop-down menu, select **QueueProcessing**.
-1. In the **metrics** drop-down menu, select **QueueDepth**.
+1. In the **Metric** drop-down menu, select **QueueDepth**.
-
## Next steps

-- Learn more about [custom metrics](./metrics-custom-overview.md).
+- Learn more about [custom metrics](./metrics-custom-overview.md).
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
The `aks-preview` extension needs to be installed using the following command. F
```azurecli az extension add --name aks-preview ```
-Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster. This doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
+Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster along with the Data Collection Rule Associations (DCRA) that link the DCE or DCR with your cluster. This doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
```azurecli az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
After the next scheduled health check runs, by default every seven days, the spe
2. If you decide later that you want to see ignored recommendations, remove any IgnoreRecommendations.txt files, or you can remove RecommendationIDs from them.
-## AD Health Check solutions FAQ
+## Frequently asked questions
*What checks are performed by the AD Assessment solution?*
azure-monitor Ad Replication Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-replication-status.md
You can also click **Export** to export the results to Excel. Exporting the data
![exported AD replication status errors in Excel](./media/ad-replication-status/oms-ad-replication-export.png)
-## AD Replication Status FAQ
+## Frequently asked questions
**Q: How often is AD replication status data updated?**

A: The information is updated every five days.
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
If you have recommendations that you want to ignore, you can create a text file
3. If you decide later that you want to see ignored recommendations, remove any IgnoreRecommendations.txt files, or you can remove RecommendationIDs from them.
-## System Center Operations Manager Health Check solution FAQ
+## Frequently asked questions
*I added the Health Check solution to my Log Analytics workspace. But I don't see the recommendations. Why not?*

After adding the solution, use the following steps to view the recommendations on the Log Analytics dashboard.
azure-monitor Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-assessment.md
If you have recommendations that you want to ignore, you can create a text file
``` 3. If you decide later that you want to see ignored recommendations, remove any IgnoreRecommendations.txt files, or you can remove RecommendationIDs from them.
-## SQL Health Check solution FAQ
+## Frequently asked questions
*What checks are performed by the SQL Assessment solution?*
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
For example,
To access the API, you need to register a client app with Azure Active Directory and request a token.

1. [Register an app in Azure Active Directory](./register-app-for-token.md).
+
+1. On the app's overview page, select **API permissions**.
+1. Select **Add a permission**.
+1. On the **APIs my organization uses** tab, search for *log analytics* and select **Log Analytics API** from the list.
+
+1. Select **Delegated permissions**.
+1. Select the **Data.Read** checkbox.
+1. Select **Add permissions**.
+
+Now that your app is registered and has permissions to use the API, grant your app access to your Log Analytics Workspace.
+
+1. From your Log Analytics workspace overview page, select **Access control (IAM)**.
+1. Select **Add role assignment**.
+
+ :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot showing the access control page for a log analytics workspace.":::
+
+1. Select the **Reader** role then select **Members**
+
+ :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page for a log analytics workspace.":::
+
+1. In the Members tab, select **Select members**
+1. Enter the name of your app in the **Select** field.
+1. Choose your app and select **Select**
+1. Select **Review and assign**
+
+ :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot showing the select members blade on the role assignment page for a log analytics workspace.":::
+ 1. After completing the Active Directory setup and workspace permissions, request an authorization token.
+>[!Note]
+> For this example we applied the **Reader** role. This role is one of many built-in roles and may include more permissions than you require. More granular roles and permissions can be created. For more information, see [Manage access to Log Analytics workspaces](../../logs/manage-access.md).
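
The same role assignment can be scripted instead of done in the portal. The following is a minimal Azure CLI sketch, with the app (client) ID and workspace resource ID as placeholders.

```azurecli
# Assign the Reader role to the app's service principal, scoped to the
# Log Analytics workspace. Both IDs below are placeholders.
az role assignment create \
  --assignee "<app-client-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```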
+ ## Request an Authorization Token Before beginning, make sure you have all the values required to make the request successfully. All requests require:
azure-monitor Register App For Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/register-app-for-token.md
Title: Register an App for API Access
-description: How to register an app and assign a role so it can access a log analytics workspace using the API
+ Title: Register an App to request authorization tokens and work with APIs
+description: How to register an app and assign a role so it can request a token and work with APIs
Previously updated : 11/18/2021 Last updated : 01/04/2023
-# Register an App to work with Log Analytics APIs
+# Register an App to request authorization tokens and work with APIs
-To access the log analytics API, you can generate a token based on a client ID and secret. This article shows you how to register a client app and assign permissions to access a Log Analytics Workspace.
+To access Azure REST APIs such as the Log Analytics API, or to send custom metrics, you can generate an authorization token based on a client ID and secret. You then pass the token in your REST API request. This article shows you how to register a client app and create a client secret so that you can generate a token.
## Register an App
To access the log analytics API, you can generate a token based on a client ID a
1. Select **New registration**
1. On the Register an application page, enter a **Name** for the application.
1. Select **Register**
-1. On the app's overview page, select **API permissions**
-1. Select **Add a permission**
-1. In the **APIs my organization uses** tab search for *log analytics* and select **Log Analytics API** from the list.
-
-1. Select **Delegated permissions**
-1. Check the checkbox for **Data.Read**
-1. Select **Add permissions**
1. On the app's overview page, select **Certificates and Secrets** 1. Note the **Application (client) ID**. It's used in the HTTP request for a token.
To access the log analytics API, you can generate a token based on a client ID a
1. Enter a **Description** and select **Add** :::image type="content" source="../media/api-register-app/add-a-client-secret.png" alt-text="A screenshot showing the Add client secret page.":::
-1. Copy and save the client secret **Value**.
+1. Copy and save the client secret **Value**.
> [!NOTE] > Client secret values can only be viewed immediately after creation. Be sure to save the secret before leaving the page. :::image type="content" source="../media/api-register-app/client-secret.png" alt-text="A screenshot showing the client secrets page.":::
-## Grant your app access to a Log Analytics Workspace
-
-1. From your Log analytics Workspace overview page, select **Access control (IAM)**.
-1. Select **Add role assignment**.
- :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot showing the access control page for a log analytics workspace.":::
+## Next steps
-1. Select the **Reader** role then select **Members**
-
- :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page for a log analytics workspace.":::
+Before you can generate a token by using your app, client ID, and secret, assign the app to a role by using **Access control (IAM)** for the resource that you want to access.
+The role will depend on the resource type and the API that you want to use.
+For example,
+- To grant your app read access to a Log Analytics workspace, add your app as a member of the **Reader** role by using Access control (IAM) for your Log Analytics workspace. For more information, see [Access the API](./access-api.md).
-1. In the Members tab, select **Select members**
-1. Enter the name of your app in the **Select** field.
-1. Choose your app and select **Select**
-1. Select **Review and assign**
-
- :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot showing the select members blade on the role assignment page for a log analytics workspace.":::
+- To grant access to send custom metrics for a resource, add your app as a member of the **Monitoring Metrics Publisher** role by using Access control (IAM) for your resource. For more information, see [Send metrics to the Azure Monitor metric database using REST API](../../essentials/metrics-store-custom-rest-api.md).
-## Next steps
+For more information, see [Assign Azure roles using the Azure portal](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal).
-You can use your client ID and client secret to generate a bearer token to access the Log Analytics API. For more information, see [Access the API](./access-api.md)
+After you've assigned a role, you can use your app, client ID, and client secret to generate a bearer token to access the REST API. A sample token request appears after the following note.
> [!NOTE] > When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with error code 403.
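
As a rough illustration, a client credentials token request looks like the following sketch. The tenant ID, client ID, and client secret are placeholders, and the `resource` value depends on the API you plan to call. For example, the Log Analytics API typically uses `https://api.loganalytics.io`, and custom metrics use `https://monitoring.azure.com`.

```shell
curl -X POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=<app-client-id>' \
--data-urlencode 'client_secret=<app-client-secret>' \
--data-urlencode 'resource=https://api.loganalytics.io'
```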
azure-monitor Response Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/response-format.md
In the following example, we can see the result contains two columns, `Category`
## Azure Monitor Log Analytics API errors
-If a fatal error occurs during query execution, an error status code is returned with a [OneAPI](https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#errorresponse--object) error object describing the error. See the [reference](https://dev.loganalytics.io/reference/post-query) for a list of error status codes.
+If a fatal error occurs during query execution, an error status code is returned with a [OneAPI](https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#errorresponse--object) error object describing the error.
If a non-fatal error occurs during query execution, the response status code is `200 OK` and contains the query results in the `tables` property as described above. The response will also contain an `error` property, which is OneAPI error object with code `PartialError`. Details of the error are included in the `details` property. ## Next Steps
-Get detailed information about using the [API options](batch-queries.md).
+Get detailed information about using the [API options](batch-queries.md).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Azure Monitor Logs offers two log data plans that let you reduce log ingestion a
This article describes Azure Monitor's log data plans and explains how to configure the log data plan of the tables in your Log Analytics workspace.
-> [!IMPORTANT]
-> You can switch a table's plan once a week.<br/> The Basic Logs feature isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
- ## Compare the Basic and Analytics log data plans
-The following table summarizes the two plans.
+The following table summarizes the Basic and Analytics log data plans.
| Category | Analytics | Basic |
|:|:|:|
-| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
-| Log queries | No extra cost. Full query capabilities. | Extra cost.<br/>[Subset of query capabilities](basic-logs-query.md#limitations). |
-| Retention | Configure retention from 30 days to 730 days. | Retention fixed at eight days. |
+| Ingestion | Regular ingestion cost. | Reduced ingestion cost. |
+| Log queries | Full query capabilities<br/>No extra cost. | [Basic query capabilities](basic-logs-query.md#limitations).<br/>Pay-per-use.|
+| Retention | [Configure retention from 30 days to two years](data-retention-archive.md). | Retention fixed at eight days.<br/>When you change an existing table's plan to Basic logs, [Azure archives data](data-retention-archive.md) that's more than eight days old but still within the table's original retention period. |
| Alerts | Supported. | Not supported. |

> [!NOTE]
> The Basic log data plan isn't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
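
If a table supports Basic logs, you can switch its plan in the portal, with the API, or with the Azure CLI. The following is a minimal CLI sketch with placeholder workspace and table names; switching back to the Analytics plan uses the same command with `--plan Analytics`.

```azurecli
# Switch a supported table to the Basic log data plan. The resource group,
# workspace, and table names below are placeholders.
az monitor log-analytics workspace table update \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name ContainerLogV2 \
  --plan Basic
```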
-## When should I use Basic Logs?
-The decision whether to configure a table for Basic Logs is based on the following criteria:
+## When should I use Basic logs?
-- The table currently [supports Basic Logs](#which-tables-support-basic-logs).-- You don't require more than eight days of data retention for the table.-- You only require basic queries of the data using a limited version of the query language.-- The cost savings for data ingestion over a month exceed the expected cost for any expected queries-
-## Which tables support Basic Logs?
-
-By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts. You can currently configure the following tables for Basic Logs:
-
-| Table | Details|
-|:|:|
-| Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
-| [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations) | Communication Services incoming requests Calls. |
-| [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
-| [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
-| [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | Health Data Services operational logs. |
-| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. |
-| [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations) | Azure Media Services encoder connects, disconnects, or discontinues. |
-| [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key, or license acquisition. |
-| [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Services account health status. |
-| [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. |
-| [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. |
-| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
-| [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Dev Center resources data plane audit logs. For example, dev boxes and environment stop, start, delete. |
-| [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs) | Azure Storage blob service logs. |
-| [StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs) | Azure Storage file service logs. |
-| [StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs) | Azure Storage queue service logs. |
-| [StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | Azure Storage table service logs. |
+By default, all tables in your Log Analytics workspace are Analytics tables, and they're available for query and alerts.
+
+Configure a table for Basic logs if:
+- You don't require more than eight days of data retention for the table.
+- You only require basic queries of the data using a [limited version of the query language](basic-logs-query.md#limitations).
+- The cost savings for data ingestion exceed the expected cost of the queries you plan to run.
+- The table supports Basic logs.
+
+ These tables currently support Basic logs:
+
+ | Table | Details|
+ |:|:|
+ | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
+ | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations) | Communication Services incoming requests Calls. |
+ | [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. |
+ | [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
+ | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | Health Data Services operational logs. |
+ | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. |
+ | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations) | Azure Media Services encoder connects, disconnects, or discontinues. |
+ | [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key or license acquisition. |
+ | [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Services account health status. |
+ | [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. |
+ | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | Azure Container Apps logs, generated within a Container Apps environment. |
+ | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | Used in [Container insights](../containers/container-insights-overview.md) and includes verbose text-based log records. |
+ | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) | Dev Center resources data plane audit logs. For example, dev boxes and environment stop, start, delete. |
+ | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs) | Azure Storage blob service logs. |
+ | [StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs) | Azure Storage file service logs. |
+ | [StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs) | Azure Storage queue service logs. |
+ | [StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | Azure Storage table service logs. |
+
> [!NOTE]
-> Tables created with the [Data Collector API](data-collector-api.md) don't support Basic Logs.
+> Tables created with the [Data Collector API](data-collector-api.md) don't support Basic logs.
## Set a table's log data plan
+You can switch a table's plan once a week.
+ # [Portal](#tab/portal-1)
-To configure a table for Basic Logs or Analytics Logs in the Azure portal:
+To configure a table for Basic logs or Analytics logs in the Azure portal:
1. From the **Log Analytics workspaces** menu, select **Tables**.
To configure a table for Basic Logs or Analytics Logs in the Azure portal:
1. From the **Table plan** dropdown on the table configuration screen, select **Basic** or **Analytics**.
- The **Table plan** dropdown is enabled only for [tables that support Basic Logs](#which-tables-support-basic-logs).
+ The **Table plan** dropdown is enabled only for [tables that support Basic logs](#when-should-i-use-basic-logs).
:::image type="content" source="media/basic-logs-configure/log-analytics-configure-table-plan.png" lightbox="media/basic-logs-configure/log-analytics-configure-table-plan.png" alt-text="Screenshot that shows the Table plan dropdown on the table configuration screen.":::
To configure a table for Basic Logs or Analytics Logs in the Azure portal:
# [API](#tab/api-1)
-To configure a table for Basic Logs or Analytics Logs, call the **Tables - Update** API:
+To configure a table for Basic logs or Analytics logs, call the [Tables - Update API](/rest/api/loganalytics/tables/create-or-update):
```http PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/tables/<tableName>?api-version=2021-12-01-preview
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
**Example**
-This example configures the `ContainerLogV2` table for Basic Logs.
+This example configures the `ContainerLogV2` table for Basic logs.
-Container insights uses `ContainerLog` by default. To switch to using `ContainerLogV2` for Container insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic Logs.
+Container insights uses `ContainerLog` by default. To switch to using `ContainerLogV2` for Container insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic logs.
**Sample request**
Container insights uses `ContainerLog` by default. To switch to using `Container
PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview ```
-Use this request body to change to Basic Logs:
+Use this request body to change to Basic logs:
```http {
Use this request body to change to Analytics Logs:
**Sample response**
-This sample is the response for a table changed to Basic Logs:
+This sample is the response for a table changed to Basic logs:
Status code: 200
Status code: 200
# [CLI](#tab/cli-1)
-To configure a table for Basic Logs or Analytics Logs, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and set the `--plan` parameter to `Basic` or `Analytics`.
+To configure a table for Basic logs or Analytics logs, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and set the `--plan` parameter to `Basic` or `Analytics`.
For example:

-- To set Basic Logs:
+- To set Basic logs:
```azurecli az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Basic
For example:
az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics ``` --
-## View a table's log data plan
-
-# [Portal](#tab/portal-2)
-
-To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set table configuration](#set-a-tables-log-data-plan).
-
-Alternatively:
-
-1. From the **Azure Monitor** menu, select **Logs** and select your workspace for the [scope](scope.md). See the [Log Analytics tutorial](log-analytics-tutorial.md#view-table-information) for a walkthrough.
-1. Open the **Tables** tab, which lists all tables in the workspace.
-
- Basic Logs tables have a unique icon:
-
- :::image type="content" source="media/basic-logs-configure/table-icon.png" alt-text="Screenshot that shows the Basic Logs table icon in the table list." lightbox="media/basic-logs-configure/table-icon.png":::
-
- You can also hover over a table name for the table information view, which indicates whether the table is configured as Basic Logs:
-
- :::image type="content" source="media/basic-logs-configure/table-info.png" alt-text="Screenshot that shows the Basic Logs table indicator in the table details." lightbox="media/basic-logs-configure/table-info.png":::
-
-# [API](#tab/api-2)
-
-To check the configuration of a table, call the **Tables - Get** API:
-
-```http
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
-```
-
-**Response body**
-
-|Name | Type | Description |
-| | | |
-|properties.plan | string | The table plan. Either `Analytics` or `Basic`. |
-|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is eight days, fixed. In `Analytics Logs`, the value is between 7 and 730 days.|
-|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.|
-|properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|
-|properties.lastPlanModifiedDate|String|Last time when the plan was set for this table. Null if no change was ever done from the default settings (read-only).
-
-**Sample request**
+# [PowerShell](#tab/azure-powershell)
-```http
-GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
-```
+To configure a table's log data plan, use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet:
-**Sample response**
-
-Status code: 200
-```http
-{
- "properties": {
- "retentionInDays": 8,
- "totalRetentionInDays": 8,
- "archiveRetentionInDays": 0,
- "plan": "Basic",
- "lastPlanModifiedDate": "2022-01-01T14:34:04.37",
- "schema": {...},
- "provisioningState": "Succeeded"
- },
- "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
- "name": "ContainerLogV2"
-}
-```
-
-# [CLI](#tab/cli-2)
-
-To check the configuration of a table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
-
-For example:
-
-```azurecli
-az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --output table
+```powershell
+Update-AzOperationalInsightsTable -ResourceGroupName RG-NAME -WorkspaceName WORKSPACE-NAME -TableName TABLE-NAME -Plan Basic|Analytics
```
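For example, a minimal sketch that switches the `ContainerLogV2` table to the Basic plan; the resource group and workspace names are placeholders:

```powershell
# Sketch: set the ContainerLogV2 table to the Basic log data plan.
Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName ContainerLogV2 -Plan Basic
```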
-## Retain and archive Basic Logs
-
-Analytics tables retain data based on a [retention and archive policy](data-retention-archive.md) you set.
-
-Basic Logs tables retain data for eight days. When you change an existing table's plan to Basic Logs, Azure archives data that's more than eight days old but still within the table's original retention period.
## Next steps

-- [Query data in Basic Logs](basic-logs-query.md)
+- [View table properties](../logs/manage-logs-tables.md#view-table-properties)
- [Set retention and archive policies](../logs/data-retention-archive.md)-
+- [Query data in Basic logs](basic-logs-query.md)
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
Last updated 11/09/2022
# Add or delete tables and columns in Azure Monitor Logs
-[Data collection rules](../essentials/data-collection-rule-overview.md) let you [filter and transform log data](../essentials/data-collection-transformations.md) before sending the data to an [Azure table or a custom table](../logs/manage-logs-tables.md#table-type). This article explains how to create custom tables and add custom columns to tables in your Log Analytics workspace.
+[Data collection rules](../essentials/data-collection-rule-overview.md) let you [filter and transform log data](../essentials/data-collection-transformations.md) before sending the data to an [Azure table or a custom table](../logs/manage-logs-tables.md#table-type-and-schema). This article explains how to create custom tables and add custom columns to tables in your Log Analytics workspace.
## Prerequisites
Azure tables have predefined schemas. To store log data in a different schema, u
> [!NOTE] > For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log).
-### [Portal](#tab/portal-1)
+# [Portal](#tab/azure-portal-1)
To create a custom table in the Azure portal:
To create a custom table in the Azure portal:
:::image type="content" source="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" lightbox="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" alt-text="Screenshot showing new data collection rule.":::
-4. Select a [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) and select **Next**.
+1. Select a [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) and select **Next**.
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot showing custom log table name.":::
To create a custom table in the Azure portal:
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-create.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-create.png" alt-text="Screenshot showing custom log create.":::
-### [PowerShell](#tab/powershell-1)
+# [API](#tab/api-1)
+
+To create a custom table, call the [Tables - Create Or Update API](/rest/api/loganalytics/tables/create-or-update).
+
+# [CLI](#tab/azure-cli-1)
+
+To create a custom table, run the [az monitor log-analytics workspace table create](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-create) command.
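As an illustration, a minimal sketch of the call; the table and column names are placeholders, custom table names end in `_CL`, and `--columns` takes `Name=Type` pairs:

```azurecli
az monitor log-analytics workspace table create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name MyTable_CL --columns TimeGenerated=datetime RawData=string
```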
+# [PowerShell](#tab/azure-powershell-1)
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to create a custom table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to collect a different table.
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to cre
:::image type="content" source="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell in the Azure portal.":::
-2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+1. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
```PowerShell $tableParams = @'
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to cre
## Delete a table
-You can delete any table in your Log Analytics workspace that's not an [Azure table](../logs/manage-logs-tables.md#table-type).
+You can delete any table in your Log Analytics workspace that's not an [Azure table](../logs/manage-logs-tables.md#table-type-and-schema).
> [!NOTE] > Deleting a restored table doesn't delete the data in the source table.
-### [Portal](#tab/portal-2)
+# [Portal](#tab/azure-portal-2)
To delete a table from the Azure portal:
To delete a table from the Azure portal:
:::image type="content" source="media/search-job/delete-table.png" alt-text="Screenshot that shows the Delete Table screen for a table in a Log Analytics workspace." lightbox="media/search-job/delete-table.png":::
-### [API](#tab/api-2)
+# [API](#tab/api-2)
-To delete a table, call the **Tables - Delete** API:
+To delete a table, call the [Tables - Delete API](/rest/api/loganalytics/tables/delete).
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
-```
-
-### [CLI](#tab/cli-2)
+# [CLI](#tab/azure-cli-2)
To delete a table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
-For example:
+# [PowerShell](#tab/azure-powershell-2)
-```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH
-```
+To delete a table using PowerShell:
+
+1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell in the Azure portal.":::
+
+1. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+
+ ```PowerShell
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/NewCustom_CL?api-version=2021-12-01-preview" -Method DELETE
+ ```
+
## Add or delete a custom column
+You can modify the schema of custom tables and add custom columns to, or delete columns from, a standard table.
+# [Portal](#tab/azure-portal-3)
To add a custom column to a table in your Log Analytics workspace, or delete a column:

1. From the **Log Analytics workspaces** menu, select **Tables**.
To add a custom column to a table in your Log Analytics workspace, or delete a c
1. Select **Save** to save the new column. 1. To delete a column, select the **Delete** icon to the left of the column you want to delete.
+# [API](#tab/api-3)
+
+To add or delete a custom column, call the [Tables - Create Or Update API](/rest/api/loganalytics/tables/create-or-update).
+
+# [CLI](#tab/azure-cli-3)
+
+To add or delete a custom column, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command.
+
+# [PowerShell](#tab/azure-powershell-3)
+
+To add a new column to an Azure or custom table, run:
+
+```powershell
+$tableParams = @'
+{
+ "properties": {
+ "schema": {
+ "name": "<TableName>",
+ "columns": [
+ {
+ "name": ""<ColumnName>",
+ "description": "First custom column",
+ "type": "string",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ }
+ ]
+ }
+ }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/<TableName>?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
+
+The `PUT` call returns the updated table properties, which should include the newly added column.
+
+**Example**
+
+Run this command to add a custom column, called `Custom1_CF`, to the Azure `Heartbeat` table:
+
+```powershell
+$tableParams = @'
+{
+ "properties": {
+ "schema": {
+ "name": "Heartbeat",
+ "columns": [
+ {
+ "name": "Custom1_CF",
+ "description": "The second custom column",
+ "type": "datetime",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ }
+ ]
+ }
+ }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/Heartbeat?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
+
+Now, to delete the newly added column and add another one instead, run:
+
+```powershell
+$tableParams = @'
+{
+ "properties": {
+ "schema": {
+ "name": "Heartbeat",
+ "columns": [
+ {
+ "name": "Custom2_CF",
+ "description": "The second custom column",
+ "type": "datetime",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ }
+ ]
+ }
+ }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/Heartbeat?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
+
+To delete all custom columns in the table, run:
+
+```powershell
+$tableParams = @'
+{
+ "properties": {
+ "schema": {
+ "name": "Heartbeat",
+ "columns": [
+ ]
+ }
+ }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/Heartbeat?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
++ ## Next steps Learn more about:
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
Title: Manage tables in a Log Analytics workspace
-description: Learn how to manage the data and costs related to a Log Analytics workspace effectively
+description: Learn how to manage table settings in a Log Analytics workspace based on your data analysis and cost management needs.
Last updated 11/09/2022
-# Customer intent: As a Log Analytics workspace administrator, I want to understand the options I have for configuring tables in a Log Analytics workspace so that I can manage the data and costs related to a Log Analytics workspace effectively.
+# Customer intent: As a Log Analytics workspace administrator, I want to understand how table properties work and how to view and manage table properties so that I can manage the data and costs related to a Log Analytics workspace effectively.
# Manage tables in a Log Analytics workspace
-Azure Monitor Logs stores log data in tables. Table configuration lets you define how to store collected data, how long to retain the data, and whether you collect the data for auditing and troubleshooting or for ongoing data analysis and regular use by features and services.
+A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one place for data analysis, for use by other services such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example by using [Logic Apps](../logs/logicapp-flow-connector.md). A Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
-This article explains the table configuration options in Azure Monitor Logs and how to manage table settings based on your data analysis and cost management needs.
-
-## Table configuration settings
+## Table properties
This diagram provides an overview of the table configuration options in Azure Monitor Logs: :::image type="content" source="media/manage-logs-tables/azure-monitor-logs-table-management.png" alt-text="Diagram that shows table configuration options, including table type, table schema, table plan, and retention and archive policies." lightbox="media/manage-logs-tables/azure-monitor-logs-table-management.png":::
-In the Azure portal, you can view and set table configuration settings by selecting **Tables** from your Log Analytics workspace.
--
-## Table type
+### Table type and schema
-A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Logic Apps](../logs/logicapp-flow-connector.md).
+A table's schema is the set of columns that make up the table, into which Azure Monitor Logs collects log data from one or more data sources.
Your Log Analytics workspace can contain the following types of tables:
-| Table type | Data source | Setup |
-|-|-|-|
-| Azure table | Logs from Azure resources or required by Azure services and solutions. | Azure Monitor Logs creates Azure tables automatically based on Azure services you use and [diagnostic settings](../essentials/diagnostic-settings.md) you configure for specific resources. |
-| Custom table | Non-Azure resource and any other data source, such as file-based logs. | [Create a custom table](../logs/create-custom-table.md).|
-| Search results | Logs within the workspace. | Azure Monitor creates a search job results table when you run a [search job](../logs/search-jobs.md). |
-| Restored logs | Archived logs. | Azure Monitor creates a restored logs table when you [restore archived logs](../logs/restore.md). |
+| Table type | Data source | Schema |
+|--|-|-|
+| Azure table | Logs from Azure resources or required by Azure services and solutions. | Azure Monitor Logs creates Azure tables automatically based on Azure services you use and [diagnostic settings](../essentials/diagnostic-settings.md) you configure for specific resources. Each Azure table has a predefined schema. You can [add columns to an Azure table](../logs/create-custom-table.md#add-or-delete-a-custom-column) to store transformed log data or enrich data in the Azure table with data from another source.|
+| Custom table | Non-Azure resources and any other data source, such as file-based logs. | You can [define a custom table's schema](../logs/create-custom-table.md) based on how you want to store data you collect from a given data source. |
+| Search results | All data stored in a Log Analytics workspace. | The schema of a search results table is based on the query you define when you [run the search job](../logs/search-jobs.md). You can't edit the schema of existing search results tables. |
+| Restored logs | Archived logs. | A restored logs table has the same schema as the table from which you [restore logs](../logs/restore.md). You can't edit the schema of existing restored logs tables. |
-## Table schema
+### Log data plan
-A table's schema is the set of columns that make up the table, into which Azure Monitor Logs collects log data from one or more data sources.
+[Configure a table's log data plan](../logs/basic-logs-configure.md) based on how often you access the data in the table:
+- The **Analytics** plan makes log data available for interactive queries and use by features and services.
+- The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance.
-### Azure table schema
+### Retention and archive
-Each Azure table has a predefined schema into which Azure Monitor Logs collects logs defined by Azure resources, services, and solutions.
+Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention policies](../logs/data-retention-archive.md) to override the default workspace retention policy and to archive data within your workspace.
-You can [add columns to an Azure table](../logs/create-custom-table.md#add-or-delete-a-custom-column) to store transformed log data or enrich data in the Azure table with data from another source.
-### Custom table schema
+To access archived data, [run a search job](../logs/search-jobs.md) or [restore data for a specific time range](../logs/restore.md).
-You can [define a custom table's schema](../logs/create-custom-table.md) based on how you want to store data you collect from a given data source.
+### Ingestion-time transformations
Reduce costs and analysis effort by using data collection rules to [filter out and transform data before ingestion](../essentials/data-collection-transformations.md) based on the schema you define for your custom table.
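As an illustration, a minimal transformation sketch for a data collection rule; the `RawData` column and the filter condition are hypothetical:

```kusto
// Keep only records that contain "ERROR" so that less data is ingested and billed.
source
| where RawData has "ERROR"
```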
-### Search results and restored logs table schema
+## View table properties
-The schema of a search results table is based on the query you define when you [run the search job](../logs/search-jobs.md).
+# [Portal](#tab/azure-portal)
-A restored logs table has the same schema as the table from which you [restore logs](../logs/restore.md).
+To view and set table properties in the Azure portal:
-You can't edit the schema of existing search results and restored logs tables.
-## Log data plan
+1. From your Log Analytics workspace, select **Tables**.
-[Configure a table's log data plan](../logs/basic-logs-configure.md) based on how often you access the data in the table. The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance. The **Analytics** plan makes log data available for interactive queries and use by features and services.
+ The **Tables** screen presents table configuration information for all tables in your Log Analytics workspace.
-## Retention and archive
+ :::image type="content" source="media/manage-logs-tables/azure-monitor-logs-table-configuration.png" alt-text="Screenshot that shows the Tables screen for a Log Analytics workspace." lightbox="media/manage-logs-tables/azure-monitor-logs-table-configuration.png":::
- Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention policies](../logs/data-retention-archive.md) to override the default workspace retention policy and to archive data within your workspace.
+1. Select the ellipsis (**...**) to the right of a table to open the table management menu.
-To access archived data, [run a search job](../logs/search-jobs.md) or [restore data for a specific time range](../logs/restore.md).
+ The available table management options vary based on the table type.
+
+ 1. Select **Manage table** to edit the table properties.
+
+ 1. Select **Edit schema** to view and edit the table schema.
+
+# [API](#tab/api)
+
+To view table properties, call the [Tables - Get API](/rest/api/loganalytics/tables/get):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
+```
+
+**Response body**
+
+|Name | Type | Description |
+| | | |
+|properties.plan | string | The table plan. Either `Analytics` or `Basic`. |
+|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is eight days, fixed. In `Analytics Logs`, the value is between 7 and 730 days.|
+|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.|
+|properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|
+|properties.lastPlanModifiedDate|string|The last time the plan was set for this table. Null if the plan was never changed from the default settings (read-only).|
+
+**Sample request**
+
+```http
+GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
+```
+
+**Sample response**
+
+Status code: 200
+```http
+{
+ "properties": {
+ "retentionInDays": 8,
+ "totalRetentionInDays": 8,
+ "archiveRetentionInDays": 0,
+ "plan": "Basic",
+ "lastPlanModifiedDate": "2022-01-01T14:34:04.37",
+ "schema": {...},
+ "provisioningState": "Succeeded"
+ },
+ "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
+ "name": "ContainerLogV2"
+}
+```
+
+To set table properties, call the [Tables - Create Or Update API](/rest/api/loganalytics/tables/create-or-update).
+
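For illustration, a minimal sketch of such a call that changes only the retention property; the body shown is illustrative rather than a full table definition:

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview

{
  "properties": {
    "retentionInDays": 30
  }
}
```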
+# [Azure CLI](#tab/azure-cli)
+
+To view table properties using Azure CLI, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --output table
+```
+
+To set table properties using Azure CLI, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command.
+
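For example, a sketch that sets the interactive retention of the `Syslog` table to 30 days; the `--retention-time` value is illustrative:

```azurecli
az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --retention-time 30
```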
+# [PowerShell](#tab/azure-powershell)
+
+To view table properties using PowerShell, run:
+
+```powershell
+Invoke-AzRestMethod -Path "/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/microsoft.operationalinsights/workspaces/ContosoWorkspace/tables/Heartbeat?api-version=2021-12-01-preview" -Method GET
+```
+
+> [!NOTE]
+> The table name used in the `-Path` parameter is case sensitive.
+
+**Sample response**
+
+```json
+{
+ "properties": {
+ "totalRetentionInDays": 30,
+ "archiveRetentionInDays": 0,
+ "plan": "Analytics",
+ "retentionInDaysAsDefault": true,
+ "totalRetentionInDaysAsDefault": true,
+ "schema": {
+ "tableSubType": "Any",
+ "name": "Heartbeat",
+ "tableType": "Microsoft",
+ "standardColumns": [
+ {
+ "name": "TenantId",
+ "type": "guid",
+ "description": "ID of the workspace that stores this record.",
+ "isDefaultDisplay": true,
+ "isHidden": true
+ },
+ {
+ "name": "SourceSystem",
+ "type": "string",
+ "description": "Type of agent the data was collected from. Possible values are OpsManager (Windows agent) or Linux.",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ },
+ {
+ "name": "TimeGenerated",
+ "type": "datetime",
+ "description": "Date and time the record was created.",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ },
+ <OMITTED>
+ {
+ "name": "ComputerPrivateIPs",
+ "type": "dynamic",
+ "description": "The list of private IP addresses of the computer.",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ }
+ ],
+ "solutions": [
+ "LogManagement"
+ ],
+ "isTroubleshootingAllowed": false
+ },
+ "provisioningState": "Succeeded",
+ "retentionInDays": 30
+ },
+ "id": "/subscriptions/{guid}/resourceGroups/{rg name}/providers/Microsoft.OperationalInsights/workspaces/{ws id}/tables/Heartbeat",
+ "name": "Heartbeat"
+}
+```
+
+Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet to set table properties.
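For example, a sketch that sets 30-day retention on the `Heartbeat` table; resource group, workspace name, and the retention value are placeholders:

```powershell
# Sketch: set interactive retention to 30 days for the Heartbeat table.
Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName Heartbeat -RetentionInDays 30
```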
++ ## Next steps
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
The search results table schema is based on the source table schema and the spec
| _OriginalType | *Type* value from source table. | | _OriginalItemId | *_ItemID* value from source table. | | _OriginalTimeGenerated | *TimeGenerated* value from source table. |
-| TimeGenerated | Time at which the search job retrieved the record from the original table. |
+| TimeGenerated | Time at which the search job ran. |
Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job.
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Instead of directly configuring the schema of the table, you can use the portal
```kusto source | extend TimeGenerated = todatetime(Time)
- | parse RawData.value with
+ | parse RawData with
ClientIP:string ' ' * ' ' *
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
For general Profiler troubleshooting, refer to the [Profiler Troubleshoot docume
For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](../app/snapshot-debugger-troubleshoot.md).
-## FAQs
+## Frequently asked questions
### If I have enabled Profiler/Snapshot Debugger and BYOS, will my data be migrated into my Storage Account?
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 11/06/2022 Last updated : 01/06/2023 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## December 2022
+
+|Subservice| Article | Description |
+||||
+General|[Azure Monitor for existing Operations Manager customers](azure-monitor-operations-manager.md)|Updated for AMA and SCOM managed instance.|
+Application-Insights|[Create an Application Insights resource](app/create-new-resource.md)|Classic Application Insights resources are deprecated and support will end on February 29th, 2024. Migrate to workspace-based resources to take advantage of new capabilities.|
+Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)](app/opentelemetry-enable.md)|Updated Node.js sample code for JavaScript and TypeScript.|
+Application-Insights|[System performance counters in Application Insights](app/performance-counters.md)|Updated code samples for .NET 6/7.|
+Application-Insights|[Sampling in Application Insights](app/sampling.md)|Updated code samples for .NET 6/7.|
+Application-Insights|[Availability alerts](app/availability-alerts.md)|This article has been rewritten with new guidance and screenshots.|
+Change-Analysis|[Tutorial: Track a web app outage using Change Analysis](change/tutorial-outages.md)|Change tutorial content to reflect changes to repo; remove and replace a few sections.|
+Containers|[Configure Azure CNI networking in Azure Kubernetes Service (AKS)](../../articles/aks/configure-azure-cni.md)|Added steps to enable IP subnet usage|
+Containers|[Reports in Container insights](containers/container-insights-reports.md)|Updated the documents to reflect the steps to enable IP subnet Usage|
+Essentials|[Best practices for data collection rule creation and management in Azure Monitor](essentials/data-collection-rule-best-practices.md)|New article|
+Essentials|[Configure self-managed Grafana to use Azure Monitor managed service for Prometheus (preview) with Azure Active Directory.](essentials/prometheus-self-managed-grafana-azure-active-directory.md)|New Article: Configure self-managed Grafana to use Azure Monitor managed service for Prometheus (preview) with Azure Active Directory.|
+Logs|[Azure Monitor SCOM Managed Instance (preview)](vm/scom-managed-instance-overview.md)|New article|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Updated the list of tables that support Basic logs.|
+Virtual-Machines|[Tutorial: Create availability alert rule for Azure virtual machine (preview)](vm/tutorial-monitor-vm-alert-availability.md)|New article|
+Virtual-Machines|[Tutorial: Enable recommended alert rules for Azure virtual machine](vm/tutorial-monitor-vm-alert-recommended.md)|New article|
+Virtual-Machines|[Tutorial: Enable monitoring with VM insights for Azure virtual machine](vm/tutorial-monitor-vm-enable-insights.md)|New article|
+Virtual-Machines|[Monitor Azure virtual machines](../../articles/virtual-machines/monitor-vm.md)|Updated for AMA and availability metric.|
+Virtual-Machines|[Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)|Updated flow for enabling VM insights with Azure Monitor Agent by using Azure Policy.|
+Visualizations|[Creating an Azure Workbook](visualize/workbooks-create-workbook.md)|added Tutorial - resource centric logs queries in workbooks|
## November 2022
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
AzAcSnap is a lightweight application that is typically executed from an externa
This is a list of technical articles where AzAcSnap has been used as part of a data protection strategy. * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
-* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
* [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347)
-* [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
+* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
+* [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172)
* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620)
+* [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
## Command synopsis
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
The following table describes the network topologies supported by each network f
| Connectivity to volume in a peered VNet (Cross region or global peering) | Yes* | No | | Connectivity to a volume over ExpressRoute gateway | Yes | Yes | | ExpressRoute (ER) FastPath | Yes | No |
-| Connectivity from on-premises to a volume in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit | Yes | Yes |
-| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway | Yes | Yes |
-| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway and VNet peering with gateway transit | Yes | Yes |
+| Connectivity from on-premises to a volume in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit | Yes | Yes |
+| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway | Yes | Yes |
+| Connectivity from on-premises to a volume in a spoke VNet over VPN gateway and VNet peering with gateway transit | Yes | Yes |
| Connectivity over Active/Passive VPN gateways | Yes | Yes | | Connectivity over Active/Active VPN gateways | Yes | No | | Connectivity over Active/Active Zone Redundant gateways | Yes | No |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 12/09/2022 Last updated : 01/09/2023 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md) * [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043) * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
+* [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172)
* [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files) ### SAP IQ-NLS
azure-netapp-files Configure Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-virtual-wan.md
Previously updated : 12/19/2022 Last updated : 01/05/2023 # Configure Virtual WAN for Azure NetApp Files (preview)
This article will explain how to deploy and access an Azure NetApp Files volume
## Considerations
-* Inter-region secure hub connectivity is not supported. A spoke VNet containing Azure NetApp Files in region A cannot connect to a secure virtual hub in region B.
* You should be familiar with network policies for Azure NetApp Files [private endpoints](../private-link/disable-private-endpoint-network-policy.md). Refer to [Route Azure NetApp Files traffic from on-premises via Azure Firewall](#route-azure-netapp-files-traffic-from-on-premises-via-azure-firewall) for further information. ## Before you begin
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
na Previously updated : 12/15/2022 Last updated : 01/04/2023 # Create cross-zone replication relationships for Azure NetApp Files
Before you begin, you should review the [requirements and considerations for cro
## Register the feature >[!IMPORTANT]
-> Cross-zone replication uses the [availability zone volume placement feature](use-availability-zones.md). The availability zone volume placement feature is currently in preview. You must [register the feature](manage-availability-zone-volume-placement.md#register-the-feature) before you can use availability zone volume placement.
+> Cross-zone replication uses the [availability zone volume placement feature](use-availability-zones.md). The availability zone volume placement feature is currently in preview. You must [register the feature](manage-availability-zone-volume-placement.md#register-the-feature) before you can register the cross-zone replication feature.
Cross-zone replication is currently in preview. You need to register the feature before using it for the first time. After registration, the feature is enabled and works in the background. No UI control is required.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
## Create the source volume with an availability zone
-This process requires that your account is subscribed to the availability zone volume placement private preview. Contact your account team to request access to the availability zone volume placement private preview program.
+This process requires that your account is subscribed to the [availability zone volume placement feature](use-availability-zones.md).
1. Select **Volumes** from your capacity pool. Then select **+ Add volume** to create a volume.
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
na Previously updated : 05/25/2022 Last updated : 01/06/2023
You cannot apply a snapshot policy to a destination volume in cross-region repli
You can modify an existing snapshot policy to change the policy state, snapshot frequency (hourly, daily, weekly, or monthly), or number of snapshots to keep.
-When modifying a snapshot policy, snapshots created with an old schedule will not be deleted or overwritten by the new schedule or disable the schedule. If you proceed with the update, you will have to manually delete the old snapshots.
+>[!IMPORTANT]
+>When modifying a snapshot policy, make note of the naming format. Snapshots created with policies modified before March 2022 will have a long name, for example `daily-0-min-past-1am.2022-11-03_0100`, while snapshots created with policies after March 2022 will have a shorter name, for example `daily.2022-11-29_0100`.
+>
+> If your snapshot policy is creating snapshots using the long naming convention, modifications to the snapshot policy will not be applied to existing snapshots. The snapshots created with the previous schedule will not be deleted or overwritten by the new schedule. You will have to manually delete the old snapshots.
+>
+> If your snapshot policy is creating snapshots using the short naming convention, policy modifications will be applied to the existing snapshots.
1. From the NetApp Account view, select **Snapshot policy**.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 01/03/2022 Last updated : 01/06/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Before you deploy Azure NetApp Files volumes, you must identify the AD DS integr
### <a name="network-requirements"></a>Network requirements
-Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable and low-latency network connectivity (< 10ms RTT) to AD DS domain controllers. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts.
+Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable and low-latency network connectivity (less than 10 ms RTT) to AD DS domain controllers. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts.
Ensure that you meet the following requirements about network topology and configurations:
Ensure that you meet the following requirements about network topology and confi
* Ensure that AD DS domain controllers have network connectivity from the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
* Peered virtual network topologies with AD DS domain controllers must have peering configured correctly to support Azure NetApp Files to AD DS domain controller network connectivity.
* Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS.
-* Ensure that the latency is less than 10ms RTT between Azure NetApp Files and AD DS domain controllers.
+* Ensure that the latency is less than 10 ms RTT between Azure NetApp Files and AD DS domain controllers.
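To spot-check this connectivity from a virtual machine in a VNet that can reach the domain controllers, a quick test might look like the following sketch; the domain controller name and port are placeholders:

```powershell
# Sketch: verify that a domain controller is reachable on TCP port 445 (SMB) from a connected VNet.
Test-NetConnection -ComputerName dc01.contoso.com -Port 445
```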
The required network ports are as follows:
Ensure that you meet the following requirements about the DNS configurations:
### Time source requirements
-Azure NetApp Files uses **time.windows.com** as the time source. Ensure that the domain controllers used by Azure NetApp Files are configured to use time.windows.com or another accurate, stable root (stratum 1) time source. If there is more than a five-minute skew between Azure NetApp Files and the customer client or AS DS domain controllers, authentication will fail, and access to Azure NetApp Files volumes might also fail.
+Azure NetApp Files uses **time.windows.com** as the time source. Ensure that the domain controllers used by Azure NetApp Files are configured to use time.windows.com or another accurate, stable root (stratum 1) time source. If there's more than a five-minute skew between Azure NetApp Files and your client or AD DS domain controllers, authentication will fail; access to Azure NetApp Files volumes might also fail.
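As an illustration only, a domain controller can be pointed at time.windows.com with the Windows Time service; the exact flags depend on your time hierarchy, so treat this as a sketch:

```powershell
# Sketch: configure the Windows Time service on a domain controller to sync from time.windows.com, then resync.
w32tm /config /manualpeerlist:"time.windows.com" /syncfromflags:manual /update
w32tm /resync
```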
## Decide which AD DS to use with Azure NetApp Files
-Azure NetApp Files supports both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (AAD DS) for AD connections. Before you create an AD connection, you need to decide whether to use AD DS or AAD DS.
+Azure NetApp Files supports both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS) for AD connections. Before you create an AD connection, you need to decide whether to use AD DS or Azure AD DS.
For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).
You should use Active Directory Domain Services (AD DS) in the following scenari
* You have AD DS users hosted in an on-premises AD DS domain that need access to Azure NetApp Files resources. * You have applications hosted partially on-premises and partially in Azure that need access to Azure NetApp Files resources.
-* You donΓÇÖt need AAD DS integration with an Azure AD tenant in your subscription, or AAD DS is incompatible with your technical requirements.
+* You donΓÇÖt need Azure AD DS integration with an Azure AD tenant in your subscription, or Azure AD DS is incompatible with your technical requirements.
> [!NOTE] > Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC).
If you choose to use AD DS with Azure NetApp Files, follow the guidance in [Exte
### Azure Active Directory Domain Services considerations
-[Azure Active Directory Domain Services (AAD DS)](../active-directory-domain-services/overview.md) is a managed AD DS domain that is synchronized with your Azure AD tenant. The main benefits to using Azure AD DS are as follows:
+[Azure Active Directory Domain Services (Azure AD DS)](../active-directory-domain-services/overview.md) is a managed AD DS domain that is synchronized with your Azure AD tenant. The main benefits to using Azure AD DS are as follows:
-* AAD DS is a standalone domain. As such, there is no need to set up network connectivity between on-premises and Azure.
+* Azure AD DS is a standalone domain. As such, there's no need to set up network connectivity between on-premises and Azure.
* Provides simplified deployment and management experience.
-You should use AAD DS in the following scenarios:
+You should use Azure AD DS in the following scenarios:
* There's no need to extend AD DS from on-premises into Azure to provide access to Azure NetApp Files resources.
* Your security policies do not allow the extension of on-premises AD DS into Azure.
-* You donΓÇÖt have strong knowledge of AD DS. AAD DS can improve the likelihood of good outcomes with Azure NetApp Files.
+* You don't have strong knowledge of AD DS. Azure AD DS can improve the likelihood of good outcomes with Azure NetApp Files.
-If you choose to use AAD DS with Azure NetApp Files, see [Azure AD DS documentation](../active-directory-domain-services/overview.md) for [architecture](../active-directory-domain-services/scenarios.md), deployment, and management guidance. Ensure that you also meet the Azure NetApp Files [Network](#network-requirements) and [DNS requirements](#ad-ds-requirements).
+If you choose to use Azure AD DS with Azure NetApp Files, see [Azure AD DS documentation](../active-directory-domain-services/overview.md) for [architecture](../active-directory-domain-services/scenarios.md), deployment, and management guidance. Ensure that you also meet the Azure NetApp Files [Network](#network-requirements) and [DNS requirements](#ad-ds-requirements).
## Design AD DS site topology for use with Azure NetApp Files
Incorrect AD DS site topology or configuration can result in the following behav
An AD DS site topology for Azure NetApp Files is a logical representation of the [Azure NetApp Files network](#network-requirements). Designing an AD DS site topology for Azure NetApp Files involves planning for domain controller placement, designing sites, DNS infrastructure, and network subnets to ensure good connectivity among the Azure NetApp Files service, Azure NetApp Files storage clients, and AD DS domain controllers.
+In addition to the domain controllers assigned to the AD DS site specified in the Azure NetApp Files **AD Site Name**, the Azure NetApp Files AD DS site can have one or more subnets assigned to it.
+
+>[!NOTE]
+>It's essential that all the domain controllers and subnets assigned to the Azure NetApp Files AD DS site are well connected (less than 10 ms RTT latency) and reachable by the network interfaces used by the Azure NetApp Files volumes.
+>
+>If you're using Standard network features, ensure that any user-defined routes (UDRs) or Network Security Group (NSG) rules don't block Azure NetApp Files network communication with the AD DS domain controllers assigned to the Azure NetApp Files AD DS site (see the sketch after this note).
+>
+>If you're using Network Virtual Appliances or firewalls (such as Palo Alto Networks or Fortinet firewalls), they must be configured to not block network traffic between Azure NetApp Files and the AD DS domain controllers and subnets assigned to the Azure NetApp Files AD DS site.
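For example, a minimal Azure CLI sketch (the resource group and NSG names are placeholders) to review the rules on the NSG applied to the Azure NetApp Files delegated subnet:

```bash
# List all rules of the NSG protecting the delegated subnet, in a readable table
az network nsg rule list --resource-group <resource-group> --nsg-name <nsg-name> --output table
```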
+ ### How Azure NetApp Files uses AD DS site information Azure NetApp Files uses the **AD Site Name** configured in the [Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) to discover which domain controllers are present to support authentication, domain join, LDAP queries, and Kerberos ticket operations. #### AD DS domain controller discovery
-Azure NetApp Files initiates domain controller discovery every four hours. Azure NetApp Files queries the site-specific service (SRV) resource record to determine which domain controllers are in the AD DS site specified in the **AD Site Name** field of the Azure NetApp Files AD connection. The associated services hosted on the domain controllers (such as Kerberos, LDAP, Net Logon, and LSA) server discovery checks the status of the services hosted on the domain controllers and selects the optimal domain controller for authentication requests.
+Azure NetApp Files initiates domain controller discovery every four hours. Azure NetApp Files queries the site-specific DNS service (SRV) resource record to determine which domain controllers are in the AD DS site specified in the **AD Site Name** field of the Azure NetApp Files AD connection. Azure NetApp Files domain controller server discovery checks the status of the services hosted on the domain controllers (such as Kerberos, LDAP, Net Logon, and LSA) and selects the optimal domain controller for authentication requests.
+
+The DNS service (SRV) resource records for the AD DS site specified in the AD Site name field of the Azure NetApp Files AD connection must contain the list of IP addresses for the AD DS domain controllers that will be used by Azure NetApp Files. You can check the validity of the DNS (SRV) resource record by using the `nslookup` utility.
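For example, a minimal check of the site-specific domain controller locator record (the AD DS site name and domain name below are placeholders) might look like this:

```bash
# Query the site-specific LDAP SRV record for the AD DS site used by Azure NetApp Files
nslookup -type=SRV _ldap._tcp.<AD-Site-Name>._sites.dc._msdcs.<domain-name>
```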
> [!NOTE] > If you make changes to the domain controllers in the AD DS site that is used by Azure NetApp Files, wait at least four hours between deploying new AD DS domain controllers and retiring existing AD DS domain controllers. This wait time enables Azure NetApp Files to discover the new AD DS domain controllers.
Incorrect or incomplete AD DS site topology or configuration can result in volum
Azure NetApp Files uses the AD DS Site to discover the domain controllers and subnets assigned to the AD DS Site defined in the AD Site Name. All domain controllers assigned to the AD DS Site must have good network connectivity from the Azure virtual network interfaces used by ANF and be reachable. AD DS domain controller VMs assigned to the AD DS Site that are used by Azure NetApp Files must be excluded from cost management policies that shut down VMs.
-You must update the AD DS Site configuration whenever new domain controllers are deployed into a subnet assigned to the AD DS site that is used by the Azure NetApp Files AD Connection. Ensure that the DNS SRV records for the site reflect any changes to the domain controllers assigned to the AD DS Site used by Azure NetApp Files.
+If Azure NetApp Files is not able to reach any domain controllers assigned to the AD DS site, the domain controller discovery process will query the AD DS domain for a list of all domain controllers. The list of domain controllers returned from this query is an unordered list. As a result, Azure NetApp Files may try to use domain controllers that are not reachable or well-connected, which can cause volume creation failures, problems with client queries, authentication failures, and failures to modify Azure NetApp Files AD connections.
+
+You must update the AD DS Site configuration whenever new domain controllers are deployed into a subnet assigned to the AD DS site that is used by the Azure NetApp Files AD Connection. Ensure that the DNS SRV records for the site reflect any changes to the domain controllers assigned to the AD DS Site used by Azure NetApp Files. You can check the validity of the DNS (SRV) resource record by using the `nslookup` utility.
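As a sketch (the site and domain names are placeholders), you can verify both the LDAP and Kerberos locator records for the site after a change:

```bash
# Confirm that the site-specific SRV records reflect the current set of domain controllers
nslookup -type=SRV _ldap._tcp.<AD-Site-Name>._sites.dc._msdcs.<domain-name>
nslookup -type=SRV _kerberos._tcp.<AD-Site-Name>._sites.dc._msdcs.<domain-name>
```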
> [!NOTE]
-> Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC). To prevent Azure NetApp Files from using an RODC, do not configure the **AD Site Name** filed of the AD connections with an RODC.
+> Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC). To prevent Azure NetApp Files from using an RODC, do not configure the **AD Site Name** field of the AD connections with an RODC.
### Sample AD DS site topology configuration for Azure NetApp Files
To create the subnet object that maps to the Azure NetApp Files delegated subnet
[Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) enables you to replicate Azure NetApp Files volumes from one region to another region to support business continuance and disaster recovery (BC/DR) requirements.
-Azure NetApp Files SMB, dual-protocol, and NFSv4.1 Kerberos volumes support cross-region replication. Replication of these volumes requires the following:
+Azure NetApp Files SMB, dual-protocol, and NFSv4.1 Kerberos volumes support cross-region replication. Replication of these volumes requires:
* A NetApp account created in both the source and destination regions. * An Azure NetApp Files Active Directory connection in the NetApp account created in the source and destination regions.
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 12/02/2022 Last updated : 01/05/2023 # Move operation support for resources
Before starting your move operation, review the [checklist](./move-resource-grou
> | networkinterfaces | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. | > | networkprofiles | No | No | No | > | networksecuritygroups | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
-> | networkwatchers | No | No | No |
+> | networkwatchers | **Yes** | No | No |
> | networkwatchers / connectionmonitors | **Yes** | No | No | > | networkwatchers / flowlogs | **Yes** | No | No | > | networkwatchers / pingmeshes | **Yes** | No | No |
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
azure-signalr Signalr Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-csharp.md
ms.devlang: csharp Previously updated : 03/30/2022 Last updated : 12/28/2022
In this article, you'll learn how to use SignalR Service and Azure Functions to build a serverless application with C# to broadcast messages to clients.
+# [In-process](#tab/in-process)
+ > [!NOTE] > You can get the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/csharp).
+# [Isolated process](#tab/isolated-process)
+
+> [!NOTE]
+> You can get the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/csharp-isolated).
+++ ## Prerequisites The following prerequisites are needed for this quickstart:
You'll need the Azure Functions Core Tools for this step.
1. Create an empty directory and change to the directory with the command line. 1. Initialize a new project.
+ # [In-process](#tab/in-process)
+ ```bash # Initialize a function project func init --worker-runtime dotnet
You'll need the Azure Functions Core Tools for this step.
dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService ```
+ # [Isolated process](#tab/isolated-process)
+
+ ```bash
+ # Initialize a function project
+ func init --worker-runtime dotnet-isolated
+
+ # Add extensions package references to the project
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Http
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.SignalRService
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Timer
+ ```
+ 1. Using your code editor, create a new file with the name *Function.cs*. Add the following code to *Function.cs*:
+ # [In-process](#tab/in-process)
+ ```csharp using System; using System.IO;
You'll need the Azure Functions Core Tools for this step.
using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.Azure.WebJobs.Extensions.SignalRService; using Newtonsoft.Json;
-
+ namespace CSharp { public static class Function
You'll need the Azure Functions Core Tools for this step.
private static HttpClient httpClient = new HttpClient(); private static string Etag = string.Empty; private static string StarCount = "0";
-
+ [FunctionName("index")] public static IActionResult GetHomePage([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, ExecutionContext context) {
You'll need the Azure Functions Core Tools for this step.
ContentType = "text/html", }; }
-
+ [FunctionName("negotiate")]
- public static SignalRConnectionInfo Negotiate(
+ public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, [SignalRConnectionInfo(HubName = "serverless")] SignalRConnectionInfo connectionInfo) { return connectionInfo; }
-
+ [FunctionName("broadcast")] public static async Task Broadcast([TimerTrigger("*/5 * * * * *")] TimerInfo myTimer, [SignalR(HubName = "serverless")] IAsyncCollector<SignalRMessage> signalRMessages)
You'll need the Azure Functions Core Tools for this step.
var result = JsonConvert.DeserializeObject<GitResult>(await response.Content.ReadAsStringAsync()); StarCount = result.StarCount; }
-
+ await signalRMessages.AddAsync( new SignalRMessage {
You'll need the Azure Functions Core Tools for this step.
Arguments = new[] { $"Current star count of https://github.com/Azure/azure-signalr is: {StarCount}" } }); }
-
+ private class GitResult { [JsonRequired]
You'll need the Azure Functions Core Tools for this step.
} ```
+ # [Isolated process](#tab/isolated-process)
+
+ ```csharp
+ using System.Net;
+ using System.Net.Http.Json;
+ using System.Text.Json.Serialization;
+ using Microsoft.Azure.Functions.Worker;
+ using Microsoft.Azure.Functions.Worker.Http;
+
+ namespace csharp_isolated;
+
+ public class Functions
+ {
+ private static readonly HttpClient HttpClient = new();
+ private static string Etag = string.Empty;
+ private static int StarCount = 0;
+
+ [Function("index")]
+ public static HttpResponseData GetHomePage([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteString(File.ReadAllText("content/https://docsupdatetracker.net/index.html"));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+
+ [Function("negotiate")]
+ public static HttpResponseData Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req,
+ [SignalRConnectionInfoInput(HubName = "serverless")] string connectionInfo)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "application/json");
+ response.WriteString(connectionInfo);
+ return response;
+ }
+
+ [Function("broadcast")]
+ [SignalROutput(HubName = "serverless")]
+ public static async Task<SignalRMessageAction> Broadcast([TimerTrigger("*/5 * * * * *")] TimerInfo timerInfo)
+ {
+ var request = new HttpRequestMessage(HttpMethod.Get, "https://api.github.com/repos/azure/azure-signalr");
+ request.Headers.UserAgent.ParseAdd("Serverless");
+ request.Headers.Add("If-None-Match", Etag);
+ var response = await HttpClient.SendAsync(request);
+ if (response.Headers.Contains("Etag"))
+ {
+ Etag = response.Headers.GetValues("Etag").First();
+ }
+ if (response.StatusCode == HttpStatusCode.OK)
+ {
+ var result = await response.Content.ReadFromJsonAsync<GitResult>();
+ if (result != null)
+ {
+ StarCount = result.StarCount;
+ }
+ }
+ return new SignalRMessageAction("newMessage", new object[] { $"Current star count of https://github.com/Azure/azure-signalr is: {StarCount}" });
+ }
+
+ private class GitResult
+ {
+ [JsonPropertyName("stargazers_count")]
+ public int StarCount { get; set; }
+ }
+ }
+ ```
+
+
+ The code in *Function.cs* has three functions: - `GetHomePage` is used to get a website as client. - `Negotiate` is used by the client to get an access token.
You'll need the Azure Functions Core Tools for this step.
</ItemGroup> ```
+1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
+ ```bash
+ func settings add AzureWebJobsStorage "<storage-connection-string>"
+ ```
+ 1. It's almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings. 1. Confirm the SignalR Service instance was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
You'll need the Azure Functions Core Tools for this step.
func start ```
- After the Azure function is running locally, open `http://localhost:7071/api/index` and you can see the current star count. If you star or unstar in the GitHub, you'll get a star count refreshing every few seconds.
+ After the Azure function is running locally, open `http://localhost:7071/api/index`, and you can see the current star count. If you star or unstar in the GitHub, you'll get a star count refreshing every few seconds.
- > [!NOTE]
- > SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally.
- > If you got the error `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
In this article, you'll use Azure SignalR Service, Azure Functions, and Java to
- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/) - An Azure account with an active subscription. If you don't already have an account, [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). - [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing). Used to run Azure Function apps locally.
-
+ - The required SignalR Service bindings in Java are only supported in Azure Function Core Tools version 2.4.419 (host version 2.0.12332) or above. - To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. However, no knowledge of .NET is required to build Java Azure Function apps.
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
| **groupId** | `com.signalr` | A value that uniquely identifies your project across all projects, following the [package naming rules](https://docs.oracle.com/javase/specs/jls/se6/html/packages.html#7.7) for Java. | | **artifactId** | `java` | A value that is the name of the jar, without a version number. | | **version** | `1.0-SNAPSHOT` | Choose the default value. |
- | **package** | `com.signalr` | A value that is the Java package for the generated function code. Use the default. |
+ | **package** | `com.signalr` | A value that is the Java package for the generated function code. Use the default. |
1. Go to the folder `src/main/java/com/signalr` and copy the following code to *Function.java*: ```java package com.signalr;
-
+ import com.google.gson.Gson; import com.microsoft.azure.functions.ExecutionContext; import com.microsoft.azure.functions.HttpMethod;
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
import com.microsoft.azure.functions.annotation.TimerTrigger; import com.microsoft.azure.functions.signalr.*; import com.microsoft.azure.functions.signalr.annotation.*;
-
+ import org.apache.commons.io.IOUtils;
-
-
++ import java.io.IOException; import java.io.InputStream; import java.net.URI;
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
import java.net.http.HttpResponse.BodyHandlers; import java.nio.charset.StandardCharsets; import java.util.Optional;
-
+ public class Function { private static String Etag = ""; private static String StarCount;
-
+ @FunctionName("index") public HttpResponseMessage run( @HttpTrigger(
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.ANONYMOUS)HttpRequestMessage<Optional<String>> request, final ExecutionContext context) throws IOException {
-
+ InputStream inputStream = getClass().getClassLoader().getResourceAsStream("content/https://docsupdatetracker.net/index.html"); String text = IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()); return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "text/html").body(text).build(); }
-
+ @FunctionName("negotiate") public SignalRConnectionInfo negotiate( @HttpTrigger(
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
@SignalRConnectionInfoInput( name = "connectionInfo", hubName = "serverless") SignalRConnectionInfo connectionInfo) {
-
+ return connectionInfo; }
-
+ @FunctionName("broadcast") @SignalROutput(name = "$return", hubName = "serverless") public SignalRMessage broadcast(
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
GitResult result = gson.fromJson(res.body(), GitResult.class); StarCount = result.stargazers_count; }
-
+ return new SignalRMessage("newMessage", "Current start count of https://github.com/Azure/azure-signalr is:".concat(StarCount)); }
-
+ class GitResult { public String stargazers_count; }
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
| | - main | | | - java | | | | - com
- | | | | | - signalr
+ | | | | | - signalr
| | | | | | - Function.java | | | - resources | | | | - content
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
```html <html>
-
+ <body> <h1>Azure SignalR Serverless Sample</h1> <div id="messages"></div>
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
connection.on('newMessage', (message) => { document.getElementById("messages").innerHTML = message; });
-
+ connection.start() .catch(console.error); </script> </body>
-
+ </html> ```
+1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md).
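   Alternatively, as shown in the other quickstarts in this series, you can point the function app at a real storage account instead of the emulator (the connection string is a placeholder):

   ```bash
   func settings add AzureWebJobsStorage "<storage-connection-string>"
   ```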
+ 1. You're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings. 1. Search for the Azure SignalR instance you deployed earlier using the **Search** box in Azure portal. Select the instance to open it.
Make sure you have Azure Function Core Tools, Java (version 11 in the sample), a
After Azure Function is running locally, go to `http://localhost:7071/api/index` and you'll see the current star count. If you star or "unstar" in the GitHub, you'll get a star count refreshing every few seconds.
- > [!NOTE]
- > SignalR binding needs Azure Storage, but you can use local storage emulator when the Function is running locally.
- > If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
- [!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)] Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Make sure you have Azure Functions Core Tools installed.
```javascript var fs = require('fs').promises
-
+ module.exports = async function (context, req) { const path = context.executionContext.functionDirectory + '/../content/https://docsupdatetracker.net/index.html' try {
Make sure you have Azure Functions Core Tools installed.
] } ```
-
+ 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use a time trigger to broadcast messages periodically.
-
+ ```bash func new -n broadcast -t TimerTrigger ```
-
+ Open *broadcast/function.json* and copy the following code:
-
+ ```json { "bindings": [
Make sure you have Azure Functions Core Tools installed.
] } ```
-
+ Open *broadcast/index.js* and copy the following code:
-
+ ```javascript var https = require('https');
-
+ var etag = ''; var star = 0;
-
+ module.exports = function (context) { var req = https.request("https://api.github.com/repos/azure/azure-signalr", { method: 'GET',
Make sure you have Azure Functions Core Tools installed.
if (res.headers['etag']) { etag = res.headers['etag'] }
-
+ var body = "";
-
+ res.on('data', data => { body += data; });
Make sure you have Azure Functions Core Tools installed.
var jbody = JSON.parse(body); star = jbody['stargazers_count']; }
-
+ context.bindings.signalRMessages = [{ "target": "newMessage", "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
Make sure you have Azure Functions Core Tools installed.
```html <html>
-
+ <body> <h1>Azure SignalR Serverless Sample</h1> <div id="messages"></div>
Make sure you have Azure Functions Core Tools installed.
connection.on('newMessage', (message) => { document.getElementById("messages").innerHTML = message; });
-
+ connection.start() .catch(console.error); </script> </body>
-
+ </html> ```
+1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
+ ```bash
+ func settings add AzureWebJobsStorage "<storage-connection-string>"
+ ```
+ 4. You're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings. 1. In the Azure portal, find the SignalR instance you deployed earlier by typing its name in the **Search** box. Select the instance to open it.
Make sure you have Azure Functions Core Tools installed.
![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png) 1. Select **Keys** to view the connection strings for the SignalR Service instance.
-
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png) 1. Copy the primary connection string. And execute the command below.
-
+ ```bash func settings add AzureSignalRConnectionString "<signalr-connection-string>" ```
-
+ 5. Run the Azure function in local host: ```bash
Make sure you have Azure Functions Core Tools installed.
After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` to see the current star count. If you star or "unstar" in GitHub, you'll see the star count refresh every few seconds.
- > [!NOTE]
- > SignalR binding needs Azure Storage, but you can use local storage emulator when the function is running locally.
- > If you got an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp)
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
This quickstart can be run on macOS, Windows, or Linux.
```javascript import os
-
+ import azure.functions as func
-
+ def main(req: func.HttpRequest) -> func.HttpResponse: f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/https://docsupdatetracker.net/index.html') return func.HttpResponse(f.read(), mimetype='text/html')
This quickstart can be run on macOS, Windows, or Linux.
```python import azure.functions as func
-
-
++ def main(req: func.HttpRequest, connectionInfo) -> func.HttpResponse: return func.HttpResponse(connectionInfo) ``` 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use time trigger to broadcast messages periodically.
-
+ ```bash func new -n broadcast -t TimerTrigger # install requests
This quickstart can be run on macOS, Windows, or Linux.
] } ```
-
+ Open *broadcast/\__init\__.py* and copy the following code:
-
+ ```python import requests import json
-
+ import azure.functions as func
-
+ etag = '' start_count = 0
-
+ def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None: global etag global start_count
This quickstart can be run on macOS, Windows, or Linux.
res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers) if res.headers.get('ETag'): etag = res.headers.get('ETag')
-
+ if res.status_code == 200: jres = res.json() start_count = jres['stargazers_count']
-
+ signalRMessages.set(json.dumps({ 'target': 'newMessage', 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(start_count) ]
This quickstart can be run on macOS, Windows, or Linux.
```html <html>
-
+ <body> <h1>Azure SignalR Serverless Sample</h1> <div id="messages"></div>
This quickstart can be run on macOS, Windows, or Linux.
connection.on('newMessage', (message) => { document.getElementById("messages").innerHTML = message; });
-
+ connection.start() .catch(console.error); </script> </body>
-
+ </html> ```
+1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
+ ```bash
+ func settings add AzureWebJobsStorage "<storage-connection-string>"
+ ```
+ 4. We're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings. 1. In the Azure portal, search for the SignalR Service instance you deployed earlier. Select the instance to open it.
This quickstart can be run on macOS, Windows, or Linux.
After the Azure Function is running locally, go to `http://localhost:7071/api/index` and you'll see the current star count. If you star or unstar in GitHub, you'll get a refreshed star count every few seconds.
- > [!NOTE]
- > SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally.
- > You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md) if you got an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`
- [!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)] Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md
keywords: deploy SQL Edge
In this quickstart, you'll learn how to train a model, convert it to ONNX, deploy it to [Azure SQL Edge](onnx-overview.md), and then run native PREDICT on data using the uploaded ONNX model.
-This quickstart is based on **scikit-learn** and uses the [Boston Housing dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html).
+This quickstart is based on **scikit-learn** and uses the [Boston Housing dataset](https://scikit-learn.org/0.24/modules/generated/sklearn.datasets.load_boston.html).
## Before you begin
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Previously updated : 12/12/2022 Last updated : 01/06/2023 # Language support in Azure Video Indexer This article provides a comprehensive list of language support by service features in Azure Video Indexer. For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
-The list below contains the source languages for transcription that are supported by the Video Indexer API.
+Some languages are supported only through the API (see [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages)) and not through the Video Indexer website or widgets. To make sure a language is supported for search, transcription, or translation by the Azure Video Indexer website and widgets, see the [front end language
+support table](#language-support-in-front-end-experiences) further below.
-> [!NOTE]
-> Some languages are supported only through the API and not through the Video Indexer website or widgets.
->
-> To make sure a language is supported for search, transcription, or translation by the Azure Video Indexer website and widgets, see the [frontend language
-> support table](#language-support-in-frontend-experiences) further below.
+## API language support
+
+The API returns a list of supported languages with the following values:
-## General language support
+```json
+"name": "Language",
+"languageCode": "Code",
+"isRightToLeft": true/false,
+"isSourceLanguage": true/false,
+"isAutoDetect": true/false
+```
-This section describes languages supported by Azure Video Indexer API.
+Note the following about these values (see the filtering sketch after this list):
-- Transcription (source language of the video/audio file)-- Language identification (LID)-- Multi-language identification (MLID)-- Translation
+- Supported source language:
- The following insights are translated, otherwise will remain in English:
+ If `isSourceLanguage` is `false`, the language is supported for translation only.
+ If `isSourceLanguage` is `true`, the language is supported as source for transcription, translation, and search.
+- Language identification (auto detection):
+
+  If `isAutoDetect` is set to `true`, the language is supported for language identification (LID) and multi-language identification (MLID).
+- The following insights are translated, otherwise will remain in English:
- - Transcript
- - Keywords
- - Topics
- - Labels
- - Frame patterns (Only to Hebrew as of now)
-- Language customization-
-| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (language model) |
-|::|:--:|:--:|:--:|:--:|:-:|::|
-| Afrikaans | `af-ZA` | | | | Γ£ö | |
-| Arabic (Israel) | `ar-IL` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Jordan) | `ar-JO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Kuwait) | `ar-KW` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Lebanon) | `ar-LB` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Oman) | `ar-OM` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Palestinian Authority) | `ar-PS` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Qatar) | `ar-QA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Saudi Arabia) | `ar-SA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (United Arab Emirates) | `ar-AE` | Γ£ö | Γ£ö| Γ£ö | Γ£ö | Γ£ö |
-| Arabic Egypt | `ar-EG` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic Syrian Arab Republic | `ar-SY` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Bangla | `bn-BD` | | | | Γ£ö | |
-| Bosnian | `bs-Latn` | | | | Γ£ö | |
-| Bulgarian | `bg-BG` | | | | Γ£ö | |
-| Catalan | `ca-ES` | | | | Γ£ö | |
-| Chinese (Cantonese Traditional) | `zh-HK` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Chinese (Simplified) | `zh-Hans` | Γ£ö | Γ£ö | | Γ£ö | Γ£ö |
-| Chinese (Simplified) | `zh-CK` | Γ£ö | Γ£ö | | Γ£ö | Γ£ö |
-| Chinese (Traditional) | `zh-Hant` | | | | Γ£ö | |
-| Croatian | `hr-HR` | | | | Γ£ö | |
-| Czech | `cs-CZ` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Danish | `da-DK` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Dutch | `nl-NL` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| English Australia | `en-AU` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| English United Kingdom | `en-GB` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| English United States | `en-US` | Γ£ö |Γ£ö | Γ£ö| Γ£ö | Γ£ö |
-| Estonian | `et-EE` | | | | Γ£ö | |
-| Fijian | `en-FJ` | | | | Γ£ö | |
-| Filipino | `fil-PH` | | | | Γ£ö | |
-| Finnish | `fi-FI` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| French | `fr-FR` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| French (Canada) | `fr-CA` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| German | `de-DE` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Greek | `el-GR` | | | | Γ£ö | |
-| Haitian | `fr-HT` | | | | Γ£ö | |
-| Hebrew | `he-IL` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Hindi | `hi-IN` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Hungarian | `hu-HU` | | | | Γ£ö | |
-| Indonesian | `id-ID` | | | | Γ£ö | |
-| Italian | `it-IT` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Japanese | `ja-JP` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Kiswahili | `sw-KE` | | | | Γ£ö | |
-| Korean | `ko-KR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Latvian | `lv-LV` | | | | Γ£ö | |
-| Lithuanian | `lt-LT` | | | | Γ£ö | |
-| Malagasy | `mg-MG` | | | | Γ£ö | |
-| Malay | `ms-MY` | | | | Γ£ö | |
-| Maltese | `mt-MT` | | | | Γ£ö | |
-| Norwegian | `nb-NO` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Persian | `fa-IR` | Γ£ö | | | Γ£ö | Γ£ö |
-| Polish | `pl-PL` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Portuguese | `pt-BR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Portuguese (Portugal) | `pt-PT` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Romanian | `ro-RO` | | | | Γ£ö | |
-| Russian | `ru-RU` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Samoan | `en-WS` | | | | Γ£ö | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | Γ£ö | |
-| Serbian (Latin) | `sr-Latn-RS` | | | | Γ£ö | |
-| Slovak | `sk-SK` | | | | Γ£ö | |
-| Slovenian | `sl-SI` | | | | Γ£ö | |
-| Spanish | `es-ES` | Γ£ö | Γ£ö| Γ£ö| Γ£ö | Γ£ö |
-| Spanish (Mexico) | `es-MX` | Γ£ö | | | Γ£ö | Γ£ö |
-| Swedish | `sv-SE` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Tamil | `ta-IN` | | | | Γ£ö | |
-| Thai | `th-TH` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Tongan | `to-TO` | | | | Γ£ö | |
-| Turkish | `tr-TR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Ukrainian | `uk-UA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | |
-| Urdu | `ur-PK` | | | | Γ£ö | |
-| Vietnamese | `vi-VN` | Γ£ö |Γ£ö | Γ£ö | Γ£ö | |
+ - Transcript
+ - Keywords
+ - Topics
+ - Labels
+ - Frame patterns (Only to Hebrew as of now)
+
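As a quick way to work with these values, here's a minimal sketch that assumes the Get Supported Languages response is saved locally as `languages.json` (a hypothetical file name) and is an array of objects with the fields shown above; it lists only the languages that can be used as a transcription source:

```bash
# Keep only languages where isSourceLanguage is true, returning name and languageCode
jq '[.[] | select(.isSourceLanguage == true) | {name, languageCode}]' languages.json
```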
+| **Language** | **Code** | **Supported source language** | **Language identification** | **Customization** (language model) |
+|:--:|:--:|:--:|:--:|:--:|
+| Afrikaans | `af-ZA` | | ✔ | |
+| Arabic (Israel) | `ar-IL` | ✔ | | ✔ |
+| Arabic (Iraq) | `ar-IQ` | ✔ | ✔ | |
+| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ |
+| Arabic (Kuwait) | `ar-KW` | ✔ | ✔ | ✔ |
+| Arabic (Lebanon) | `ar-LB` | ✔ | | ✔ |
+| Arabic (Oman) | `ar-OM` | ✔ | ✔ | ✔ |
+| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | ✔ |
+| Arabic (Qatar) | `ar-QA` | ✔ | ✔ | ✔ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔ | ✔ | ✔ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔ | ✔ | ✔ |
+| Arabic Egypt | `ar-EG` | ✔ | ✔ | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔ | ✔ |
+| Armenian | `hy-AM` | ✔ | | |
+| Bangla | `bn-BD` | | ✔ | |
+| Bosnian | `bs-Latn` | | ✔ | |
+| Bulgarian | `bg-BG` | ✔ | ✔ | |
+| Catalan | `ca-ES` | ✔ | ✔ | |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | `zh-CK` | ✔ | ✔ | ✔ |
+| Chinese (Traditional) | `zh-Hant` | | ✔ | |
+| Croatian | `hr-HR` | ✔ | ✔ | |
+| Czech | `cs-CZ` | ✔ | ✔ | ✔ |
+| Danish | `da-DK` | ✔ | ✔ | ✔ |
+| Dutch | `nl-NL` | ✔ | ✔ | ✔ |
+| English Australia | `en-AU` | ✔ | ✔ | ✔ |
+| English United Kingdom | `en-GB` | ✔ | ✔ | ✔ |
+| English United States | `en-US` | ✔ | ✔ | ✔ |
+| Estonian | `et-EE` | ✔ | ✔ | |
+| Fijian | `en-FJ` | | ✔ | |
+| Filipino | `fil-PH` | | ✔ | |
+| Finnish | `fi-FI` | ✔ | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔ | ✔ |
+| French (Canada) | `fr-CA` | ✔ | ✔ | ✔ |
+| German | `de-DE` | ✔ | ✔ | ✔ |
+| Greek | `el-GR` | ✔ | ✔ | |
+| Gujarati | `gu-IN` | ✔ | ✔ | |
+| Haitian | `fr-HT` | | ✔ | |
+| Hebrew | `he-IL` | ✔ | ✔ | ✔ |
+| Hindi | `hi-IN` | ✔ | ✔ | ✔ |
+| Hungarian | `hu-HU` | | ✔ | |
+| Icelandic | `is-IS` | ✔ | | |
+| Indonesian | `id-ID` | | ✔ | |
+| Irish | `ga-IE` | ✔ | ✔ | |
+| Italian | `it-IT` | ✔ | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔ | ✔ |
+| Kannada | `kn-IN` | ✔ | ✔ | |
+| Kiswahili | `sw-KE` | | ✔ | |
+| Korean | `ko-KR` | ✔ | ✔ | ✔ |
+| Latvian | `lv-LV` | ✔ | ✔ | |
+| Lithuanian | `lt-LT` | | ✔ | |
+| Malagasy | `mg-MG` | | ✔ | |
+| Malay | `ms-MY` | ✔ | | |
+| Malayalam | `ml-IN` | ✔ | ✔ | |
+| Maltese | `mt-MT` | | ✔ | |
+| Norwegian | `nb-NO` | ✔ | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | ✔ |
+| Polish | `pl-PL` | ✔ | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔ | ✔ |
+| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ |
+| Romanian | `ro-RO` | ✔ | ✔ | |
+| Russian | `ru-RU` | ✔ | ✔ | ✔ |
+| Samoan | `en-WS` | | ✔ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ | |
+| Serbian (Latin) | `sr-Latn-RS` | | ✔ | |
+| Slovak | `sk-SK` | ✔ | ✔ | |
+| Slovenian | `sl-SI` | ✔ | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | ✔ | ✔ |
+| Tamil | `ta-IN` | ✔ | ✔ | |
+| Telugu | `te-IN` | ✔ | ✔ | |
+| Thai | `th-TH` | ✔ | ✔ | ✔ |
+| Tongan | `to-TO` | | ✔ | |
+| Turkish | `tr-TR` | ✔ | ✔ | ✔ |
+| Ukrainian | `uk-UA` | ✔ | ✔ | |
+| Urdu | `ur-PK` | | | |
+| Vietnamese | `vi-VN` | ✔ | ✔ | |
**Default languages supported by Language identification (LID)**: German (de-DE), English United States (en-US), Spanish (es-ES), French (fr-FR), Italian (it-IT), Japanese (ja-JP), Portuguese (pt-BR), Russian (ru-RU), and Chinese (Simplified) (zh-Hans).
This section describes languages supported by Azure Video Indexer API.
### Change default languages supported by LID and MLID
-You can specify to use other supported languages (listed in the table above) as default languages, when [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with an API and passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
+When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) through an API, you can specify to use other supported languages (listed in the table above) for LID and MLID by passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
> [!NOTE] > Language identification (LID) and Multi-language identification (MLID) compares speech at the language level, such as English and German. > Do not include multiple locales of the same language in the custom languages list.
-## Language support in frontend experiences
+## Language support in front end experiences
-The following table describes language support in the Azure Video Indexer frontend experiences.
+The following table describes language support in the Azure Video Indexer front end experiences.
* website - the website column lists supported languages for the [Azure Video Indexer website](https://aka.ms/vi-portal-link). For more information, see [Get started](video-indexer-get-started.md).
* widgets - the [widgets](video-indexer-embed-widgets.md) column lists supported languages for translating the index file. For more information, see [Get started](video-indexer-embed-widgets.md).
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
To stay up-to-date with the most recent Azure Video Indexer developments, this a
[!INCLUDE [announcement](./includes/deprecation-announcement.md)]
+## January 2023
+
+### Language support
+
+* New languages are now supported: Irish, Bulgarian, Catalan, Greek, Estonian, Croatian, Latvian, Romanian, Slovak, Slovenian, Telugu, Malayalam, Kannada, Icelandic, Armenian, Gujarati, Malay, and Tamil.
+* Use an API to get all supported languages: [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages).
+
+For more information, see [supported languages](language-support.md).
+ ## November 2022 ### Speakers' names can now be edited from the Azure Video Indexer website
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 11/07/2022 Last updated : 01/09/2023
Azure VMware Solution currently supports the following regions:
There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes. - Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.-- For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).-- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.-- Create multiple datastores of 4-TB size for better performance. The default limit is 64 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](../availability-zones/az-overview.md#availability-zones).
+- For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
+- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level.
+- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones).
> [!IMPORTANT] >Changing the Azure NetApp Files volumes tier after creating the datastore will result in unexpected behavior in portal and API due to metadata mismatch. Set your performance tier of the Azure NetApp Files volume when creating the datastore. If you need to change tier during run time, detach the datastore, change the performance tier of the volume and attach the datastore. We are working on improvements to make this seamless.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. Previously updated : 12/22/2022 Last updated : 1/4/2023
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
- Runs on Intel® Xeon® Gold 6240 Processor with 36 Cores and a Base Frequency of 2.6Ghz and Turbo of 3.9Ghz. - 768 GB of DRAM Memory-- 19.2 TB Storage Capacity with all NVMe based SSDs (With Random Read of 636500 IOPS and Random Write of 223300 IOPS)
+- 19.2 TB Storage Capacity with all NVMe based SSDs
- 1.5TB of NVMe Cache **AV52 key highlights for Memory and Storage optimized Workloads:**
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
- Runs on Intel® Xeon® Platinum 8270 with 52 Cores and a Base Frequency of 2.7Ghz and Turbo of 4.0Ghz. - 1.5 TB of DRAM Memory-- 38.4TB storage capacity with all NVMe based SSDs (With Random Read of 636500 IOPS and Random Write of 223300 IOPS)
+- 38.4TB storage capacity with all NVMe based SSDs
- 1.5TB of NVMe Cache For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and see the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&regions=all).
backup Automation Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/automation-backup.md
Title: Automation in Azure Backup description: Provides a summary of automation capabilities offered by Azure Backup. Previously updated : 11/26/2021- Last updated : 09/15/2022++ -+ # Automation in Azure Backup
This section provides a few common automation use cases that you might encounter
### Configure backups
-As a backup admin, you need to deal with new infrastructure getting added periodically, and ensure they are protected as per the agreed requirements. The automation clients, such as PowerShell/CLI, help to fetch all VM details, check the backup status of each of them, and then take appropriate action for unprotected VMs.
+As a backup admin, you need to deal with new infrastructure that's added periodically, and ensure that it's protected per the agreed requirements. The automation clients, such as PowerShell/CLI, help you fetch all VM details, check the backup status of each VM, and then take appropriate action for unprotected VMs.
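For example, here's a minimal Azure CLI sketch of this pattern, which reports the Recovery Services vault protecting each VM in the current subscription (empty output for a VM means it's unprotected and needs action):

```bash
# For each VM, print its ID and the vault that protects it, if any
for vm_id in $(az vm list --query "[].id" --output tsv); do
  echo "$vm_id"
  az backup protection check-vm --vm-id "$vm_id" --output tsv
done
```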
However, this must be performant at scale. You also need to schedule these operations periodically and monitor each run. To ease the automation operations, Azure Backup now uses Azure Policy and provides [built-in backup-specific Azure Policies](backup-center-govern-environment.md#azure-policies-for-backup) to govern the backup estate.
backup Azure File Share Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-backup-overview.md
Title: About Azure file share backup description: Learn how to back up Azure file shares in the Recovery Services vault Previously updated : 12/10/2021- Last updated : 03/08/2022+ -++ # About Azure file share backup
-Azure file share backup is a native, cloud based backup solution that protects your data in the cloud and eliminates additional maintenance overheads involved in on-premises backup solutions. The Azure Backup service smoothly integrates with Azure File Sync, and allows you to centralize your file share data as well as your backups. This simple, reliable, and secure solution enables you to configure protection for your enterprise file shares in a few simple steps with an assurance that you can recover your data in case of any accidental deletion.
+Azure file share backup is a native, cloud based backup solution that protects your data in the cloud and eliminates additional maintenance overheads involved in on-premises backup solutions. The Azure Backup service smoothly integrates with Azure File Sync, and allows you to centralize your file share data as well as your backups. This simple, reliable, and secure solution enables you to configure protection for your enterprise file shares in a few simple steps with an assurance that you can recover your data if any accidental deletion.
## Key benefits of Azure file share backup
There are two costs associated with Azure file share backup solution:
1. **Snapshot storage cost**: Storage charges incurred for snapshots are billed along with Azure Files usage according to the pricing details mentioned [here](https://azure.microsoft.com/pricing/details/storage/files/)
-2. **Protected Instance fee**: Starting September 1, 2020, customers will be charged a protected instance fee according to the pricing details mentioned [here](https://azure.microsoft.com/pricing/details/backup/). The protected instance fee depends on the total size of protected file shares in a storage account.
+2. **Protected Instance fee**: Starting from September 1, 2020, you're charged a protected instance fee as per the [pricing details](https://azure.microsoft.com/pricing/details/backup/). The protected instance fee depends on the total size of protected file shares in a storage account.
To get detailed estimates for backing up Azure file shares, you can download the detailed [Azure Backup pricing estimator](https://aka.ms/AzureBackupCostEstimates).
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 12/23/2022 Last updated : 01/05/2023
You can also use the following FQDNs to allow access to the required services fr
## Enable Cross Region Restore
-At the Recovery Services vault, you can enable Cross Region Restore. You must turn on Cross Region Restore before you configure and protect backups on your HANA databases. Learn about [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore).
+At the Recovery Services vault, you can enable Cross Region Restore. Learn [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore).
[Learn more](./backup-azure-recovery-services-vault-overview.md) about Cross Region Restore.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding Azu
| Enable backup from file share blade | Backup Contributor | Recovery Services vault | | | Storage account Contributor | Storage account Resource | | | Contributor | Subscription |
-| On-demand backup of VM | Backup Operator | Recovery Services vault |
+| On-demand backup of file share | Backup Operator | Recovery Services vault |
| Restore File share | Backup Operator | Recovery Services vault | | | Storage Account Backup Contributor | Storage account resources where restore source and Target file shares are present | | Restore Individual Files | Backup Operator | Recovery Services vault |
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
description: In this article, learn about selective disk backup and restore usin
Last updated 11/10/2021 -+ -+ # Selective disk backup and restore for Azure virtual machines
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion needs to be able to communicate with certain internal endpoints to
* blob.core.windows.net * core.windows.net * vaultcore.windows.net
-* vault.azure.com
+* vault.azure.net
* azure.com You may use a private DNS zone ending with one of the names listed above (ex: privatelink.blob.core.windows.net).
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
Last updated 09/01/2022
# Migrate pools without public IP addresses (classic) in Batch
-The Azure Batch feature pools without public IP addresses (classic) will be retired on *March 31, 2023*. Learn how to migrate eligible pools to simplified compute node communication (preview) pools without public IP addresses. You must opt in to migrate your Batch pools.
+The Azure Batch feature pools without public IP addresses (classic) will be retired on *March 31, 2023*. Learn how to migrate eligible pools to simplified compute node communication pools without public IP addresses. You must opt in to migrate your Batch pools.
## About the feature
When the Batch pools without public IP addresses (classic) feature retires on Ma
:::image type="content" source="media/certificates/scale-down-pool.png" alt-text="Screenshot that shows how to scale down a pool.":::
-1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
+1. Scale out the pool again. The pool is then automatically migrated to the new version.
:::image type="content" source="media/certificates/scale-out-pool.png" alt-text="Screenshot that shows how to scale out a pool.":::
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
batch Private Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/private-connectivity.md
Title: Use private endpoints with Azure Batch accounts description: Learn how to connect privately to an Azure Batch account by using private endpoints. Previously updated : 05/26/2022 Last updated : 12/16/2022
Batch account resource has two endpoints supported to access with private endpoi
- Account endpoint (sub-resource: **batchAccount**): this endpoint is used for accessing [Batch Service REST API](/rest/api/batchservice/) (data plane), for example managing pools, compute nodes, jobs, tasks, etc. -- Node management endpoint (sub-resource: **nodeManagement**): used by Batch pool nodes to access Batch node management service. This endpoint is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md). This feature is in preview.-
-> [!IMPORTANT]
-> - This preview sub-resource is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+- Node management endpoint (sub-resource: **nodeManagement**): used by Batch pool nodes to access Batch node management service. This endpoint is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md).
:::image type="content" source="media/private-connectivity/private-endpoint-sub-resources.png" alt-text="Diagram that shows sub-resources for Batch private endpoints.":::
+> [!TIP]
+> You can create a private endpoint for either or both sub-resources within your virtual network, depending on how you actually use your Batch account. For example, if you run the Batch pool within the virtual network but call the Batch service REST API from somewhere else, you only need to create the **nodeManagement** private endpoint in the virtual network.
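+
+If you prefer to script the private endpoint creation instead of following the portal steps in the next section, here's a minimal Azure PowerShell sketch. It assumes the Az.Network and Az.Resources modules and a signed-in session; all resource names are placeholders:
+
+```powershell
+# Rough sketch: create a nodeManagement private endpoint for a Batch account.
+$vnet   = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet"
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" -VirtualNetwork $vnet
+
+# Resource ID of the Batch account that the private endpoint targets.
+$batchAccountId = (Get-AzResource -ResourceGroupName "myresourcegroup" -Name "mybatchaccount" `
+    -ResourceType "Microsoft.Batch/batchAccounts").ResourceId
+
+$connection = New-AzPrivateLinkServiceConnection -Name "batch-node-mgmt" `
+    -PrivateLinkServiceId $batchAccountId -GroupId "nodeManagement"
+
+New-AzPrivateEndpoint -ResourceGroupName "myresourcegroup" -Name "mybatch-nodemgmt-pe" `
+    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $connection
+```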
+ ## Azure portal Use the following steps to create a private endpoint with your Batch account using the Azure portal:
Use the following steps to create a private endpoint with your Batch account usi
## Use the private endpoint
-After the private endpoint is provisioned, you can access the Batch account from within the same virtual network using the private endpoint.
+After the private endpoint is provisioned, you can access the Batch account using the private IP address within the virtual network:
- Private endpoint for **batchAccount**: can access Batch account data plane to manage pools/jobs/tasks. - Private endpoint for **nodeManagement**: Batch pool's compute nodes can connect to and be managed by Batch node management service.
+> [!TIP]
+> It's recommended that you also disable [public network access](public-network-access.md) for your Batch account when you're using private endpoints, which restricts access to the private network only.
+ > [!IMPORTANT]
-> If [public network access](public-network-access.md) is disabled with Batch account, performing account operations (for example pools, jobs) outside of the virtual network where the private endpoint is provisioned will result in an "AuthorizationFailure" message for Batch account in the Azure portal.
+> If public network access is disabled for the Batch account, performing account operations (for example on pools or jobs) outside of the virtual network where the private endpoint is provisioned results in an "AuthorizationFailure" message for the Batch account in the Azure portal.
To view the IP addresses for the private endpoint from the Azure portal:
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses (preview)
+ Title: Create a simplified node communication pool without public IP addresses
description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Previously updated : 11/18/2022 Last updated : 12/16/2022
-# Create a simplified node communication pool without public IP addresses (preview)
+# Create a simplified node communication pool without public IP addresses
> [!NOTE] > This replaces the previous preview version of [Azure Batch pool without public IP addresses](batch-pool-no-public-ip-address.md). This new version requires [using simplified compute node communication](simplified-compute-node-communication.md). > [!IMPORTANT]
-> - Support for pools without public IP addresses in Azure Batch is currently in public preview for [selected regions](simplified-compute-node-communication.md#supported-regions).
-> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Support for pools without public IP addresses in Azure Batch is currently available for [select regions](simplified-compute-node-communication.md#supported-regions).
When you create an Azure Batch pool, you can provision the virtual machine (VM) configuration pool without a public IP address. This article explains how to set up a Batch pool without public IP addresses.
To restrict access to these nodes and reduce the discoverability of these nodes
## Prerequisites > [!IMPORTANT]
-> The prerequisites have changed from the previous version of this preview. Make sure to review each item for changes before proceeding.
+> The prerequisites have changed from the previous preview version of this feature. Make sure to review each item for changes before proceeding.
- Use simplified compute node communication. For more information, see [Use simplified compute node communication](simplified-compute-node-communication.md).
To restrict access to these nodes and reduce the discoverability of these nodes
- The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
- - If you plan to use a [private endpoint with Batch accounts](private-connectivity.md), you must disable private endpoint network policies. Run the following Azure CLI command:
-
-```azurecli-interactive
-az network vnet subnet update \
- --vnet-name <vnetname> \
- -n <subnetname> \
- --resource-group <resourcegroup> \
- --disable-private-endpoint-network-policies
-```
+ - If you plan to use a private endpoint, and your virtual network has [private endpoint network policy](../private-link/disable-private-endpoint-network-policy.md) enabled, make sure that inbound connections on TCP/443 to the subnet hosting the private endpoint are allowed from the Batch pool's subnet.
- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. Choose one of the following options to allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)):
- - Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This solution is the preferred method.
+ - Use [**nodeManagement private endpoint**](private-connectivity.md) with Batch accounts, which provides private access to Batch node management service from the virtual network. This solution is the preferred method.
- Alternatively, provide your own internet outbound access support (see [Outbound access to the internet](#outbound-access-to-the-internet)). > [!IMPORTANT]
-> There are two sub-resources for private endpoints with Batch accounts. Please use the **nodeManagement** private endpoint for the Batch pool without public IP addresses.
+> There are two sub-resources for private endpoints with Batch accounts. Use the **nodeManagement** private endpoint for the Batch pool without public IP addresses. For more information, see [Use private endpoints with Azure Batch accounts](private-connectivity.md).
## Current limitations
az network vnet subnet update \
1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. 1. Select the correct **Publisher/Offer/Sku** of your image. 1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**.
-1. For **Node communication mode**, select **simplified** under Optional Settings.
+1. For **Node communication mode**, select **Simplified** under Optional Settings.
1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you're creating. 1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
-The following screenshot shows the elements that are required to be modified to enable a pool without public
-IP addresses as specified above.
+The following screenshot shows the elements that you need to modify to create a pool without public
+IP addresses.
![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/simplified-compute-node-communication/add-pool-simplified-mode-no-public-ip.png) ## Use the Batch REST API to create a pool without public IP addresses
-The example below shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool that uses public IP addresses.
+The following example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool without public IP addresses.
### REST API URI
If you're familiar with using ARM templates, select the **Deploy to Azure** butt
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.batch%2Fbatch-pool-no-public-ip%2Fazuredeploy.json) > [!NOTE]
-> If the private endpoint deployment failed due to invalid groupId "nodeManagement", please check if the region is in the supported list, and your pool is using [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region, specify `simplified` node communiction mode for the pool, and then retry the deployment.
+> If the private endpoint deployment fails due to an invalid groupId "nodeManagement", check whether the region is in the supported list for [Simplified compute node communication](simplified-compute-node-communication.md). Choose the right region, and then retry the deployment.
## Outbound access to the internet
Another way to provide outbound connectivity is to use a user-defined route (UDR
If compute nodes run into unusable state in a Batch pool without public IP addresses, the first and most important check is to verify the outbound access to the Batch node management service. It must be configured correctly so that compute nodes are able to connect to service from your virtual network.
-If you're using **nodeManagement** private endpoint:
+#### Using **nodeManagement** private endpoint
+
+If you created a node management private endpoint in the virtual network for your Batch account:
-- Check if the private endpoint is in provisioning succeeded state, and also in **Approved** status.-- Check if the DNS configuration is set up correctly for the node management endpoint of your Batch account. You can confirm it by running `nslookup <nodeManagementEndpoint>` from within your virtual network, and the DNS name should be resolved to the private endpoint IP address.-- Run TCP ping with the node management endpoint using default HTTPS port (443). This probe can tell if the private link connection is working as expected.
+- Check if the private endpoint is created in the right virtual network, is in the **Succeeded** provisioning state, and has an **Approved** connection status.
+- Check if the DNS configuration is set up correctly for the node management endpoint of your Batch account:
+ - If your private endpoint is created with automatic private DNS zone integration, check that the DNS A record is configured correctly in the private DNS zone `privatelink.batch.azure.com`, and that the zone is linked to your virtual network.
+ - If you're using your own DNS solution, make sure the DNS record for your Batch node management endpoint is configured correctly and points to the private endpoint IP address.
+- Check the DNS resolution for the [Batch node management endpoint](batch-account-create-portal.md#view-batch-account-properties) of your account. You can confirm it by running `nslookup <nodeManagementEndpoint>` from within your virtual network; the DNS name should resolve to the private endpoint IP address.
+- If your virtual network has [private endpoint network policy](../private-link/disable-private-endpoint-network-policy.md) enabled, check the NSG and UDR for the subnets of both the Batch pool and the private endpoint. Inbound connections on TCP/443 to the subnet hosting the private endpoint must be allowed from the Batch pool's subnet.
+- From the Batch pool's subnet, run a TCP ping to the node management endpoint using the default HTTPS port (443). This probe can tell you whether the private link connection is working as expected.
``` # Windows
Test-NetConnection -ComputerName <nodeManagementEndpoint> -Port 443
# Linux
nc -v <nodeManagementEndpoint> 443 ```
-> [!TIP]
-> You can get the node management endpoint from your [Batch account's properties](batch-account-create-portal.md#view-batch-account-properties).
If the TCP ping fails (for example, it times out), it's typically an issue with the private link connection, and you can raise an Azure support ticket for the private endpoint resource. Otherwise, troubleshoot the unusable nodes as you would for a normal Batch pool, and raise a support ticket for your Batch account if needed.
-If you're using your own internet outbound solution instead of private endpoint, run the same TCP ping with node management endpoint as shown above. If it's not working, check if your outbound access is configured correctly by following detailed requirements for [simplified compute node communication](simplified-compute-node-communication.md).
+#### Using your own internet outbound solution
+
+If you're using your own internet outbound solution instead of a private endpoint, run the same TCP ping to the node management endpoint. If it doesn't work, check that your outbound access is configured correctly by following the detailed requirements for [simplified compute node communication](simplified-compute-node-communication.md).
### Connect to compute nodes
There's no internet inbound access to compute nodes in the Batch pool without pu
- Use jumpbox machine inside the virtual network, then connect to your compute nodes from there. - Or, try using other remote connection solutions like [Azure Bastion](../bastion/bastion-overview.md):
- - Create Bastion in the virtual network with [IP based connection](../bastion/connect-ip-address.md) enabled.
- - Use Bastion to connect to the compute node using its IP address.
+ - Create Bastion in the virtual network with [IP based connection](../bastion/connect-ip-address.md) enabled.
+ - Use Bastion to connect to the compute node using its IP address.
You can follow the guide [Connect to compute nodes](error-handling.md#connect-to-compute-nodes) to get user credential and IP address for the target compute node in your Batch pool.
For existing pools that use the [previous preview version of Azure Batch No Publ
1. Create a [private endpoint for Batch node management](private-connectivity.md) in the virtual network. 1. Update the pool's node communication mode to [simplified](simplified-compute-node-communication.md). 1. Scale down the pool to zero nodes.
-1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
+1. Scale out the pool again. The pool is then automatically migrated to the new version.
## Next steps
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-run-python-batch-azure-data-factory.md
In this section, you'll create and validate a pipeline using your Python script.
1. In the **Azure Batch** tab, add the **Batch Account** that was created in the previous steps and **Test connection** to ensure that it is successful. ![In the Azure Batch tab, add the Batch Account that was created in the previous steps, then test connection](./media/run-python-batch-azure-data-factory/integrate-pipeline-with-azure-batch.png) 1. In the **Settings** tab:
- 1. Set the **Command** as `python main.py`.
+ 1. Set the **Command** as `cmd /C python main.py`.
1. For the **Resource Linked Service**, add the storage account that was created in the previous steps. Test the connection to ensure it is successful. 1. In the **Folder Path**, select the name of the **Azure Blob Storage** container that contains the Python script and the associated inputs. This will download the selected files from the container to the pool node instances before the execution of the Python script.
cloud-shell Cloud Shell Predictive Intellisense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-predictive-intellisense.md
+
+ Title: Predictive IntelliSense in Azure Cloud Shell
+description: Azure Cloud Shell uses Predictive IntelliSense
+
+documentationcenter: ''
+++++
+ vm-linux
+ Last updated : 10/11/2022+++
+# Predictive IntelliSense in Azure Cloud Shell
+
+Beginning in January 2023, Azure Cloud Shell uses the version of [PSReadLine][01] that has Predictive
+IntelliSense enabled by default. We've also installed and enabled the Azure PowerShell predictor
+[Az.Tools.Predictor][02] module. Together, these changes enhance the command-line experience by
+providing suggestions that help new and experienced users of Azure discover, edit, and execute
+complete commands.
+
+## What is Predictive IntelliSense?
+
+Predictive IntelliSense is a feature of the **PSReadLine** module. It provides suggestions for
+complete commands based on items from your history and from predictor modules, like
+**Az.Tools.Predictor**.
+
+Prediction suggestions appear as colored text following the user's cursor. The following image shows
+the default `InlineView` of the suggestion. Pressing the <kbd>RightArrow</kbd> key accepts an inline
+suggestion. After accepting the suggestion, you can edit the command line before hitting
+<kbd>Enter</kbd> to run the command.
+
+![Suggestion in InlineView mode](./media/predictive-intellisense/cloud-shell-inline.png)
+
+PSReadLine also offers a `ListView` presentation of the suggestions.
+
+![Suggestions in ListView mode](./media/predictive-intellisense/cloud-shell-list-view.png)
+
+In `ListView` mode, use the arrow keys to scroll through the available suggestions. List view also
+shows the source of the prediction.
+
+You can switch between `InlineView` and `ListView` by pressing the <kbd>F2</kbd> key.
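+
+If you want the list view without pressing <kbd>F2</kbd> each time, you can set it with `Set-PSReadLineOption`.
+This is a minimal sketch; it only affects the current session unless you add it to your profile as
+described later in this article:
+
+```powershell
+# Show prediction suggestions as a scrollable list instead of inline text.
+Set-PSReadLineOption -PredictionViewStyle ListView
+```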
+
+## How to change the prediction color
+
+The default color of the suggestions may be difficult for some people. **PSReadLine** allows you to
+configure the color of the suggestions.
+
+The following command changes the color of inline suggestions to white text on a gray background.
+
+```powershell
+Set-PSReadLineOption -Colors @{ InlinePrediction = $PSStyle.Foreground.White + $PSStyle.Background.BrightBlack }
+```
+
+Learn more about color settings for [Set-PSReadLineOption][03].
+
+## How to disable Predictive IntelliSense
+
+If you don't want to take advantage of these updated features, **PSReadLine** allows you to turn off
+Predictive IntelliSense.
+
+To disable Predictive IntelliSense, execute the following `Set-PSReadLineOption` command or add it to
+your PowerShell profile script.
+
+```powershell
+Set-PSReadLineOption -PredictionSource None
+```
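+
+To turn predictions back on later, set the prediction source again. The following is a minimal sketch;
+`HistoryAndPlugin` combines suggestions from your history with plugin predictors such as
+**Az.Tools.Predictor**:
+
+```powershell
+# Re-enable predictions from both command history and plugin predictors.
+Set-PSReadLineOption -PredictionSource HistoryAndPlugin
+
+# Verify the current prediction settings.
+Get-PSReadLineOption | Select-Object PredictionSource, PredictionViewStyle
+```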
+
+## Keep your changes permanent
+
+The commands to change the prediction color and enable or disable predictions only affect the
+current session. Add these commands to your PowerShell profile so that they're available every time
+you start Cloud Shell. The following instructions will guide you through configuring a profile for
+Cloud Shell. For more information on PowerShell profiles, see [About_Profiles][06]
+
+### How to check if you have a PowerShell profile in Cloud Shell
+
+A PowerShell profile is a script that runs when PowerShell starts. Use `Test-Path` to check if the
+profile exists in Cloud Shell.
+
+```powershell
+Test-Path -Path $Profile
+```
+
+### How to Create a PowerShell profile in Cloud Shell
+
+If the output is **False**, create a profile and add the customized color and behavior commands.
+
+To store configuration commands for Predictive IntelliSense, use the `New-Item` cmdlet to create a
+PowerShell profile.
+
+```powershell
+New-Item -Path $Profile -ItemType File -Force
+```
+
+```output
+
+ Directory: /home/jason/.config/PowerShell
+
+UnixMode User Group LastWriteTime Size Name
+-------- ---- ----- ------------- ---- ----
+-rw-r--r-- jason jason 11/19/2022 18:21 0 Microsoft.PowerShell_profile.ps1
+```
+
+Use the built-in open-source editor to edit the profile. To learn more, see [Azure Cloud Shell editor][04].
+
+The following example shows the profile commands that set the prediction color to a default light grey
+and enable History predictions.
+
+```powershell
+Set-PSReadLineOption -PredictionSource History
+Set-PSReadLineOption -Colors @{ InlinePrediction = '#8d8d8d' }
+```
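+
+If you prefer to append these lines without opening the editor, here's a minimal sketch using the
+`Add-Content` cmdlet:
+
+```powershell
+# Append the Predictive IntelliSense settings to the profile script.
+Add-Content -Path $Profile -Value 'Set-PSReadLineOption -PredictionSource History'
+Add-Content -Path $Profile -Value "Set-PSReadLineOption -Colors @{ InlinePrediction = '#8d8d8d' }"
+```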
+
+### How to Edit a PowerShell profile in Cloud Shell
+
+If the output is **True**, then a profile already exists. Edit the existing profile to add the
+commands to configure the color and behavior of Predictive IntelliSense. Use the built-in
+open-source editor to edit the profile. To learn more, see [Azure Cloud Shell editor][04].
+
+Use the built-in Cloud Shell editor to edit the profile:
+
+```powershell
+code $Profile
+```
+
+## Next steps
+
+For more information about configuring PSReadLine and managing predictors, see
+[Using predictors in PSReadLine][05].
+
+For more information on PowerShell profiles, see [About_Profiles][06].
++
+<!-- link references -->
+[01]: /powershell/module/psreadline/about/about_psreadline
+[02]: /powershell/azure/az-predictor
+[03]: /powershell/module/psreadline/set-psreadlineoption
+[04]: /azure/cloud-shell/using-cloud-shell-editor
+[05]: /powershell/scripting/learn/shell/using-predictors
+[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
+
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
## Release notes
+### Jan 2023
+* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/).
+ ### Dec 2022 * Multivariate Anomaly Detection SDK is updated to match with GA API for four languages.
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
-The **[ExportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
+The **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
> [!TIP] > For an example of this scenario using the Python client library, see the [Move Custom Vision Project](https://github.com/Azure-Samples/custom-vision-move-project/tree/master/) repository on GitHub.
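If you'd rather call the REST APIs from PowerShell than from raw cURL, here's a minimal sketch that consolidates the export and import calls covered in the steps below. It assumes the `Training-key` header and the `token` and `name` parameters behave as described in the API reference; the endpoint, key, and project values are placeholders:

```powershell
# Export the project from the source account; the response includes a reference token.
$export = Invoke-RestMethod -Method Get `
    -Uri "$srcEndpoint/customvision/v3.3/Training/projects/$projectId/export" `
    -Headers @{ 'Training-key' = $srcKey }

# Import the project into the target account using the reference token.
Invoke-RestMethod -Method Post `
    -Uri "$dstEndpoint/customvision/v3.3/Training/projects/import?token=$([uri]::EscapeDataString($export.token))&name=MyCopiedProject" `
    -Headers @{ 'Training-key' = $dstKey }
```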
The process for copying a project consists of the following steps:
## Get the project ID
-First call **[GetProjects](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddead)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
+First call **[GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddead)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
```curl curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects"
You'll get a `200\OK` response with a list of projects and their metadata in the
## Export the project
-Call **[ExportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** using the project ID and your source training key and endpoint.
+Call **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** using the project ID and your source training key and endpoint.
```curl curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects/{projectId}/export"
You'll get a `200/OK` response with metadata about the exported project and a re
## Import the project
-Call **[ImportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
+Call **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
```curl curl -v -G -X POST "{endpoint}/customvision/v3.3/Training/projects/import"
You'll get a `200/OK` response with metadata about your newly imported project.
## Next steps In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation](/rest/api/custom-vision/)
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/release-notes.md
Bug fixes, including for ONNX export with special characters.
- Export to Android (TensorFlow) added, in addition to previously released export to iOS (CoreML.) This allows export of a trained compact model to be run offline in an application. - Added Retail and Landmark "compact" domains to enable model export for these domains.-- Released version [1.2 Training API](https://southcentralus.dev.cognitive.microsoft.com/docs/services/f2d62aa3b93843d79e948fe87fa89554/operations/5a3044ee08fa5e06b890f11f) and [1.1 Prediction API](https://southcentralus.dev.cognitive.microsoft.com/docs/services/57982f59b5964e36841e22dfbfe78fc1/operations/5a3044f608fa5e06b890f164). Updated APIs support model export, new Prediction operation that does not save images to "Predictions," and introduced batch operations to the Training API.
+- Released version [1.2 Training API](https://westus2.dev.cognitive.microsoft.com/docs/services/f2d62aa3b93843d79e948fe87fa89554/operations/5a3044ee08fa5e06b890f11f) and [1.1 Prediction API](https://westus2.dev.cognitive.microsoft.com/docs/services/57982f59b5964e36841e22dfbfe78fc1/operations/5a3044f608fa5e06b890f164). Updated APIs support model export, new Prediction operation that does not save images to "Predictions," and introduced batch operations to the Training API.
- UX tweaks, including the ability to see which domain was used to train an iteration. - Updated [C# SDK and sample](https://github.com/Microsoft/Cognitive-CustomVision-Windows).
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
Now that you have the integration URLs, you can create a new Custom Vision proje
### Create new project
-When you call the [CreateProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
+When you call the [CreateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
```curl curl -v -X POST "{endpoint}/customvision/v3.3/Training/projects?exportModelContainerUri={inputUri}&notificationQueueUri={inputUri}&name={inputName}"
If you receive a `200/OK` response, that means the URLs have been set up success
### Update existing project
-To update an existing project with Azure storage feature integration, call the [UpdateProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
+To update an existing project with Azure storage feature integration, call the [UpdateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
```curl curl -v -X PATCH "{endpoint}/customvision/v3.3/Training/projects/{projectId}"
In your notification queue, you should see a test notification in the following
## Get event notifications
-When you're ready, call the [TrainProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API on your project to do an ordinary training operation.
+When you're ready, call the [TrainProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API on your project to do an ordinary training operation.
In your Storage notification queue, you'll receive a notification once training finishes:
The `"trainingStatus"` field may be either `"TrainingCompleted"` or `"TrainingFa
## Get model export backups
-When you're ready, call the [ExportIteration](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddece) API to export a trained model into a specified platform.
+When you're ready, call the [ExportIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddece) API to export a trained model into a specified platform.
In your designated storage container, a backup copy of the exported model will appear. The blob name will have the format:
The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`
## Next steps In this guide, you learned how to copy and back up a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation (training)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
-* [REST API reference documentation (prediction)](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
+* [REST API reference documentation (training)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation (prediction)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services Update Application To 3.0 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/update-application-to-3.0-sdk.md
See the [Release notes](release-notes.md) for a full list of the changes. This g
The 2.x APIs used the same prediction call for both image classifiers and object detector projects. Both project types were acceptable to the **PredictImage** and **PredictImageUrl** calls. Starting with 3.0, we have split this API so that you need to match the project type to the call:
-* Use **[ClassifyImage](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)** and **[ClassifyImageUrl](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14)** to get predictions for image classification projects.
-* Use **[DetectImage](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c19)** and **[DetectImageUrl](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c18)** to get predictions for object detection projects.
+* Use **[ClassifyImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)** and **[ClassifyImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14)** to get predictions for image classification projects.
+* Use **[DetectImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c19)** and **[DetectImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c18)** to get predictions for object detection projects.
## Use the new iteration publishing workflow The 2.x APIs used the default iteration or a specified iteration ID to choose the iteration to use for prediction. Starting in 3.0, we have adopted a publishing flow whereby you first publish an iteration under a specified name from the training API. You then pass the name to the prediction methods to specify which iteration to use. > [!IMPORTANT]
-> The 3.0 APIs do not use the default iteration feature. Until we deprecate the older APIs, you can continue to use the 2.x APIs to toggle an iteration as the default. These APIs will be maintained for a period of time, and you can call the **[UpdateIteration](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b818)** method to mark an iteration as default.
+> The 3.0 APIs do not use the default iteration feature. Until we deprecate the older APIs, you can continue to use the 2.x APIs to toggle an iteration as the default. These APIs will be maintained for a period of time, and you can call the **[UpdateIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b818)** method to mark an iteration as default.
### Publish an iteration
-Once an iteration is trained, you can make it available for prediction using the **[PublishIteration](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c82db28bf6a2b11a8247bbc)** method. To publish an iteration, you'll need the prediction resource ID, which is available on the CustomVision website's settings page.
+Once an iteration is trained, you can make it available for prediction using the **[PublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c82db28bf6a2b11a8247bbc)** method. To publish an iteration, you'll need the prediction resource ID, which is available on the CustomVision website's settings page.
![The Custom Vision website settings page with the prediction resource ID outlined.](./media/update-application-to-3.0-sdk/prediction-id.png) > [!TIP] > You can also get this information from the [Azure Portal](https://portal.azure.com) by going to the Custom Vision Prediction resource and selecting **Properties**.
-Once your iteration is published, apps can use it for prediction by specifying the name in their prediction API call. To make an iteration unavailable for prediction calls, use the **[UnpublishIteration](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b81a)** API.
+Once your iteration is published, apps can use it for prediction by specifying the name in their prediction API call. To make an iteration unavailable for prediction calls, use the **[UnpublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b81a)** API.
## Next steps
cognitive-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md
After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. > [!NOTE]
-> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
+> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
## Setup
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
spx help csr model
::: zone pivot="rest-api"
-To get the training and transcription expiration dates for a base model, use the [Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales.
+To get the training and transcription expiration dates for a base model, use the [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales.
Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
spx --% recognize --file "your\path\to\audio.wav" --phrases @phrases.txt
::: zone-end
+Allowed characters include locale-specific letters and digits, white space characters, and special characters such as +, \-, $, :, (, ), {, }, \_, ., ?, @, \\, ', &, \#, %, \^, \*, \`, \<, \>, ;, \/. Other special characters are removed internally from the phrase.
+ ## Next steps Check out more options to improve recognition accuracy.
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
If you use webhook to receive notifications about transcription status, please n
The following operations are added for uploading and managing multiple data blocks for a dataset: - [Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock) - Upload a block of data for the dataset. The maximum size of the block is 8MiB.
+ - [Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks) - Get the list of uploaded blocks for this dataset.
- [Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks) - Commit blocklist to complete the upload of the dataset. To support model adaptation with [structured text in markdown](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) data, the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation now supports the **LanguageMarkdown** data kind. For more information, see [upload datasets](how-to-custom-speech-upload-data.md#upload-datasets). ### Models
-The [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) and [Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModel) operations return information on the type of adaptation supported by each base model.
+The [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) and [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operations return information on the type of adaptation supported by each base model.
```json "features": {
The name of each `operationId` in version 3.1 is prefixed with the object name.
|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)| |`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)| |`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|GET|[Datasets_GetDatasetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetDatasetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable| |`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)| |`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
The name of each `operationId` in version 3.1 is prefixed with the object name.
|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)| |`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)| |`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
-|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/healthstatus`|GET|[ServiceHealth_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/ServiceHealth_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)| |`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)| |`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
The name of each `operationId` in version 3.1 is prefixed with the object name.
|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable| |`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)| |`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
-|`/models/base/{id}`|GET|[Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)| |`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)| |`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Title: Release notes - Speech Service
+ Title: What's new - Speech Service
-description: A running log of Speech Service feature releases, improvements, bug fixes, and known issues.
+description: Find out about new releases and features for the Azure Cognitive Service for Speech.
- Previously updated : 11/14/2022+ Last updated : 01/09/2023
-# Speech service release notes
+# What's new in Azure Cognitive Service for Speech?
-See below for information about new features and other changes to the Speech service.
+Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
-## What's new?
+## Recent highlights
+* Text-to-speech Batch synthesis API is available in public preview.
* Speech-to-text REST API version 3.1 is generally available. * Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022. * Speech-to-text and text-to-speech container versions were updated in October 2022.
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?p
|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)| |`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)| |`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|GET|[Datasets_GetDatasetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetDatasetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable| |`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)| |`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
Health status provides insights about the overall health of the service and sub-
|Path|Method|Version 3.1|Version 3.0| |||||
-|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/healthstatus`|GET|[ServiceHealth_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/ServiceHealth_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
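As a rough illustration of the health status operation listed above, a request might look like the following sketch. The region, base path, and key below are placeholders/assumptions, not values taken from this table.

```bash
# Hedged sketch: query Speech-to-text REST API v3.1 service health.
# Replace the region and YOUR_SPEECH_KEY with your own values.
curl -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/healthstatus" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY"
```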
## Models
See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Cu
|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable| |`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)| |`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
-|`/models/base/{id}`|GET|[Models_ListBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)| |`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
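For the renamed `Models_GetBaseModel` operation shown above, a request might look like the following sketch; the region, key, and model ID are placeholders rather than values from this article.

```bash
# Hedged sketch: retrieve one base model with the v3.1 Models_GetBaseModel operation.
# BASE_MODEL_ID is a placeholder GUID returned by the /models/base list operation.
curl -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/BASE_MODEL_ID" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY"
```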
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK supports the following languages and platforms:
| Programming language | Reference | Platform support | |-|-|-|
-| [C#](quickstarts/setup-platform.md?pivots=programming-language-csharp) <sup>1</sup> | [.NET](/dotnet/api/overview/azure/cognitiveservices/client/speechservice) | Windows, Linux, macOS, Mono, Xamarin.iOS, Xamarin.Mac, Xamarin.Android, UWP, Unity |
+| [C#](quickstarts/setup-platform.md?pivots=programming-language-csharp) <sup>1</sup> | [.NET](/dotnet/api/microsoft.cognitiveservices.speech) | Windows, Linux, macOS, Mono, Xamarin.iOS, Xamarin.Mac, Xamarin.Android, UWP, Unity |
| [C++](quickstarts/setup-platform.md?pivots=programming-language-cpp) <sup>2</sup> | [C++](/cpp/cognitive-services/speech/) | Windows, Linux, macOS | | [Go](quickstarts/setup-platform.md?pivots=programming-language-go) | [Go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) | Linux | | [Java](quickstarts/setup-platform.md?pivots=programming-language-java) | [Java](/java/api/com.microsoft.cognitiveservices.speech) | Android, Windows, Linux, macOS |
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Custom Neural Voice training and hosting are both calculated by hour and billed
Custom Neural Voice (CNV) training time is measured by 'compute hour' (a unit to measure machine running time). Typically, when training a voice model, two computing tasks are running in parallel. So, the calculated compute hours will be longer than the actual training time. On average, it takes less than one compute hour to train a CNV Lite voice; while for CNV Pro, it usually takes 20 to 40 compute hours to train a single-style voice, and around 90 compute hours to train a multi-style voice. The CNV training time is billed with a cap of 96 compute hours. So in the case that a voice model is trained in 98 compute hours, you will only be charged for 96 compute hours.
-Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. If the endpoint is newly created or has been suspended during the day, it will be billed for its acumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:00 UTC on December 3, the duration (16 hours) from 00:00 to 16:00 UTC on December 3 will be calculated for billing.
+Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. If the endpoint is newly created or has been suspended during the day, it will be billed for its accumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:30 UTC on December 3, the duration (16.5 hours) from 00:00 to 16:30 UTC on December 3 will be calculated for billing.
## Reference docs
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
To force the request to be handled within a specific geography, use the desired
|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe| |United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
-<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-ch-n', then your custom endpoint is "https://my-ch-n.cognitiveservices.azure.com". And a sample request to translate is:
+<sup>1</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". And a sample request to translate is:
```curl // Pass secret key and region using headers to a custom endpoint
-curl -X POST " my-ch-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
+curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
-H "Ocp-Apim-Subscription-Key: xxx" \ -H "Ocp-Apim-Subscription-Region: switzerlandnorth" \ -H "Content-Type: application/json" \
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
The following limits are observed for conversational language understanding.
|Count of entities per project | 1 | 500| |Count of list synonyms per entity| 0 | 20,000 | |Count of prebuilt components per entity| 0 | 7 |
+|Count of regular expressions per project| 0 | 20 |
|Count of trained models per project| 0 | 10 | |Count of deployments per project| 0 | 10 |
cognitive-services Assertion Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md
Previously updated : 11/02/2021 Last updated : 01/04/2023 # Assertion detection
-The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions which can have critical implications if misrepresented. Text Analytics for health supports three categories of assertion detection for entities in the text:
+The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions, which can have critical implications if misrepresented. Text Analytics for health supports three categories of assertion detection for entities in the text:
* Certainty * Conditional
The meaning of medical content is highly affected by modifiers, such as negative
## Assertion output
-Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide deeper understanding of the conceptsΓÇÖ context within the text. These modifiers are divided into three categories, each focusing on a different aspect, and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The serviceΓÇÖs output response contains only assertion modifiers that are different from the default value.
+Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide a deeper understanding of the concepts' context within the text. These modifiers are divided into three categories, each focusing on a different aspect and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The service's output response contains only assertion modifiers that are different from the default value. In other words, if no assertion is returned, the implied assertion is the default value.
**CERTAINTY** ΓÇô provides information regarding the presence (present vs. absent) of the concept and how certain the text is regarding its presence (definite vs. possible).
-* **Positive** [Default]: the concept exists or happened.
+* **Positive** [Default]: the concept exists or has happened.
* **Negative**: the concept does not exist now or never happened. * **Positive_Possible**: the concept likely exists but there is some uncertainty. * **Negative_Possible**: the conceptΓÇÖs existence is unlikely but there is some uncertainty.
cognitive-services Health Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/health-entity-categories.md
Previously updated : 11/02/2021 Last updated : 01/04/2023
Text Analytics for health processes and extracts insights from unstructured medical data. The service detects and surfaces medical concepts, assigns assertions to concepts, infers semantic relations between concepts and links them to common medical ontologies.
-Text Analytics for health detects medical concepts in the following categories.
+Text Analytics for health detects medical concepts that fall under the following categories.
## Anatomy
Text Analytics for health detects medical concepts in the following categories.
### Entities
-**COURSE** - Description of a change in another entity over time, such as condition progression (e.g., improvement, worsening, resolution, remission), a course of treatment or medication (e.g., increase in medication dosage).
+**COURSE** - Description of a change in another entity over time, such as condition progression (for example: improvement, worsening, resolution, remission), a course of treatment or medication (for example: increase in medication dosage).
:::image type="content" source="../media/entities/course-entity.png" alt-text="An example of a course entity." lightbox="../media/entities/course-entity.png":::
Text Analytics for health detects medical concepts in the following categories.
:::image type="content" source="../media/entities/treatment-entities-name.png" alt-text="An example of a treatment name entity." lightbox="../media/entities/treatment-entities-name.png":::
-## Supported Assertions
-
-Assertion modifiers are divided into three categories, each one focuses on a different aspect.
-Each category contains a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The serviceΓÇÖs output response contains only assertion modifiers that are different from the default value.
-
-### Certainty
-
-provides information regarding the presence (present vs. absent) of the concept and how certain the text is regarding its presence (definite vs. possible).
-
-**Positive** (Default): the concept exists or happened.
-
-**Negative**: the concept does not exist now or never happened.
--
-**Positive_Possible**: the concept likely exists but there is some uncertainty.
--
-**Negative_Possible**: the conceptΓÇÖs existence is unlikely but there is some uncertainty.
--
-**Neutral_Possible**: the concept may or may not exist without a tendency to either side.
--
-### Conditionality
-
-provides information regarding whether the existence of a concept depends on certain conditions.
-
-**None** (Default): the concept is a fact and not hypothetical and does not depend on certain conditions.
-
-**Hypothetical**: the concept may develop or occur in the future.
--
-**Conditional**: the concept exists or occurs only under certain conditions.
--
-### Association
-
-describes whether the concept is associated with the subject of the text or someone else.
-
-**Subject** (Default): the concept is associated with the subject of the text, usually the patient.
-
-**Someone_Else**: the concept is associated with someone who is not the subject of the text.
--- ## Next steps
-* [NER overview](../../named-entity-recognition/overview.md)
+* [How to call Text Analytics for health](../how-to/call-api.md)
cognitive-services Relation Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/relation-extraction.md
Previously updated : 11/02/2021 Last updated : 01/04/2023 # Relation extraction
-Relation extraction identifies meaningful connections between concepts mentioned in text. For example, a "time of condition" relation is found by associating a condition name with a time or between an abbreviation and the full description.
--
-## Relation extraction output
-
-Text Analytics for health recognizes relations between different concepts, including relations between attribute and entity (for example, direction of body structure, dosage of medication) and between entities (for example, abbreviation detection).
+Text Analytics for health features relation extraction, which is used to identify meaningful connections between concepts, or entities, mentioned in the text. For example, a "time of condition" relation is found by associating a condition name with a time. Another example is a "dosage of medication" relation, which is found by relating an extracted medication to its extracted dosage. The following example shows how relations are expressed in the JSON output.
> [!NOTE] > * Relations referring to CONDITION may refer to either the DIAGNOSIS entity type or the SYMPTOM_OR_SIGN entity type.
Relation extraction output contains URI references and assigned roles of the ent
## Recognized relations
-The following relations can be returned by the API.
+The following list presents all the relations recognized by the Text Analytics for health API.
**ABBREVIATION**
The following relations can be returned by the API.
**VALUE_OF_EXAMINATION** **VARIANT_OF_GENE**+
+## Next steps
+
+* [How to call Text Analytics for health](../how-to/call-api.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
Previously updated : 09/05/2022 Last updated : 01/04/2023
[!INCLUDE [service notice](../includes/service-notice.md)]
-Text Analytics for health can be used to extract and label relevant medical information from unstructured texts, such as: doctor's notes, discharge summaries, clinical documents, and electronic health records. There are two ways to utilize this service:
+Text Analytics for health can be used to extract and label relevant medical information from unstructured texts such as doctors' notes, discharge summaries, clinical documents, and electronic health records. The service performs [named entity recognition](../concepts/health-entity-categories.md), [relation extraction](../concepts/relation-extraction.md), [entity linking](https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html), and [assertion detection](../concepts/assertion-detection.md) to uncover insights from the input text. For information on the returned confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/cognitive-services/text-analytics/context/context).
+
+There are two ways to call the service:
-* The web-based API and client libraries (asynchronous)
* A [Docker container](use-containers.md) (synchronous)
+* Using the web-based API and client libraries (asynchronous)
-## Features
-Text Analytics for health performs Named Entity Recognition (NER), relation extraction, entity negation and entity linking on English-language text to uncover insights in unstructured clinical and biomedical text.
-See the [entity categories](../concepts/health-entity-categories.md) returned by Text Analytics for health for a full list of supported entities. For information on confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/cognitive-services/text-analytics/context/context).
> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+> If you want to test out the feature without writing any code, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md).
-## Determine how to process the data (optional)
-### Specify the Text Analytics for health model
+## Specify the Text Analytics for health model
By default, Text Analytics for health will use the latest available AI model on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by Text Analytics for health.
The Text Analytics for health supports English in addition to multiple languages
## Submitting data
-To send an API request, You will need your Language resource endpoint and key.
+To send an API request, you will need your Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
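As a minimal sketch of such a request, the following curl call submits an asynchronous Text Analytics for health job. The endpoint name, key, API version, and sample text below are placeholders and assumptions, not values from this article.

```bash
# Hedged sketch: submit an asynchronous Text Analytics for health job to a Language resource.
# YOUR_LANGUAGE_ENDPOINT and YOUR_LANGUAGE_KEY are placeholders; the api-version is an assumption.
curl -X POST "https://YOUR_LANGUAGE_ENDPOINT.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01" \
  -H "Ocp-Apim-Subscription-Key: YOUR_LANGUAGE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "analysisInput": {
          "documents": [
            { "id": "1", "language": "en", "text": "Patient does not suffer from high blood pressure." }
          ]
        },
        "tasks": [
          { "kind": "Healthcare", "taskName": "healthcare-task-1" }
        ]
      }'
# A successful submission returns 202 with an operation-location header to poll for the results.
```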
Analysis is performed upon receipt of the request. If you send a request using t
## Submitting a Fast Healthcare Interoperability Resources (FHIR) request
-To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body. You can also send the **document type** as a parameter to the FHIR API request body. If the request does not specify a document type, the value is set to none.
+To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body.
| Parameter Name | Type | Value | |--|--|--| | fhirVersion | string | `4.0.1` |
-| documentType | string | `ClinicalTrial`, `Consult`, `DischargeSummary`, `HistoryAndPhysical`, `Imaging`, `None`, `Pathology`, `ProcedureNote`, `ProgressNote`|
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
Previously updated : 9/5/2022 Last updated : 01/04/2023 # Language support for Text Analytics for health
-Use this article to learn which natural languages are supported by Text Analytics for health and its Docker container.
+Use this article to learn which natural languages are supported by Text Analytics for health and its Docker container.
## Hosted API Service
The Docker container supports the English language, model version 2022-03-01.
Additional languages are also supported when using a Docker container to deploy the API: Spanish, French, German, Italian, Portuguese and Hebrew. This functionality is currently in preview, model version 2022-08-15-preview. Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
-In order to download the new container images from the Microsoft public container registry, use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command, as follows :
+In order to download the new container images from the Microsoft public container registry, use the following [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command.
For English, Spanish, Italian, French, German and Portuguese:
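As a hedged sketch of that pull, the command below uses the `mcr.microsoft.com` repository path commonly used for the Text Analytics for health container and the `latin` featured tag from the table that follows; both are assumptions inferred from the table, not copied from this article.

```bash
# Hedged sketch: pull the multi-language (Latin-script) Text Analytics for health container image.
# The repository path and tag are assumptions based on the featured tags listed below.
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
```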
json
| Language Code | Model Version: | Featured Tag | Specific Tag | |:--|:-:|:-:|::|
-| en | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
-| en,es,it,fr,de,pt | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
-| he | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
+| `en` | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
+| `en`, `es`, `it`, `fr`, `de`, `pt` | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
+| `he` | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Previously updated : 06/15/2022 Last updated : 01/06/2023
-# What is Text Analytics for health in Azure Cognitive Service for Language?
+# What is Text Analytics for health?
[!INCLUDE [service notice](includes/service-notice.md)]
-Text Analytics for health is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language.
+Text Analytics for health is one of the prebuilt features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to extract and label relevant medical information from a variety of unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
This documentation contains the following types of articles:-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
-* The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth explanations of the service's functionality and features.
+* The [**quickstart article**](quickstart.md) provides a short tutorial that guides you through making your first request to the service.
+* The [**how-to guides**](how-to/call-api.md) contain detailed instructions on how to make calls to the service using the hosted API or the on-premises Docker container.
+* The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth information on each of the service's features: named entity recognition, relation extraction, entity linking, and assertion detection.
## Text Analytics for health features
-Text Analytics for health extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+Text Analytics for health performs four key functions, all with a single API call: named entity recognition, relation extraction, entity linking, and assertion detection.
[!INCLUDE [Text Analytics for health](includes/features.md)]
+Text Analytics for health can receive unstructured text in English as part of its generally available offering. Additional languages such as German, French, Italian, Spanish, Portuguese, and Hebrew are currently supported in preview.
+
+Additionally, Text Analytics for health can return the processed output using the Fast Healthcare Interoperability Resources (FHIR) structure, which enables the service's integration with other electronic health systems.
+++ > [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player] ++
+## Usage scenarios
+
+Text Analytics for health can be used in multiple scenarios across a variety of industries.
+Some common customer motivations for using Text Analytics for health include:
+* Assisting and automating the processing of medical documents by proper medical coding to ensure accurate care and billing.
+* Increasing the efficiency of analyzing healthcare data to help drive the success of value-based care models similar to Medicare.
+* Minimizing healthcare provider effort by automating the aggregation of key patient data for trend and pattern monitoring.
+* Facilitating and supporting the adoption of HL7 standards for improved exchange, integration, sharing, retrieval, and delivery of electronic health information in all healthcare services.
+
+### Example use cases
+
+|Use case|Description|
+|--|--|
+|Extract insights and statistics|Identify medical entities such as symptoms, medications, diagnosis from clinical and research documents in order to extract insights and statistics for different patient cohorts.|
+|Develop predictive models using historic data|Power solutions for planning, decision support, risk analysis and more, based on prediction models created from historic data.|
+|Annotate and curate medical information|Support solutions for clinical data annotation and curation such as automating clinical coding and digitizing manually created data.|
+|Review and report medical information|Support solutions for reporting and flagging possible errors in medical information resulting from review processes such as quality assurance.|
+|Assist with decision support|Enable solutions that provide humans with assistive information relating to patients' medical information for faster and more reliable decisions.|
+++ ## Get started with Text Analytics for health
-To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are three ways to use Text Analytics for health:
+To use this feature, all you need to do is submit raw unstructured text for analysis. Analysis is performed as-is, with no additional customization to the model used on your data. There are three ways to get started with Text Analytics for health:
|Development option |Description | Links | ||||
-| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing writing code. | ΓÇó [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> ΓÇó [Quickstart: Use Language Studio](../language-studio.md) |
-| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API, or the client library available in a variety of languages. | ΓÇó [Quickstart: Use Text Analytics for health](quickstart.md) |
+| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing to write any code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use Language Studio](../language-studio.md) |
+| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API or the client library, available in a variety of development languages. | • [Quickstart: Use Text Analytics for health](quickstart.md) |
| Docker container | Use the available Docker container to deploy this feature on-premises, letting you bring the service closer to your data for compliance, security, or other operational reasons. | • [How to deploy on-premises](how-to/use-containers.md) | ## Input requirements and service limits
-* Text Analytics for health takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) for more information.
-* Text Analytics for health works with a variety of written languages. See [language support](language-support.md) for more information.
+Text Analytics for health is designed to receive unstructured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
+
+Text Analytics for health works with a variety of input languages. For more information, see [language support](language-support.md).
[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
-## Responsible AI
+## Responsible use of AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes the technology, the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also refer to the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Using Text Analytics for health client library and REST API
-> [!IMPORTANT]
-> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported. [Learn more](./how-to/call-api.md) on how to use FHIR structuring in your API call.
+This article contains Text Analytics for health quickstarts for the supported client libraries (C#, Java, NodeJS, and Python) as well as for the REST API.
::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
zone_pivot_groups: programming-languages-text-analytics
## Next steps
-* [Text Analytics for health overview](overview.md)
+* [How to call the hosted API](./how-to/call-api.md)
+* [How to use the service with Docker containers](./how-to/use-containers.md)
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 |
-| Requests per second per deployment | 15 |
+| Requests per second per deployment | 20 requests per second for: text-davinci-002, text-davinci-fine-tune-002, code-cushman-002, code-davinci-002, code-davinci-fine-tune-002 <br><br> 50 requests per second for all other text models. |
| Max fine-tuned model deployments | 2 | | Ability to deploy same model to multiple deployments | Not allowed | | Total number of training jobs per resource | 100 |
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
Last updated 12/14/2022-+ recommendations: false
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
keywords:
# What's new in Azure OpenAI
-## December 2022 - Azure OpenAI General Availability (GA)
+## December 2022
### New features
keywords:
} ```
-**GA API 2022-12-01:**
+**API version 2022-12-01:**
```json {
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
They're all viable and flexible options that can adapt to your specific storage
## Log Analytics Workspace for additional analytics features
-By choosing to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination, you enable more features within Azure Monitor generally and for your Communications Services. Log Analytics is a tool within Azure portal used to create, edit, and run [queries](../../../azure-monitor/logs/queries.md) with data in your Azure Monitor logs and metrics and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md), [alerts](../../../azure-monitor/alerts/alerts-log.md), [notification actions](../../../azure-monitor/alerts/action-groups.md), [REST API access](https://dev.loganalytics.io/), and many others.
+By choosing to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination, you enable more features within Azure Monitor generally and for your Communications Services. Log Analytics is a tool within the Azure portal used to create, edit, and run [queries](../../../azure-monitor/logs/queries.md) against the data in your Azure Monitor logs and metrics. It also unlocks [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md), [alerts](../../../azure-monitor/alerts/alerts-log.md), [notification actions](../../../azure-monitor/alerts/action-groups.md), [REST API access](/rest/api/loganalytics/), and many other capabilities.
-For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md). We've also created many insights and visualizations using workbooks, which are described in: [Workbooks for Communications Services logs](insights.md).
+For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md). We've also created many insights and visualizations using workbooks, which are described in: [Workbooks for Communications Services logs](insights.md).
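For illustration only, sending logs to a workspace is done through a diagnostic setting on the Communication Services resource; a hedged CLI sketch follows. All names and IDs are placeholders, and the log categories you enable depend on which Communication Services logs you need.

```bash
# Hedged sketch: route Communication Services logs to a Log Analytics workspace via a diagnostic setting.
# All names and IDs below are placeholders; adjust the log categories to the ones you actually need.
az monitor diagnostic-settings create \
  --name "acs-logs-to-workspace" \
  --resource "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.Communication/communicationServices/ACS_RESOURCE" \
  --workspace "/subscriptions/SUB_ID/resourceGroups/RG_NAME/providers/Microsoft.OperationalInsights/workspaces/WORKSPACE_NAME" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```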
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | File sharing | ❌ | | | Reply to specific chat message | ❌ | | | React to chat message | ❌ |
+| | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️* |
| Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | | | Switch between cameras | ✔️ |
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/certified-session-border-controllers.md
If you have any questions about the SBC certification program for Communication
|Vendor|Product|Software version| |: |: |:
-|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant SBC VE|7.40A
+|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant Virtual Edition SBC|7.40A|
+||Mediant 500 SBC|7.40A|
+||Mediant 800 SBC|7.40A|
+||Mediant 2600 SBC|7.40A|
+||Mediant 4000 SBC|7.40A|
+||Mediant 1000B SBC|7.40A|
+||Mediant 9000 SBC|7.40A|
|[Metaswitch](https://manuals.metaswitch.com/Perimeta/V4.9/AzureCommunicationServicesIntegrationGuide/Source/notices.html)|Perimeta SBC|4.9| |[Oracle](https://www.oracle.com/technical-resources/documentation/acme-packet.html)|Oracle Acme Packet SBC|8.4| |Ribbon Communications|[SBC SWe / SBC 5400 / SBC 7000](https://support.sonus.net/display/ALLDOC/Ribbon+Configurations+with+Azure+Communication+Services+Direct+Routing)|9.02|
Note the certification granted to a major version. That means that firmware with
### Quickstarts - [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Call Automation enables you to build custom calling workflows within your applic
Learn more about [Call Automation](../voice-video-calling/call-automation.md), currently available in public preview.
-**Inbound calling with Azure Bot Framework**
-
-Customers participating in Azure Bot Framework Telephony Channel preview can find the [instructions here](/azure/bot-service/bot-service-channel-connect-telephony)
- **Inbound calling with Power Virtual Agents** *Coming soon*
communication-services Call Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md
zone_pivot_groups: acs-plat-ios-android
# Display call transcription state on the client
+> [!NOTE]
+> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for ACS to ACS calls.
When using call transcription, you may want to let your users know that a call is being transcribed. Here's how.
When using call transcription you may want to let your users know that a call is
## Next steps - [Learn how to manage video](./manage-video.md) - [Learn how to manage calls](./manage-calls.md)-- [Learn how to record calls](./record-calls.md)
+- [Learn how to record calls](./record-calls.md)
communication-services Callkit Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/callkit-integration.md
+ Last updated : 01/06/2023+++
+ Title: CallKit integration in ACS Calling SDK
++
+description: Steps on how to integrate CallKit with ACS Calling SDK
++
+ # Integrate with CallKit
+
+ In this document, we'll go through how to integrate CallKit with your iOS application.
+
+ > [!NOTE]
+ > This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment. To use this API, use the 'beta' release of the Azure Communication Services Calling iOS SDK.
+
+ ## Prerequisites
+
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+ - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).
+ - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+ ## CallKit Integration (within SDK)
+
+ CallKit Integration in the ACS iOS SDK handles the interaction with CallKit for us. To perform call operations like mute/unmute and hold/resume, we only need to call the API on the ACS SDK.
+
+ ### Initialize call agent with CallKitOptions
+
+ With a configured instance of `CallKitOptions`, we can create the `CallAgent` with CallKit handling enabled.
+
+ ```Swift
+ let options = CallAgentOptions()
+ let callKitOptions = CallKitOptions(with: createProviderConfig())
+ options.callKitOptions = callKitOptions
+
+ // Configure the properties of `CallKitOptions` instance here
+
+ self.callClient!.createCallAgent(userCredential: userCredential,
+ options: options,
+ completionHandler: { (callAgent, error) in
+ // Initialization
+ })
+ ```
+
+ ### Specify call recipient info for outgoing calls
+
+ First we need to create an instance of `StartCallOptions()` for outgoing calls, or `JoinCallOptions()` for a group call:
+ ```Swift
+ let options = StartCallOptions()
+ ```
+ or
+ ```Swift
+ let options = JoinCallOptions()
+ ```
+ Then create an instance of `CallKitRemoteInfo`
+ ```Swift
+ options.callKitRemoteInfo = CallKitRemoteInfo()
+ ```
+
+ 1. Assign a value to `callKitRemoteInfo.displayNameForCallKit` to customize the display name for call recipients and configure the `CXHandle` value. The value specified in `displayNameForCallKit` is exactly how it will show up in the last dialed call log.
+
+ ```Swift
+ options.callKitRemoteInfo.displayNameForCallKit = "DISPLAY_NAME"
+ ```
+ 2. Assign the `cxHandle` value, which is what the application will receive when the user calls back on that contact
+ ```Swift
+ options.callKitRemoteInfo.cxHandle = CXHandle(type: .generic, value: "VALUE_TO_CXHANDLE")
+ ```
+
+ ### Specify call recipient info for incoming calls
+
+ First we need to create an instance of `CallKitOptions`:
+
+ ```Swift
+ let callKitOptions = CallKitOptions(with: createProviderConfig())
+ ```
+
+ Configure the properties of `CallKitOptions` instance:
+
+ The block passed to the `provideRemoteInfo` variable will be called by the SDK when we receive an incoming call and need to get a display name for the incoming caller to pass to CallKit.
+
+ ```Swift
+ callKitOptions.provideRemoteInfo = self.provideCallKitRemoteInfo
+
+ func provideCallKitRemoteInfo(callerInfo: CallerInfo) -> CallKitRemoteInfo
+ {
+ let callKitRemoteInfo = CallKitRemoteInfo()
+ callKitRemoteInfo.displayName = "CALL_TO_PHONENUMBER_BY_APP"
+ callKitRemoteInfo.cxHandle = CXHandle(type: .generic, value: "VALUE_TO_CXHANDLE")
+ return callKitRemoteInfo
+ }
+ ```
+
+ ### Configure audio session
+
+ The `configureAudioSession` block will be called before placing or accepting an incoming call, and before resuming the call after it has been put on hold.
+
+ ```Swift
+ callKitOptions.configureAudioSession = self.configureAudioSession
+
+ public func configureAudioSession() -> Error? {
+ let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
+ var configError: Error?
+ do {
+ try audioSession.setCategory(.playAndRecord)
+ } catch {
+ configError = error
+ }
+ return configError
+ }
+ ```
+
+ NOTE: In cases where Contoso has already configured the audio session, DO NOT set `configureAudioSession` to `nil`; instead, provide a block that returns a `nil` error
+
+ ```Swift
+ callKitOptions.configureAudioSession = self.configureAudioSession
+
+ public func configureAudioSession() -> Error? {
+ return nil
+ }
+ ```
+ If `nil` is provided for `configureAudioSession`, the SDK will call its default implementation.
+
+ ### Handle incoming push notification payload
+
+ When the app receives an incoming push notification payload, we need to call `handlePush` to process it. The ACS Calling SDK will then raise the `IncomingCall` event.
+
+ ```Swift
+ public func handlePushNotification(_ pushPayload: PKPushPayload)
+ {
+ let callNotification = PushNotificationInfo.fromDictionary(pushPayload.dictionaryPayload)
+ if let agent = self.callAgent {
+ agent.handlePush(notification: callNotification) { (error) in }
+ }
+ }
+
+ // Event raised by the SDK
+ public func callAgent(_ callAgent: CallAgent, didRecieveIncomingCall incomingcall: IncomingCall) {
+ }
+ ```
+
+ We can use `reportIncomingCallFromKillState` to handle push notifications when the app is closed.
+ The `reportIncomingCallFromKillState` API shouldn't be called if a `CallAgent` instance is already available when the push is received.
+
+ ```Swift
+ if let agent = self.callAgent {
+ /* App is not in a killed state */
+ agent.handlePush(notification: callNotification) { (error) in }
+ } else {
+ /* App is in a killed state */
+ CallClient.reportIncomingCallFromKillState(with: callNotification, callKitOptions: callKitOptions) { (error) in
+ if (error == nil) {
+ DispatchQueue.global().async {
+ self.callClient = CallClient()
+ let options = CallAgentOptions()
+ let callKitOptions = CallKitOptions(with: createProviderConfig())
+ callKitOptions.provideRemoteInfo = self.provideCallKitRemoteInfo
+ callKitOptions.configureAudioSession = self.configureAudioSession
+ options.callKitOptions = callKitOptions
+ self.callClient!.createCallAgent(userCredential: userCredential,
+ options: options,
+ completionHandler: { (callAgent, error) in
+ if (error == nil) {
+ self.callAgent = callAgent
+ self.callAgent!.handlePush(notification: callNotification) { (error) in }
+ }
+ })
+ }
+ } else {
+ os_log("SDK couldn't handle push notification KILL mode reportToCallKit FAILED", log:self.log)
+ }
+ }
+ }
+ ```
+
+ ## CallKit Integration (within App)
+
+ If you wish to integrate the CallKit within the app and not use the CallKit implementation in the SDK, please take a look at the quickstart sample [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/Add%20Video%20Calling).
+ One important thing to take care of is starting the audio at the right time, as shown in the following example.
+
+ ```Swift
+let mutedAudioOptions = AudioOptions()
+mutedAudioOptions.speakerMuted = true
+mutedAudioOptions.muted = true
+
+let copyStartCallOptions = StartCallOptions()
+copyStartCallOptions.audioOptions = mutedAudioOptions
+
+callAgent.startCall(participants: participants,
+ options: copyStartCallOptions,
+ completionHandler: completionBlock)
+```
+
+Muting the speaker and microphone ensures that physical audio devices aren't used until CallKit calls `didActivateAudioSession` on `CXProviderDelegate`. Otherwise, the call may get dropped or no audio will flow.
+
+```Swift
+func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
+ activeCall.unmute { error in
+ if error == nil {
+ print("Successfully unmuted mic")
+ activeCall.speaker(mute: false) { error in
+ if error == nil {
+ print("Successfully unmuted speaker")
+ }
+ }
+ }
+ }
+}
+```
+
+> [!NOTE]
+> In some cases, CallKit doesn't call `didActivateAudioSession` even though the app has elevated audio permissions. In that case, the audio stays muted until the callback is received, and the UI has to reflect the state of the speaker and microphone. The remote participants in the call will also see that the user has muted audio. The user will have to manually unmute in those cases.
+
+ ## Next steps
+ - [Learn how to manage video](./manage-video.md)
+ - [Learn how to manage calls](./manage-calls.md)
+ - [Learn how to record calls](./record-calls.md)
communication-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/troubleshooting.md
+
+ Title: Troubleshooting over the UI Library
+
+description: Use Azure Communication Services UI Library for Mobile native to get debug information.
+++++ Last updated : 11/23/2022
+zone_pivot_groups: acs-plat-web-ios-android
+
+#Customer intent: As a developer, I want to get debug information for troubleshooting
++
+# Troubleshooting over the Calling UI Library
+
+When you troubleshoot voice or video calls, you may be asked to provide a call ID; this ID is used to identify Communication Services calls.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/access-tokens.md)
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
+
+> [!NOTE]
+> For detailed documentation and quickstarts about the Web UI Library visit the [**Web UI Library Storybook**](https://azure.github.io/communication-ui-library).
+### You can access the following link to learn more
+- [Troubleshooting](https://azure.github.io/communication-ui-library/?path=/docs/troubleshooting--page)
+++
+Users can find the call ID via the action bar at the bottom of the call screen. For more information, see the [Troubleshooting guide](../../concepts/ui-library/ui-library-use-cases.md?branch=pr-en-us-217148&pivots=platform-mobile#troubleshooting-guide)
+
+## Next steps
+
+- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md)
+- [Learn more about UI Library Design Kit](../../quickstarts/ui-library/get-started-ui-kit.md)
communication-services Handle Email Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/handle-email-events.md
After you generate an event, you'll notice that `Email Delivery Report Received`
:::image type="content" source="./media/handle-email-events/email-engagementtracking-report-received.png" alt-text="Screenshot of the Azure Event Grid viewer that shows the Event Grid schema for an EMAIL engagement tracking report event.":::
+- `EngagementContext` refers to the link clicked when the engagementType is `Click`.
+- `UserAgent` refers to the User-Agent from which this email engagement event originated. For example, if the user interacted on Edge using a Windows 10 machine, the value might be: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246
+- `EngagementType` refers to the type of engagement, possible values are 'View' or 'Click'.
+ Learn more about the [event schemas and other eventing concepts](../../../event-grid/event-schema-communication-services.md). ## Clean up resources
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Last updated 06/30/2022
-zone_pivot_groups: acs-plat-android-web-ios
+zone_pivot_groups: acs-plat-web-ios-android-windows
# QuickStart: Add raw media access to your app + ::: zone pivot="platform-android" [!INCLUDE [Raw media with Android](./includes/raw-medi)] ::: zone-end
confidential-computing Confidential Node Pool Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-node-pool-aks.md
Last updated 10/04/2022 -+
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Use this example to create a custom parameter file for a Linux-based confidentia
az account set --subscription <subscription-id> ``` 1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant
+
+ For this step, you need to be a Global Admin or have the User Access Administrator RBAC role.
+
```azurecli Connect-AzureAD -Tenant "your tenant ID" New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
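If you prefer the Azure CLI to the Azure AD PowerShell cmdlets shown above, a minimal sketch that creates the same service principal (same application ID) might look like the following; treat it as an illustration rather than the documented procedure:

```azurecli
# Create the Confidential VM Orchestrator service principal in the current tenant
az ad sp create --id bf7b6499-ff71-4aa2-97a4-f372087be7f0
```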
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md
Title: Azure Logic Apps connectors overview
-description: Overview about connectors for workflows in Azure Logic Apps.
+ Title: Connectors overview
+description: Overview about connectors in Azure Logic Apps.
ms.suite: integration Previously updated : 10/25/2022- Last updated : 01/05/2023+ # About connectors in Azure Logic Apps
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022-+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Previously updated : 06/17/2022 Last updated : 12/09/2022
This article shows two ways to set up the workflow:
## Configure GitHub workflow
-### Create service principal for Azure authentication
+### Create credentials for Azure authentication
+
+# [Service principal](#tab/userlevel)
In the GitHub workflow, you need to supply Azure credentials to authenticate to the Azure CLI. The following example creates a service principal with the Contributor role scoped to the resource group for your container registry.
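The command itself isn't included in this excerpt; a hedged sketch of how such a service principal is typically created (the display name is illustrative, and `--sdk-auth` emits the JSON consumed by the `azure/login` action):

```azurecli
# Create a service principal with Contributor rights scoped to the resource group
az ad sp create-for-rbac \
  --name "github-aci-workflow" \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
  --sdk-auth
```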
Output is similar to:
Save the JSON output because it is used in a later step. Also, take note of the `clientId`, which you need to update the service principal in the next section.
-### Update service principal for registry authentication
+# [OpenID Connect](#tab/openid)
+
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
+
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
+
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
+
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
+
+ This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
+
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+
+ ```azurecli-interactive
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
+ ```
+
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
+
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az ad app federated-credential create --id <APPLICATION-OBJECT-ID> --parameters credential.json
+ ("credential.json" contains the following content)
+ {
+ "name": "<CREDENTIAL-NAME>",
+ "issuer": "https://token.actions.githubusercontent.com/",
+ "subject": "repo:organization/repository:ref:refs/heads/main",
+ "description": "Testing",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+ }
+ ```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+++
+### Update for registry authentication
+
+# [Service principal](#tab/userlevel)
Update the Azure service principal credentials to allow push and pull access to your container registry. This step enables the GitHub workflow to use the service principal to [authenticate with your container registry](../container-registry/container-registry-auth-service-principal.md) and to push and pull a Docker image.
az role assignment create \
--role AcrPush ```
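For reference, a complete form of that assignment might look like the following sketch, assuming `<registry-name>` and the service principal's `clientId` are substituted with your own values:

```azurecli
# Look up the registry resource ID, then grant the service principal push and pull rights
registryId=$(az acr show --name <registry-name> --query id --output tsv)

az role assignment create \
  --assignee <clientId> \
  --scope $registryId \
  --role AcrPush
```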
+# [OpenID Connect](#tab/openid)
+
+You need to give your application permission to access the Azure Container Registry and to create an Azure Container Instance.
+
+1. In Azure portal, go to [App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
+1. Search for your OpenID Connect app registration and copy the **Application (client) ID**.
+1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
+
+ ```azurecli
+ az role assignment create \
+ --assignee <appID> \
+ --role Contributor \
+ --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
+ ```
+++ ### Save credentials to GitHub repo
+# [Service principal](#tab/userlevel)
+ 1. In the GitHub UI, navigate to your forked repository and select **Security > Secrets and variables > Actions**. 1. Select **New repository secret** to add the following secrets:
az role assignment create \
|`REGISTRY_PASSWORD` | The `clientSecret` from the JSON output from the service principal creation | | `RESOURCE_GROUP` | The name of the resource group you used to scope the service principal |
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings > Security > Secrets and variables > Actions > New repository secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
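If you prefer to script this step, the GitHub CLI can set the same secrets; a sketch assuming `gh` is installed and authenticated, with placeholder values:

```bash
# Store the three values required by the azure/login action as repository secrets
gh secret set AZURE_CLIENT_ID --body "<application-client-id>" --repo <organization>/<repository>
gh secret set AZURE_TENANT_ID --body "<directory-tenant-id>" --repo <organization>/<repository>
gh secret set AZURE_SUBSCRIPTION_ID --body "<subscription-id>" --repo <organization>/<repository>
```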
+++ ### Create workflow file 1. In the GitHub UI, select **Actions**.
az role assignment create \
1. In **Edit new file**, paste the following YAML contents to overwrite the sample code. Accept the default filename `main.yml`, or provide a filename you choose. 1. Select **Start commit**, optionally provide short and extended descriptions of your commit, and select **Commit new file**.
+# [Service principal](#tab/userlevel)
+ ```yml on: [push] name: Linux_Container_Workflow
jobs:
location: 'west us' ```
+# [OpenID Connect](#tab/openid)
+
+```yml
+name: Linux_Container_Workflow_OIDC
+
+permissions:
+ id-token: write
+ contents: read
+
+on:
+ push:
+ branches:
+ - main
+ - release/*
+
+jobs:
+ build-and-deploy:
+ runs-on: ubuntu-latest
+ steps:
+ - name: 'Checkout GitHub Action'
+ uses: actions/checkout@main
+
+ - name: 'Login via Azure CLI'
+ uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - name: Build and push image
+ id: build-image
+ run: |
+ az acr build --image ${{ secrets.REGISTRY_LOGIN_SERVER }}/sampleapp:${{ github.sha }} --registry ${{ secrets.REGISTRY_LOGIN_SERVER }} --file "Dockerfile" .
+
+ - name: 'Deploy to Azure Container Instances'
+ uses: 'azure/aci-deploy@v1'
+ with:
+ resource-group: ${{ secrets.RESOURCE_GROUP }}
+ dns-name-label: ${{ secrets.RESOURCE_GROUP }}${{ github.run_number }}
+ image: ${{ secrets.REGISTRY_LOGIN_SERVER }}/sampleapp:${{ github.sha }}
+ registry-login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
+ registry-username: ${{ secrets.REGISTRY_USERNAME }}
+ registry-password: ${{ secrets.REGISTRY_PASSWORD }}
+ name: aci-sampleapp
+ location: 'west us'
+```
+++ ### Validate workflow After you commit the workflow file, the workflow is triggered. To review workflow progress, navigate to **Actions** > **Workflows**.
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Container Registry Image Tag Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-tag-version.md
A framework team ships version 1.0. They know they'll ship updates, including
* `:1` – a stable tag for the major version. `1` represents the "newest" or "latest" 1.* version.
* `:1.0` - a stable tag for version 1.0, allowing a developer to bind to updates of 1.0, and not be rolled forward to 1.1 when it is released.
-The team also uses the `:latest` tag, which points to the latest stable tag, no matter what the current major version is.
- When base image updates are available, or any type of servicing release of the framework, images with the stable tags are updated to the newest digest that represents the most current stable release of that version. In this case, both the major and minor tags are continually being serviced. From a base image scenario, this allows the image owner to provide serviced images.
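As an illustration of this scheme (image and registry names are hypothetical), a serviced build is retagged with both the major and major.minor stable tags before pushing:

```bash
# Point the stable tags at the newest serviced build and push them
docker tag framework:build-2023-01 myregistry.azurecr.io/framework:1
docker tag framework:build-2023-01 myregistry.azurecr.io/framework:1.0
docker push myregistry.azurecr.io/framework:1
docker push myregistry.azurecr.io/framework:1.0
```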
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
Run ID: ca8 was successful after 10s
Now quickly run the image you built and pushed to your registry. Here you use [az acr run][az-acr-run] to run the container command. In your container development workflow, this might be a validation step before you deploy the image, or you could include the command in a [multi-step YAML file][container-registry-tasks-multi-step].
-The following example uses `$Registry` to specify the registry where you run the command:
+The following example uses $Registry to specify the endpoint of the registry where you run the command:
```azurecli-interactive az acr run --registry myContainerRegistry008 \
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Previously updated : 11/04/2022 Last updated : 01/05/2023
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Using Azure Synapse Link, you can now build no-ETL HTAP solutions by directly li
## Features of analytical store
-When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the Change Feed or ETL to sync the data.
+When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container, in a storage account that is fully managed by Azure Cosmos DB, in an internal subscription. Customers don't need to spend time with storage administration. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the Change Feed or ETL to sync the data.
## Column store for analytical workloads on operational data
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-striim.md
In this section, you will configure the Azure Cosmos DB for Apache Cassandra acc
:::image type="content" source="media/migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
-1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses terminal in a MacOS, you can follow the similar instructions using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you have set for the virtual machine in the previous step.
+1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions using an SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you set for the virtual machine in the previous step.
:::image type="content" source="media/migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
An Azure Cosmos DB item can represent either a document in a collection, a row i
| Resource | Limit | | | | | Maximum size of an item | 2 MB (UTF-8 length of JSON representation) <sup>1</sup> |
-| Maximum length of partition key value | 2048 bytes |
+| Maximum length of partition key value | 2048 bytes (101 bytes if large partition-key is not enabled) |
| Maximum length of ID value | 1023 bytes | | Allowed characters for ID value | Service-side all Unicode characters except for '/' and '\\' are allowed. <br/>**WARNING: But for best interoperability we STRONGLY RECOMMEND to use only alpha-numerical ASCII characters in the ID value**. <br/>There are several known limitations in some versions of the Cosmos DB SDK, as well as connectors (ADF, Spark, Kafka etc.) and http-drivers/libraries etc. that can prevent successful processing when the ID value contains non-alphanumerical ASCII characters. So, to increase interoperability, please encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). - if you have to support non-alphanumerical ASCII characters in your service/application. | | Maximum number of properties per item | No practical limit |
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
The following graphic illustrates the bounded staleness consistency with musical
### Session consistency
-In session consistency, within a single client session reads are guaranteed to honor the consistent-prefix, monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. This assumes a single "writer" session or sharing the session token for multiple writers.
+In session consistency, within a single client session, reads are guaranteed to honor the read-your-writes and write-follows-reads guarantees. This assumes a single "writer" session or sharing the session token for multiple writers.
-Clients outside of the session performing writes will see the following guarantees:
+Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four replica set) in the local region, with asynchronous replication to all other regions.
-- Consistency for clients in same region for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients writing to a single region for an account with multiple write regions = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients writing to multiple regions for an account with multiple write regions = [Eventual](#eventual-consistency)-- Consistency for clients using the [Azure Cosmos DB integrated cache](integrated-cache.md) = [Eventual](#eventual-consistency)
+After every write operation, the client receives an updated Session Token from the server. These tokens are cached by the client and sent to the server for read operations in a specified region. If the replica against which the read operation is issued contains data for the specified token (or a more recent token), the requested data is returned. If the replica does not contain data for that session, the client will retry the request against another replica in the region. If necessary, the client will retry the read against additional available regions until data for the specified session token is retrieved.
- Session consistency is the most widely used consistency level for both single region as well as globally distributed applications. It provides write latencies, availability, and read throughput comparable to that of eventual consistency but also provides the consistency guarantees that suit the needs of applications written to operate in the context of a user. The following graphic illustrates the session consistency with musical notes. The "West US 2 writer" and the "West US 2 reader" are using the same session (Session A) so they both read the same data at the same time. Whereas the "Australia East" region is using "Session B" so, it receives data later but in the same order as the writes.
+> [!IMPORTANT]
+> In Session Consistency, the client's usage of a session token guarantees that data corresponding to an older session will never be read. However, if the client is using an older session token and more recent updates have been made to the database, the more recent version of the data will be returned despite an older session token being used. The Session Token is used as a minimum version barrier but not as a specific (possibly historical) version of the data to be retrieved from the database.
+
+If the client did not initiate a write to a physical partition, it will not contain a session token in its cache and reads to that physical partition will behave as reads with Eventual Consistency. Similarly, if the client is re-created, its cache of session tokens will also be re-created. Here too, read operations will follow the same behavior as Eventual Consistency until subsequent write operations rebuild the client's cache of session tokens.
+
+> [!IMPORTANT]
+> If Session Tokens are being passed from one client instance to another, the contents of the token should not be modified.
+
+ Session consistency is the most widely used consistency level for both single region as well as globally distributed applications. It provides write latencies, availability, and read throughput comparable to that of eventual consistency but also provides the consistency guarantees that suit the needs of applications written to operate in the context of a user. The following graphic illustrates the session consistency with musical notes. The "West US 2 writer" and the "East US 2 reader" are using the same session (Session A) so they both read the same data at the same time. Whereas the "Australia East" region is using "Session B" so, it receives data later but in the same order as the writes.
:::image type="content" source="media/consistency-levels/session-consistency.gif" alt-text="Illustration of session consistency level"::: ### Consistent prefix consistency
-In consistent prefix, updates made as single document writes see eventual consistency. Updates made as a batch within a transaction, are returned consistent to the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together.
-
-Assume two write operations are performed on documents Doc1 and Doc2, within transactions T1 and T2. When client does a read in any replica, the user will see either ΓÇ£Doc1 v1 and Doc2 v1ΓÇ¥ or ΓÇ£ Doc1 v2 and Doc2 v2ΓÇ¥, but never ΓÇ£Doc1 v1 and Doc2 v2ΓÇ¥ or ΓÇ£Doc1 v2 and Doc2 v1ΓÇ¥ for the same read or query operation.
+Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four-replica set) in the local region, with asynchronous replication to all other regions.
-Below are the consistency guarantees for Consistent Prefix within a transaction context (single document writes see eventual consistency):
+In consistent prefix, updates made as single document writes see eventual consistency.
+Updates made as a batch within a transaction, are returned consistent to the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together.
-- Consistency for clients in same region for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients writing to a single region for an account with multiple write region = [Consistent Prefix](#consistent-prefix-consistency)-- Consistency for clients writing to multiple regions for an account with multiple write region = [Eventual](#eventual-consistency)
+Assume two write operations are performed transactionally (all or nothing operations) on document Doc1 followed by document Doc2, within transactions T1 and T2. When a client does a read in any replica, the user will see either "Doc1 v1 and Doc2 v1" or "Doc1 v2 and Doc2 v2" or neither document if the replica is lagging, but never "Doc1 v1 and Doc2 v2" or "Doc1 v2 and Doc2 v1" for the same read or query operation.
-The following graphic illustrates the consistency prefix consistency with musical notes. In all the regions, the reads never see out of order writes:
+The following graphic illustrates the consistent prefix consistency with musical notes. In all the regions, the reads never see out-of-order writes for a transactional batch of writes:
:::image type="content" source="media/consistency-levels/consistent-prefix.gif" alt-text="Illustration of consistent prefix"::: ### Eventual consistency
-In eventual consistency, there's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.
+Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four replica set) in the local region, with asynchronous replication to all other regions.
+
+In Eventual consistency, the client will issue read requests against any one of the four replicas in the specified region. This replica may be lagging and could return stale or no data.
Eventual consistency is the weakest form of consistency because a client may read the values that are older than the ones it had read before. Eventual consistency is ideal where the application does not require any ordering guarantees. Examples include count of Retweets, Likes, or non-threaded comments. The following graphic illustrates the eventual consistency with musical notes.
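Whichever level an application relies on, the default is configured per account. As a hedged example (resource names are placeholders), the default consistency level can be switched with the Azure CLI:

```azurecli
az cosmosdb update \
    --resource-group <resource-group-name> \
    --name <account-name> \
    --default-consistency-level Session
```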
cosmos-db Distributed Nosql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distributed-nosql.md
One of the challenges when maintaining a database system is that many database e
## Distributed databases
-[Distributed databases](https://en.wikipedia.org/wiki/Distributed_database) refer to databases that scale across many different instances or locations. While many NoSQL databases are designed for scale, not all are necessarily distributed databases. Even more, many NoSQL databases require time and effort to distribute across redundant nodes for local-redundancy or globally for geo-redundancy. The planning, implementation, and networking requirements for a globally distribute database can be complex.
+[Distributed databases](https://en.wikipedia.org/wiki/Distributed_database) refer to databases that scale across many different instances or locations. While many NoSQL databases are designed for scale, not all are necessarily distributed databases. Even more, many NoSQL databases require time and effort to distribute across redundant nodes for local-redundancy or globally for geo-redundancy. The planning, implementation, and networking requirements for a globally distributed database can be complex.
## Azure Cosmos DB
-With a distributed database that is also a NoSQL database, high transactional workloads suddenly became easier to build and manage.[Azure Cosmos DB](introduction.md) is a database platform that offers distributed data APIs in both NoSQL and relational variants. Specifically, many of the NoSQL APIs offer various consistency options that allow you to fine tune the level of consistency or availability that meets your real-world application requirements. Your database could be configured to offer high consistency with tradeoffs to speed and availability. Similarly, your database could be configured to offer the best performance with predictable tradeoffs to consistency and latency of your replicated data. Azure Cosmos DB will automatically and dynamically distribute your data across local instances or globally. Azure Cosmos DB can also provide ACID guarantees and scale throughput to map to your applicationΓÇÖs requirements.
+With a distributed database that is also a NoSQL database, high transactional workloads suddenly became easier to build and manage. [Azure Cosmos DB](introduction.md) is a database platform that offers distributed data APIs in both NoSQL and relational variants. Specifically, many of the NoSQL APIs offer various consistency options that allow you to fine tune the level of consistency or availability that meets your real-world application requirements. Your database could be configured to offer high consistency with tradeoffs to speed and availability. Similarly, your database could be configured to offer the best performance with predictable tradeoffs to consistency and latency of your replicated data. Azure Cosmos DB will automatically and dynamically distribute your data across local instances or globally. Azure Cosmos DB can also provide ACID guarantees and scale throughput to map to your application's requirements.
## Next steps
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
To make sure you don't lose access to your encrypted data after accidental delet
If you create a new Azure Key Vault instance, enable these properties during creation: If you're using an existing Azure Key Vault instance, you can verify that these properties are enabled by looking at the **Properties** section on the Azure portal. If any of these properties isn't enabled, see the "Enabling soft-delete" and "Enabling Purge Protection" sections in one of the following articles:
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Title: Configure customer-managed keys for your Azure Cosmos DB account
-description: Learn how to configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault
+ Title: Configure customer-managed keys
+
+description: Store customer-managed keys in Azure Key Vault to use for encryption in your Azure Cosmos DB account with access control.
Previously updated : 07/20/2022 Last updated : 01/05/2023 ms.devlang: azurecli
ms.devlang: azurecli
Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft (**service-managed keys**). Optionally, you can choose to add a second layer of encryption with keys you manage (**customer-managed keys** or CMK). You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account. > [!NOTE] > Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
-## <a id="register-resource-provider"></a> Register the Azure Cosmos DB resource provider for your Azure subscription
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Register the Azure Cosmos DB resource provider
+
+If the **Microsoft.DocumentDB** resource provider isn't already registered, you should register this provider as a first step.
1. Sign in to the [Azure portal](https://portal.azure.com/), go to your Azure subscription, and select **Resource providers** under the **Settings** tab:
- :::image type="content" source="./media/how-to-setup-cmk/portal-rp.png" alt-text="Resource providers entry from the left menu":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/navigation-resource-providers.png" alt-text="Screenshot of the Resource providers option in the resource navigation menu.":::
1. Search for the **Microsoft.DocumentDB** resource provider. Verify if the resource provider is already marked as registered. If not, choose the resource provider and select **Register**:
- :::image type="content" source="./media/how-to-setup-cmk/portal-rp-register.png" alt-text="Registering the Microsoft.DocumentDB resource provider":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/resource-provider-registration.png" lightbox="media/how-to-setup-customer-managed-keys/resource-provider-registration.png" alt-text="Screenshot of the Register option for the Microsoft.DocumentDB resource provider.":::
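If you prefer the command line over the portal steps above, a minimal Azure CLI sketch performs the same registration:

```azurecli
# Register the resource provider and check its registration state
az provider register --namespace Microsoft.DocumentDB
az provider show --namespace Microsoft.DocumentDB --query registrationState
```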
## Configure your Azure Key Vault instance
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/o
Using customer-managed keys with Azure Cosmos DB requires you to set two properties on the Azure Key Vault instance that you plan to use to host your encryption keys: **Soft Delete** and **Purge Protection**.
-If you create a new Azure Key Vault instance, enable these properties during creation:
+1. If you create a new Azure Key Vault instance, enable these properties during creation:
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/key-vault-properties.png" lightbox="media/how-to-setup-customer-managed-keys/key-vault-properties.png" alt-text="Screenshot of Azure Key Vault options including soft delete and purge protection.":::
+1. If you're using an existing Azure Key Vault instance, you can verify that these properties are enabled by looking at the **Properties** section on the Azure portal. If any of these properties isn't enabled, see the "Enabling soft-delete" and "Enabling Purge Protection" sections in one of the following articles:
-If you're using an existing Azure Key Vault instance, you can verify that these properties are enabled by looking at the **Properties** section on the Azure portal. If any of these properties isn't enabled, see the "Enabling soft-delete" and "Enabling Purge Protection" sections in one of the following articles:
+ - [How to use soft-delete with PowerShell](../key-vault/general/key-vault-recovery.md)
+ - [How to use soft-delete with Azure CLI](../key-vault/general/key-vault-recovery.md)
-- [How to use soft-delete with PowerShell](../key-vault/general/key-vault-recovery.md)-- [How to use soft-delete with Azure CLI](../key-vault/general/key-vault-recovery.md)
+1. Once these settings have been enabled, on the access policy tab, you can choose your preferred permission model to use. Access policies are set by default, but Azure role-based access control is supported as well.
-## <a id="add-access-policy"></a> Add an access policy to your Azure Key Vault instance
+You must grant the necessary permissions to allow Azure Cosmos DB to use your encryption key. This step varies depending on whether the Azure Key Vault instance is using access policies or role-based access control.
+
+### Add an access policy
+
+In this variation, use the Azure Cosmos DB principal to create an access policy with the appropriate permissions.
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access Policies** from the left menu:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-ap.png" alt-text="Access policies from the left menu":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/navigation-access-policies.png" alt-text="Screenshot of the Access policies option in the resource navigation menu.":::
1. Select **+ Add Access Policy**. 1. Under the **Key permissions** drop-down menu, select **Get**, **Unwrap Key**, and **Wrap Key** permissions:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap-perm2.png" alt-text="Selecting the right permissions":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/add-access-policy-permissions.png" lightbox="media/how-to-setup-customer-managed-keys/add-access-policy-permissions.png" alt-text="Screenshot of access policy permissions including Get, Unwrap key, and Wrap key.":::
1. Under **Select principal**, select **None selected**.
-1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the application ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
+1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the application ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`).
- > [!NOTE]
- > This registers the Azure Cosmos DB first-party-identity in your Azure Key Vault access policy. To replace this first-party identity by your Azure Cosmos DB account managed identity, see [Using a managed identity in the Azure Key Vault access policy](#using-managed-identity).
+ > [!TIP]
+ > This registers the Azure Cosmos DB first-party-identity in your Azure Key Vault access policy. If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider.
-1. Choose **Select** at the bottom.
+1. Choose **Select** at the bottom.
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-add-ap.png" alt-text="Select the Azure Cosmos DB principal":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" lightbox="media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" alt-text="Screenshot of the Select principal option on the Add access policy page.":::
1. Select **Add** to add the new access policy. 1. Select **Save** on the Key Vault instance to save all changes.
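As an alternative to the portal steps above, the same access policy can be granted from the command line; a sketch that assumes the non-government application ID shown earlier and a placeholder vault name:

```azurecli
az keyvault set-policy \
    --name <key-vault-name> \
    --spn a232010e-820c-4083-83bb-3ace5fc29d0b \
    --key-permissions get unwrapKey wrapKey
```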
+### Adding role-based access control roles
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access control (IAM)** from the left menu and select **Grant access to this resource**.
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/navigation-access-control.png" alt-text="Screenshot of the Access control option in the resource navigation menu.":::
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-grant-access.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-grant-access.png" alt-text="Screenshot of the Grant access to this resource option on the Access control page.":::
+
+1. Search for the **Key Vault Administrator** role and assign it to yourself. This assignment is done by first searching the role name from the list and then clicking on the **Members** tab. Once on the tab, select the **User, group, or service principal** option from the radio buttons and then look up your Azure account. Once the account has been selected, the role can be assigned.
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/search-key-vault-admin-role.png" lightbox="media/how-to-setup-customer-managed-keys/search-key-vault-admin-role.png" alt-text="Screenshot of the Key vault administrator role in the search results.":::
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" alt-text="Screenshot of a role assignment on the Access control page.":::
+
+1. Then, assign the necessary permissions to the Cosmos DB principal. Like the last role assignment, go to the assignment page, but this time look for the **Key Vault Crypto Service Encryption User** role and, on the **Members** tab, look for the Cosmos DB principal. To find the principal, search for **Azure Cosmos DB** and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b`).
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/assign-permission-principal.png" lightbox="media/how-to-setup-customer-managed-keys/assign-permission-principal.png" alt-text="Screenshot of the Azure Cosmos DB principal being assigned to a permission.":::
+
+ > [!IMPORTANT]
+ > In the Azure Government region, the application ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`.
+
+1. Select **Review + assign** and the role will be assigned to Cosmos DB.
+
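The same role assignment can also be made from the command line; a hedged sketch that scopes the **Key Vault Crypto Service Encryption User** role to the vault, again using the non-government application ID shown earlier:

```azurecli
# Look up the key vault resource ID, then assign the role to the Azure Cosmos DB principal
keyVaultId=$(az keyvault show --name <key-vault-name> --query id --output tsv)

az role assignment create \
    --role "Key Vault Crypto Service Encryption User" \
    --assignee a232010e-820c-4083-83bb-3ace5fc29d0b \
    --scope $keyVaultId
```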
+## Validate that the roles have been set correctly
+
+Next, use the access control page to confirm that all roles have been configured correctly.
+
+1. Once the roles have been assigned, select **View access to this resource** on the Access control (IAM) page to verify that everything has been set correctly.
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-view-access-resource.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-view-access-resource.png" alt-text="Screenshot of the View access to resource option on the Access control page.":::
+
+1. On the page, set the scope to **this resource** and verify that you have the Key Vault Administrator role, and that the Cosmos DB principal has the Key Vault Crypto Service Encryption User role.
+
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/role-assignment-set-scope.png" lightbox="media/how-to-setup-customer-managed-keys/role-assignment-set-scope.png" alt-text="Screenshot of the scope adjustment option for a role assignment query.":::
+ ## Generate a key in Azure Key Vault
+Here, create a new key using Azure Key Vault and retrieve the unique identifier.
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Then, select **Keys** from the left menu:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-keys.png" alt-text="Keys entry from the left menu":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/navigation-keys.png" alt-text="Screenshot of the Keys option in the resource navigation menu.":::
1. Select **Generate/Import**, provide a name for the new key, and select an RSA key size. A minimum of 3072 is recommended for best security. Then select **Create**:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-gen.png" alt-text="Create a new key":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" lightbox="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" alt-text="Screenshot of the dialog to create a new key.":::
1. After the key is created, select the newly created key and then its current version. 1. Copy the key's **Key Identifier**, except the part after the last forward slash:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-keyid.png" alt-text="Copying the key's key identifier":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/key-identifier.png" lightbox="media/how-to-setup-customer-managed-keys/key-identifier.png" alt-text="Screenshot of the key identifier field and the copy action.":::
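If you'd rather create the key from the command line, a minimal sketch (vault and key names are placeholders):

```azurecli
az keyvault key create \
    --vault-name <key-vault-name> \
    --name <key-name> \
    --kty RSA \
    --size 3072
```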
+
+## Create a new Azure Cosmos DB account
-## <a id="create-a-new-azure-cosmos-account"></a>Create a new Azure Cosmos DB account
+Create a new Azure Cosmos DB account using the Azure portal, PowerShell, an Azure Resource Manager template, or the Azure CLI.
-### Using the Azure portal
+### [Azure portal](#tab/azure-portal)
When you create a new Azure Cosmos DB account from the Azure portal, choose **Customer-managed key** in the **Encryption** step. In the **Key URI** field, paste the URI/key identifier of the Azure Key Vault key that you copied from the previous step:
-### <a id="using-powershell"></a> Using Azure PowerShell
+### [PowerShell](#tab/azure-powershell)
When you create a new Azure Cosmos DB account with PowerShell:
When you create a new Azure Cosmos DB account with PowerShell:
> [!IMPORTANT] > You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys.
-```powershell
-$resourceGroupName = "myResourceGroup"
-$accountLocation = "West US 2"
-$accountName = "mycosmosaccount"
-
-$failoverLocations = @(
- @{ "locationName"="West US 2"; "failoverPriority"=0 }
-)
-
-$CosmosDBProperties = @{
- "databaseAccountOfferType"="Standard";
- "locations"=$failoverLocations;
- "keyVaultKeyUri" = "https://<my-vault>.vault.azure.net/keys/<my-key>";
+```azurepowershell
+# Variable for resource group name
+$RESOURCE_GROUP_NAME = "<resource-group-name>"
+
+# Variable for location
+$LOCATION = "<azure-region>"
+
+# Variable for account name
+$ACCOUNT_NAME = "<globally-unique-account-name>"
+
+# Variable for key URI in the key vault
+$KEY_VAULT_KEY_URI="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+
+$parameters = @{
+ ResourceType = "Microsoft.DocumentDb/databaseAccounts"
+ ApiVersion = "2019-12-12"
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Location = $LOCATION
+ Name = $ACCOUNT_NAME
+ PropertyObject = @{
+ databaseAccountOfferType = "Standard"
+ locations = @(
+ @{
+ locationName = $LOCATION
+ failoverPriority = 0
+ }
+ )
+ keyVaultKeyUri = $KEY_VAULT_KEY_URI
+ }
}-
-New-AzResource -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
- -ApiVersion "2019-12-12" -ResourceGroupName $resourceGroupName `
- -Location $accountLocation -Name $accountName -PropertyObject $CosmosDBProperties
+New-AzResource @parameters
``` After the account has been created, you can verify that customer-managed keys have been enabled by fetching the URI of the Azure Key Vault key:
-```powershell
-Get-AzResource -ResourceGroupName $resourceGroupName -Name $accountName `
- -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
- | Select-Object -ExpandProperty Properties `
+```azurepowershell
+$parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ ResourceType = "Microsoft.DocumentDb/databaseAccounts"
+}
+Get-AzResource @parameters
+ | Select-Object -ExpandProperty Properties
| Select-Object -ExpandProperty keyVaultKeyUri ```
-### Using an Azure Resource Manager template
+### [Azure Resource Manager template](#tab/arm-template)
When you create a new Azure Cosmos DB account through an Azure Resource Manager template:
When you create a new Azure Cosmos DB account through an Azure Resource Manager
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "accountName": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "keyVaultKeyUri": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "accountName": {
+ "type": "string"
},
- "resources":
- [
- {
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "name": "[parameters('accountName')]",
- "apiVersion": "2019-12-12",
- "kind": "GlobalDocumentDB",
- "location": "[parameters('location')]",
- "properties": {
- "locations": [
- {
- "locationName": "[parameters('location')]",
- "failoverPriority": 0,
- "isZoneRedundant": false
- }
- ],
- "databaseAccountOfferType": "Standard",
- "keyVaultKeyUri": "[parameters('keyVaultKeyUri')]"
- }
- }
- ]
+ "location": {
+ "type": "string"
+ },
+ "keyVaultKeyUri": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "name": "[parameters('accountName')]",
+ "apiVersion": "2019-12-12",
+ "kind": "GlobalDocumentDB",
+ "location": "[parameters('location')]",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "[parameters('location')]",
+ "failoverPriority": 0,
+ "isZoneRedundant": false
+ }
+ ],
+ "databaseAccountOfferType": "Standard",
+ "keyVaultKeyUri": "[parameters('keyVaultKeyUri')]"
+ }
+ }
+ ]
} ``` Deploy the template with the following PowerShell script:
-```powershell
-$resourceGroupName = "myResourceGroup"
-$accountName = "mycosmosaccount"
-$accountLocation = "West US 2"
-$keyVaultKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-key>"
-
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile "deploy.json" `
- -accountName $accountName `
- -location $accountLocation `
- -keyVaultKeyUri $keyVaultKeyUri
+```azurepowershell
+# Variable for resource group name
+$RESOURCE_GROUP_NAME = "<resource-group-name>"
+
+# Variable for location
+$LOCATION = "<azure-region>"
+
+# Variable for account name
+$ACCOUNT_NAME = "<globally-unique-account-name>"
+
+# Variable for key URI in the key vault
+$KEY_VAULT_KEY_URI="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+
+$parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ TemplateFile = "deploy.json"
+ accountName = $ACCOUNT_NAME
+ location = $LOCATION
+ keyVaultKeyUri = $KEY_VAULT_KEY_URI
+}
+New-AzResourceGroupDeployment @parameters
```
-### <a id="using-azure-cli"></a> Using the Azure CLI
+### [Azure CLI](#tab/azure-cli)
When you create a new Azure Cosmos DB account through the Azure CLI, pass the URI of the Azure Key Vault key that you copied earlier under the `--key-uri` parameter.
-```azurecli-interactive
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-keyVaultKeyUri = 'https://<my-vault>.vault.azure.net/keys/<my-key>'
+```azurecli
+# Variable for resource group name
+resourceGroupName="<resource-group-name>"
+
+# Variable for location
+location="<azure-region>"
+
+# Variable for account name
+accountName="<globally-unique-account-name>"
+
+# Variable for key URI in the key vault
+keyVaultKeyUri="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
az cosmosdb create \
- -n $accountName \
- -g $resourceGroupName \
- --locations regionName='West US 2' failoverPriority=0 isZoneRedundant=False \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location \
--key-uri $keyVaultKeyUri ``` After the account has been created, you can verify that customer-managed keys have been enabled by fetching the URI of the Azure Key Vault key:
-```azurecli-interactive
+```azurecli
az cosmosdb show \
- -n $accountName \
- -g $resourceGroupName \
- --query keyVaultKeyUri
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query "keyVaultKeyUri"
```
-## <a id="using-managed-identity"></a> Using a managed identity in the Azure Key Vault access policy
++
+## Using a managed identity in the Azure Key Vault access policy
This access policy ensures that your encryption keys can be accessed by your Azure Cosmos DB account. The access policy is implemented by granting access to a specific Azure Active Directory (AD) identity. Two types of identities are supported: - Azure Cosmos DB's first-party identity can be used to grant access to the Azure Cosmos DB service. - Your Azure Cosmos DB account's [managed identity](how-to-setup-managed-identity.md) can be used to grant access to your account specifically.
-### To use a system-assigned managed identity
+### [Azure Resource Manager template](#tab/arm-template)
-Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity, as described [above](#add-access-policy). Then:
+You can use ARM templates to assign a managed identity to an access policy.
-1. If the system-assigned managed identity wasn't configured during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
+Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity. Then:
-1. Add a new access policy to your Azure Key Vault account as described [above](#add-access-policy), but using the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
+1. If the system-assigned managed identity wasn't configured during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
-1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault. You have two options:
+1. Add the corresponding permissions to your Azure Key Vault account as described previously, but use the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
- - Specify the property in your account's Azure Resource Manager template:
+1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault.
- ```json
- {
- "type": " Microsoft.DocumentDB/databaseAccounts",
- "properties": {
- "defaultIdentity": "SystemAssignedIdentity",
- // ...
- },
- // ...
+ ```json
+ {
+ "type": " Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "defaultIdentity": "SystemAssignedIdentity",
+ // ...
+ },
+ // ...
+ }
+ ```
+
+1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
+
+You can also follow similar steps with a user-assigned managed identity.
+
+1. When creating the new access policy or role assignment in your Azure Key Vault account, use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity.
+
+1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault.
+
+ ```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
}
- ```
+ },
+ // ...
+ "properties": {
+    "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>",
+    "keyVaultKeyUri": "<key-vault-key-uri>",
+ // ...
+ }
+ }
+ ```
- - Update your account with the Azure CLI:
+### [Azure CLI](#tab/azure-cli)
- ```azurecli
- resourceGroupName='myResourceGroup'
- accountName='mycosmosaccount'
-
- az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity"
- ```
+You can use the Azure CLI to assign a managed identity to an access policy.
+
+Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity. Then:
+
+1. If the system-assigned managed identity wasn't configured during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
+
+1. Add the corresponding permissions to your Azure Key Vault account as described previously, but use the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
+
+1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault.
+
+ ```azurecli
+ # Variables for resource group and account names
+ resourceGroupName="<resource-group-name>"
+ accountName="<azure-cosmos-db-account-name>"
+
+ az cosmosdb update \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --default-identity "SystemAssignedIdentity"
+ ```
-1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
+1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
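
The following is a minimal Azure CLI sketch of the Key Vault steps in this list, assuming a vault that uses access policies rather than Azure RBAC; the vault name, the `get`/`unwrapKey`/`wrapKey` permission set, and the first-party object ID placeholder are assumptions to adjust for your environment.

```azurecli
# Variable for the key vault name (placeholder)
keyVaultName="<key-vault-name>"

# Read the principalId of the account's system-assigned managed identity
principalId=$(az cosmosdb show \
    --resource-group $resourceGroupName \
    --name $accountName \
    --query "identity.principalId" \
    --output tsv)

# Grant that identity the key permissions Azure Cosmos DB needs to wrap and unwrap the key
az keyvault set-policy \
    --name $keyVaultName \
    --object-id $principalId \
    --key-permissions get unwrapKey wrapKey

# Optionally, remove the access policy that was granted to the Azure Cosmos DB first-party identity
az keyvault delete-policy \
    --name $keyVaultName \
    --object-id "<azure-cosmos-db-first-party-object-id>"
```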
-### To use a user-assigned managed identity
+You can also follow similar steps with a user-assigned managed identity.
-1. When creating the new access policy in your Azure Key Vault account as described [above](#add-access-policy), use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity.
+1. When creating the new access policy or role assignment in your Azure Key Vault account, use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity.
-1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault. Options include:
+1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault.
- - Using an Azure Resource Manager template:
-
- ```json
- {
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<identity-resource-id>": {}
- }
- },
- // ...
- "properties": {
- "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
- "keyVaultKeyUri": "<key-vault-key-uri>"
- // ...
- }
- }
- ```
+ ```azurecli
+ # Variables for resource group and account name
+ resourceGroupName="<resource-group-name>"
+ accountName="<azure-cosmos-db-account-name>"
- - Using the Azure CLI:
+ # Variable for location
+ location="<azure-region>"
- ```azurecli
- resourceGroupName='myResourceGroup'
- accountName='mycosmosaccount'
- keyVaultKeyUri = 'https://<my-vault>.vault.azure.net/keys/<my-key>'
-
- az cosmosdb create \
- -n $accountName \
- -g $resourceGroupName \
- --key-uri $keyVaultKeyUri
- --assign-identity <identity-resource-id>
- --default-identity "UserAssignedIdentity=<identity-resource-id>"
- ```
-
-## Use CMK with continuous backup
+ # Variable for key URI in the key vault
+ keyVaultKeyUri="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+
+ # Variables for identities
+ identityId="<identity-resource-id>"
+
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location \
+   --key-uri $keyVaultKeyUri \
+ --assign-identity $identityId \
+ --default-identity "UserAssignedIdentity=$identityId"
+ ```
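
If you don't already have a user-assigned managed identity to reference, here's a sketch of creating one and populating the `identityId` variable used above; the identity name is a placeholder.

```azurecli
# Variable for the managed identity name (placeholder)
identityName="<managed-identity-name>"

# Create the user-assigned managed identity
az identity create \
    --resource-group $resourceGroupName \
    --name $identityName

# Capture its resource ID to pass to --assign-identity and --default-identity
identityId=$(az identity show \
    --resource-group $resourceGroupName \
    --name $identityName \
    --query "id" \
    --output tsv)
```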
+
+### [PowerShell / Azure portal](#tab/azure-powershell+azure-portal)
+
+Not available
+++
+## Use customer-managed keys with continuous backup
You can create a continuous backup account by using the Azure CLI or an Azure Resource Manager template.
-Currently, only user-assigned managed identity is supported for creating continuous backup accounts.
+Currently, only user-assigned managed identity is supported for creating continuous backup accounts.
-Once the account has been created, user can update the identity to system-assigned managed identity using these instructions [Configure customer-managed keys for your Azure Cosmos DB account](./how-to-setup-customer-managed-keys.md#to-use-a-system-assigned-managed-identity).
+Once the account has been created, you can update the identity to system-assigned managed identity.
> [!NOTE] > System-assigned identity and continuous backup mode is currently under Public Preview and may change in the future. Alternatively, user can also create a system identity with periodic backup mode first, then migrate the account to Continuous backup mode using these instructions [Migrate an Azure Cosmos DB account from periodic to continuous backup mode](./migrate-continuous-backup.md) -
-### To create a continuous backup account by using the Azure CLI
+### [Azure CLI](#tab/azure-cli)
```azurecli
-resourceGroupName='myResourceGroup'
-accountName='mycosmosaccount'
-keyVaultKeyUri = 'https://<my-vault>.vault.azure.net/keys/<my-key>'
+# Variables for resource group and account name
+resourceGroupName="<resource-group-name>"
+accountName="<azure-cosmos-db-account-name>"
+
+# Variable for location
+location="<azure-region>"
+
+# Variable for key URI in the key vault
+keyVaultKeyUri="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+
+# Variables for identities
+identityId="<identity-resource-id>"
az cosmosdb create \
- -n $accountName \
- -g $resourceGroupName \
- --key-uri $keyVaultKeyUri \
- --locations regionName=<Location> \
- --assign-identity <identity-resource-id> \
- --default-identity "UserAssignedIdentity=<identity-resource-id>" \
- --backup-policy-type Continuous
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --locations regionName=$location \
+   --key-uri $keyVaultKeyUri \
+ --assign-identity $identityId \
+ --default-identity "UserAssignedIdentity=$identityId" \
+ --backup-policy-type "Continuous"
```
-### To create a continuous backup account by using an Azure Resource Manager template
+### [Azure Resource Manager template](#tab/arm-template)
When you create a new Azure Cosmos DB account through an Azure Resource Manager template:
When you create a new Azure Cosmos DB account through an Azure Resource Manager
> You must set the `locations` property explicitly for the account to be successfully created with customer-managed keys as shown in the preceding example. ```json
- {
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<identity-resource-id>": {}
- }
+{
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
+ // ...
+ "properties": {
+ "backupPolicy": {
+ "type": "Continuous"
},
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>""keyVaultKeyUri": "<key-vault-key-uri>"
// ...
- "properties": {
- "backupPolicy": { "type": "Continuous" },
- "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
- "keyVaultKeyUri": "<key-vault-key-uri>"
- // ...
- }
+ }
} ```
-### To restore a continuous account that is configured with managed identity using CLI
+### [PowerShell / Azure portal](#tab/azure-powershell+azure-portal)
+
+Not available
++
-#### Restore source account with system-assigned identity
+## Restore a continuous account that is configured with managed identity
+
+System identity is tied to one specific account and can't be reused in another account. So, a new user-assigned identity is required during the restore process.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI to restore a continuous account that is already configured using a system-assigned or user-assigned managed identity.
> [!NOTE] > This feature is currently under Public Preview and requires Cosmos DB CLI Extension version 0.20.0 or higher.
-System Identity is tied to one specific account and cannot be reused in another account. So, a new user-assigned identity is required during the restore process. This newly created user assigned identity is only needed during the restore and can be cleaned up once the restore has completed.
-
+The newly created user-assigned identity is only needed during the restore and can be cleaned up once the restore has completed. First, restore a source account that has a system-assigned identity:
-1. Create a new user-assigned identity (or use an existing one) for the restore process.
+1. Create a new user-assigned identity (or use an existing one) for the restore process.
-1. Create the new access policy in your Azure Key Vault account as described above, use the Object ID of the managed identity from step 1.
+1. Create the new access policy in your Azure Key Vault account as described previously, using the Object ID of the managed identity from step 1.
-1. Trigger the restore using Azure CLI:
+1. Trigger the restore using Azure CLI:
-```azurecli
-az cosmosdb restore \
- --target-database-account-name {targetAccountName} \
- --account-name {sourceAccountName} \
- --restore-timestamp {timestampInUTC} \
- --resource-group {resourceGroupName} \
- --location {locationName} \
- --assign-identity {userIdentity} \
- --default-identity {defaultIdentity}
-```
-1. Once the restore has completed, the target (restored) account will have the user-assigned identity. If desired, user can update the account to use System-Assigned managed identity.
+ ```azurecli
+ # Variables for resource group and account names
+ resourceGroupName="<resource-group-name>"
+ sourceAccountName="<source-azure-cosmos-db-account-name>"
+ targetAccountName="<target-azure-cosmos-db-account-name>"
+
+ # Variable for location
+ location="<azure-region>"
+
+ # Variable for key URI in the key vault
+ keyVaultKeyUri="https://<key-vault-name>.vault.azure.net/keys/<key-name>"
+
+ # Variables for identities
+ identityId="<identity-resource-id>"
+
+ # Variable for timestamp to restore to
+ timestamp="<timestamp-in-utc>"
+
+ az cosmosdb restore \
+ --resource-group $resourceGroupName \
+ --account-name $sourceAccountName \
+ --target-database-account-name $targetAccountName \
+ --locations regionName=$location \
+ --restore-timestamp $timestamp \
+ --assign-identity $identityId \
+ --default-identity "UserAssignedIdentity=$identityId" \
+ ```
+
+1. Once the restore has completed, the target (restored) account will have the user-assigned identity. If desired, you can update the account to use a system-assigned managed identity, as shown in the sketch below.
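
A sketch of that optional switch follows; it assumes your CLI version includes the `az cosmosdb identity` commands, and you still need to grant the new `principalId` access in Key Vault before changing the default identity.

```azurecli
# Enable a system-assigned managed identity on the restored (target) account
# (assumes the `az cosmosdb identity` command group is available in your CLI version)
az cosmosdb identity assign \
    --resource-group $resourceGroupName \
    --name $targetAccountName

# After granting the new principalId access in Key Vault, switch the default identity
az cosmosdb update \
    --resource-group $resourceGroupName \
    --name $targetAccountName \
    --default-identity "SystemAssignedIdentity"
```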
-#### Restore source account with user-assigned identity
+By default, when you trigger a restore for an account with user-assigned managed identity, the user-assigned identity will be passed to the target account automatically.
-By default, when user trigger a restore for an account with user-assigned managed identity, the user-assigned identity will be passed to the target account automatically.
+If desired, the user can also trigger a restore using a different user-assigned identity than the source account by specifying it in the restore parameters.
-If desired, the user can also trigger a restore using a different user-assigned identity than the source account by specifying it in the restore parameters. Please follow the steps in [Restore source account with system-assigned identity](./how-to-setup-customer-managed-keys.md#restore-source-account-with-system-assigned-identity)
+### [PowerShell / Azure Resource Manager template / Azure portal](#tab/azure-powershell+arm-template+azure-portal)
+
+Not available
++ ## Customer-managed keys and double encryption
Double encryption only applies to the main Azure Cosmos DB transactional storage
- [Azure Synapse Link](./synapse-link.md) - [Continuous backups with point-in-time restore](./continuous-backup-restore-introduction.md)
-
+ ## Key rotation Rotating the customer-managed key used by your Azure Cosmos DB account can be done in two ways. - Create a new version of the key currently used from Azure Key Vault:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-rot.png" alt-text="Screenshot of the New Version option in the Versions page of the Azure portal.":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/new-version.png" lightbox="media/how-to-setup-customer-managed-keys/new-version.png" alt-text="Screenshot of the New Version option in the Versions page of the Azure portal.":::
- Swap the key currently used with a different one by updating the key URI on your account. From the Azure portal, go to your Azure Cosmos DB account and select **Data Encryption** from the left menu:
- :::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="Screenshot of the Data Encryption menu option in the Azure portal.":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/navigation-data-encryption.png" alt-text="Screenshot of the Data Encryption option on the resource navigation menu.":::
Then, replace the **Key URI** with the new key you want to use and select **Save**:
- :::image type="content" source="./media/how-to-setup-cmk/portal-key-swap.png" alt-text="Screenshot of the Save option in the Key page of the Azure portal.":::
+ :::image type="content" source="media/how-to-setup-customer-managed-keys/save-key-change.png" lightbox="media/how-to-setup-customer-managed-keys/save-key-change.png" alt-text="Screenshot of the Save option on the Key page.":::
Here's how to achieve the same result in PowerShell:
- ```powershell
- $resourceGroupName = "myResourceGroup"
- $accountName = "mycosmosaccount"
- $newKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-new-key>"
+ ```azurepowershell
+ # Variable for resource group name
+ $RESOURCE_GROUP_NAME = "<resource-group-name>"
+
+ # Variable for account name
+ $ACCOUNT_NAME = "<globally-unique-account-name>"
- $account = Get-AzResource -ResourceGroupName $resourceGroupName -Name $accountName `
- -ResourceType "Microsoft.DocumentDb/databaseAccounts"
+ # Variable for new key URI in the key vault
+ $NEW_KEY_VAULT_KEY_URI="https://<key-vault-name>.vault.azure.net/keys/<new-key-name>"
- $account.Properties.keyVaultKeyUri = $newKeyUri
+ $parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ Name = $ACCOUNT_NAME
+ ResourceType = "Microsoft.DocumentDb/databaseAccounts"
+ }
+ $ACCOUNT = Get-AzResource @parameters
+
+ $ACCOUNT.Properties.keyVaultKeyUri = $NEW_KEY_VAULT_KEY_URI
- $account | Set-AzResource -Force
+ $ACCOUNT | Set-AzResource -Force
``` The previous key or key version can be disabled after the [Azure Key Vault audit logs](../key-vault/general/logging.md) don't show activity from Azure Cosmos DB on that key or key version anymore. No more activity should take place on the previous key or key version after 24 hours of key rotation.
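
A comparable Azure CLI sketch for swapping the key is shown below. Whether `az cosmosdb update` accepts `--key-uri` depends on your CLI version, so treat that flag as an assumption and verify it with `az cosmosdb update --help` first.

```azurecli
# Variables (all values are placeholders)
resourceGroupName="<resource-group-name>"
accountName="<azure-cosmos-db-account-name>"
newKeyVaultKeyUri="https://<key-vault-name>.vault.azure.net/keys/<new-key-name>"

# Point the account at the new key (verify that --key-uri is supported by your CLI version)
az cosmosdb update \
    --resource-group $resourceGroupName \
    --name $accountName \
    --key-uri $newKeyVaultKeyUri
```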
-
+ ## Error handling If there are any errors with customer-managed keys in Azure Cosmos DB, Azure Cosmos DB returns the error details along with an HTTP substatus code in the response. You can use the HTTP substatus code to debug the root cause of the issue. See the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article to get the list of supported HTTP substatus codes. ## Frequently asked questions
+Included here are frequently asked questions about setting up customer-managed keys in Azure Cosmos DB.
+ ### Are there more charges to enable customer-managed keys? No, there's no charge to enable this feature. ### How do customer-managed keys influence capacity planning?
-[Request Units](./request-units.md) consumed by your database operations see an increase to reflect the extra processing required to perform encryption and decryption of your data when using customer-managed keys. The extra RU consumption may lead to slightly higher utilization of your provisioned capacity. Use the table below for guidance:
+[Request Units](./request-units.md) consumed by your database operations see an increase to reflect the extra processing required to perform encryption and decryption of your data when using customer-managed keys. The extra RU consumption may lead to slightly higher utilization of your provisioned capacity. Use this table for guidance:
| Operation type | Request Unit increase | ||| | Point-reads (fetching items by their ID) | + 5% per operation |
-| Any write operation | + 6% per operation <br/> Approximately + 0.06 RU per indexed property |
+| Any write operation | + 6% per operation &vert; Approximately + 0.06 RU per indexed property |
| Queries, reading change feed, or conflict feed | + 15% per operation | ### What data gets encrypted with the customer-managed keys?
This feature is currently available only for new accounts.
### Is it possible to use customer-managed keys with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
-Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. For a how-to guide on how to enable managed identity and use it in an access policy, see [access Azure Key Vault from Azure Cosmos DB using a managed identity](access-key-vault-managed-identity.md).
+Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must use your Azure Cosmos DB account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. For a how-to guide on how to enable managed identity and use it in an access policy, see [access Azure Key Vault from Azure Cosmos DB using a managed identity](access-key-vault-managed-identity.md).
### Is there a plan to support finer granularity than account-level keys?
Not currently, but container-level keys are being considered.
From the Azure portal, go to your Azure Cosmos DB account and watch for the **Data Encryption** entry in the left menu; if this entry exists, customer-managed keys are enabled on your account:
-You can also programmatically fetch the details of your Azure Cosmos DB account and look for the presence of the `keyVaultKeyUri` property. See above for ways to do that [in PowerShell](#using-powershell) and [using the Azure CLI](#using-azure-cli).
+You can also programmatically fetch the details of your Azure Cosmos DB account and look for the presence of the `keyVaultKeyUri` property.
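
For example, here's one way to check with the Azure CLI, assuming the account details surface the key URI as a top-level `keyVaultKeyUri` property:

```azurecli
# Returns the key URI when customer-managed keys are enabled, otherwise nothing
az cosmosdb show \
    --resource-group "<resource-group-name>" \
    --name "<azure-cosmos-db-account-name>" \
    --query "keyVaultKeyUri"
```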
### How do customer-managed keys affect periodic backups? Azure Cosmos DB takes [regular and automatic backups](./online-backup-and-restore.md) of the data stored in your account. This operation backs up the encrypted data. The following conditions are necessary to successfully restore a periodic backup:+ - The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This condition requires that no revocation was made and the version of the key that was used at the time of the backup is still enabled.-- If you [used a system-assigned managed identity in the access policy](#to-use-a-system-assigned-managed-identity), temporarily [grant access to the Azure Cosmos DB first-party identity](#add-access-policy) before restoring your data. This requirement exists because a system-assigned managed identity is specific to an account and can't be reused in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
+- If you used a system-assigned managed identity in the access policy, temporarily grant access to the Azure Cosmos DB first-party identity before restoring your data. This requirement exists because a system-assigned managed identity is specific to an account and can't be reused in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
### How do customer-managed keys affect continuous backups?
-Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must [use a user-assigned managed identity](#to-use-a-user-assigned-managed-identity) in the Key Vault access policy. Azure Cosmos DB first-party identities or system-assigned managed identities aren't currently supported on accounts using continuous backups.
+Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must use a user-assigned managed identity in the Key Vault access policy. Azure Cosmos DB first-party identities or system-assigned managed identities aren't currently supported on accounts using continuous backups.
The following conditions are necessary to successfully perform a point-in-time restore:+ - The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This requirement means that no revocation was made and the version of the key that was used at the time of the backup is still enabled. - You must ensure that the user-assigned managed identity originally used on the source account is still declared in the Key Vault access policy.
The following conditions are necessary to successfully perform a point-in-time r
Key revocation is done by disabling the latest version of the key: Alternatively, to revoke all keys from an Azure Key Vault instance, you can delete the access policy granted to the Azure Cosmos DB principal: ### What operations are available after a customer-managed key is revoked?
The only operation possible when the encryption key has been revoked is account
## Next steps -- Learn more about [data encryption in Azure Cosmos DB](./database-encryption-at-rest.md).-- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).
+- Learn more about [data encryption in Azure Cosmos DB](database-encryption-at-rest.md).
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector-sink.md
You can learn more about change feed in Azure Cosmo DB with the following docs:
* [Reading from change feed](read-change-feed.md) You can learn more about bulk operations in V4 Java SDK with the following docs:
-* [Perform bulk operations on Azure Cosmos DB data](https://learn.microsoft.com/azure/cosmos-db/nosql/bulk-executor-java)
+* [Perform bulk operations on Azure Cosmos DB data](/azure/cosmos-db/nosql/bulk-executor-java)
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-data-striim.md
In this section, you will configure the Azure Cosmos DB for NoSQL account as the
:::image type="content" source="media/migrate-data-striim/get-ssh-url.png" alt-text="Get the SSH URL":::
-1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses terminal in a MacOS, you can follow the similar instructions using PuTTY or a different SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you have set for the virtual machine in the previous step.
+1. Open a new terminal window and run the SSH command you copied from the Azure portal. This article uses the terminal on macOS; you can follow similar instructions using an SSH client on a Windows machine. When prompted, type **yes** to continue and enter the **password** you have set for the virtual machine in the previous step.
:::image type="content" source="media/migrate-data-striim/striim-vm-connect.png" alt-text="Connect to Striim VM":::
cosmos-db Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/keywords.md
The results are:
] ```
-Queries with an aggregate system function and a subquery with `DISTINCT` are only supported in specific SDK versions. For example, queries with the following shape are only supported in the below specific SDK versions:
+Queries with an aggregate system function and a subquery with `DISTINCT` are only supported in specific SDK versions. This is because they require coordination of the results returned from every continuation to create an exact result set. For example, queries with the following shape are only supported in the below specific SDK versions:
```sql SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f)
SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f)
|Node.js SDK|Unsupported| |Python SDK|Unsupported|
-There are some additional restrictions on queries with an aggregate system function and a subquery with `DISTINCT`. The below queries are unsupported:
+There are some additional restrictions on nested queries with `DISTINCT` regardless of SDK version. In these cases, there may be incorrect and inconsistent results because the query would require extra coordination. The below queries are unsupported:
|**Restriction**|**Example**| |-|-|
+|Nested Subquery|SELECT VALUE f FROM (SELECT DISTINCT c.year FROM c) f|
|WHERE clause in the outer query|SELECT COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName WHERE lastName = "Smith"| |ORDER BY clause in the outer query|SELECT VALUE COUNT(1) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName ORDER BY lastName| |GROUP BY clause in the outer query|SELECT COUNT(1) as annualCount, d.year FROM (SELECT DISTINCT c.year, c.id FROM c) AS d GROUP BY d.year|
-|Nested subquery|SELECT COUNT(1) FROM (SELECT y FROM (SELECT VALUE StringToNumber(SUBSTRING(d.date, 0, 4 FROM (SELECT DISTINCT c.date FROM c) d) AS y WHERE y > 2012)|
+|Nested subquery with aggregate system function|SELECT COUNT(1) FROM (SELECT y FROM (SELECT VALUE StringToNumber(SUBSTRING(d.date, 0, 4 FROM (SELECT DISTINCT c.date FROM c) d) AS y WHERE y > 2012)|
|Multiple aggregations|SELECT COUNT(1) as AnnualCount, SUM(d.sales) as TotalSales FROM (SELECT DISTINCT c.year, c.sales, c.id FROM c) AS d| |COUNT() must have 1 as a parameter|SELECT COUNT(lastName) FROM (SELECT DISTINCT VALUE c.lastName FROM c) AS lastName|
cosmos-db String Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/string-functions.md
The below scalar functions perform an operation on a string input value and retu
| [StringToBoolean](stringtoboolean.md) | Full scan | Full scan | | | [StringToNull](stringtonull.md) | Full scan | Full scan | | | [StringToNumber](stringtonumber.md) | Full scan | Full scan | |
+| [StringToObject](stringtoobject.md) | Full scan | Full scan | |
+| [SUBSTRING](substring.md) | Full scan | Full scan | |
+| [ToString](tostring.md) | Full scan | Full scan | |
+| [TRIM](trim.md) | Full scan | Full scan | |
+| [UPPER](upper.md) | Full scan | Full scan | |
Learn about about [index usage](../../index-overview.md#index-usage) in Azure Cosmos DB.
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/stored-procedures-triggers-udfs.md
Writing stored procedures, triggers, and user-defined functions (UDFs) in JavaSc
* **Atomic transactions:** Azure Cosmos DB database operations that are performed within a single stored procedure or a trigger are atomic. This atomic functionality lets an application combine related operations into a single batch, so that either all of the operations succeed or none of them succeed.
-* **Performance:** The JSON data is intrinsically mapped to the JavaScript language type system. This mapping allows for a number of optimizations like lazy materialization of JSON documents in the buffer pool and making them available on-demand to the executing code. There are other performance benefits associated with shipping business logic to the database, which includes:
+* **Performance:** The JSON data is intrinsically mapped to the JavaScript language type system. This mapping allows for a number of optimizations like lazy materialization of JSON documents in the buffer pool and making them available on-demand to the executing code. There are other performance benefits associated with shifting business logic to the database, which includes:
* *Batching:* You can group operations like inserts and submit them in bulk. The network traffic latency costs and the store overhead to create separate transactions are reduced significantly.
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-changefeed-functions.md
Previously updated : 04/14/2022 Last updated : 01/03/2023
This article covers common issues, workarounds, and diagnostic steps, when you u
## Dependencies
-The Azure Functions trigger and bindings for Azure Cosmos DB depend on the extension packages over the base Azure Functions runtime. Always keep these packages updated, as they might include fixes and new features that might address any potential issues you may encounter:
-
-* For Azure Functions V2, see [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB).
-* For Azure Functions V1, see [Microsoft.Azure.WebJobs.Extensions.DocumentDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DocumentDB).
+The Azure Functions trigger and bindings for Azure Cosmos DB depend on the extension package [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) over the base Azure Functions runtime. Always keep these packages updated, as they might include fixes and new features that might address any potential issues you may encounter.
This article will always refer to Azure Functions V2 whenever the runtime is mentioned, unless explicitly specified.
This article will always refer to Azure Functions V2 whenever the runtime is men
The key functionality of the extension package is to provide support for the Azure Functions trigger and bindings for Azure Cosmos DB. It also includes the [Azure Cosmos DB .NET SDK](sdk-dotnet-core-v2.md), which is helpful if you want to interact with Azure Cosmos DB programmatically without using the trigger and bindings.
-If want to use the Azure Cosmos DB SDK, make sure that you don't add to your project another NuGet package reference. Instead, **let the SDK reference resolve through the Azure Functions' Extension package**. Consume the Azure Cosmos DB SDK separately from the trigger and bindings
+If you want to use the Azure Cosmos DB SDK, make sure that you don't add another NuGet package reference to your project. Instead, **let the SDK reference resolve through the Azure Functions' Extension package**. Consume the Azure Cosmos DB SDK separately from the trigger and bindings.
-Additionally, if you are manually creating your own instance of the [Azure Cosmos DB SDK client](./sdk-dotnet-core-v2.md), you should follow the pattern of having only one instance of the client [using a Singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This process avoids the potential socket issues in your operations.
+Additionally, if you're manually creating your own instance of the [Azure Cosmos DB SDK client](./sdk-dotnet-core-v2.md), you should follow the pattern of having only one instance of the client [using a Singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This process avoids the potential socket issues in your operations.
## Common scenarios and workarounds ### Azure Function fails with error message collection doesn't exist
-Azure Function fails with error message "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') does not exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'"
+Azure Function fails with error message "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') doesn't exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'"
-This means that either one or both of the Azure Cosmos DB containers required for the trigger to work do not exist or are not reachable to the Azure Function. **The error itself will tell you which Azure Cosmos DB database and container is the trigger looking for** based on your configuration.
+This means that either one or both of the Azure Cosmos DB containers required for the trigger to work don't exist or aren't reachable to the Azure Function. **The error itself will tell you which Azure Cosmos DB database and container is the trigger looking for** based on your configuration.
1. Verify the `ConnectionStringSetting` attribute and that it **references a setting that exists in your Azure Function App**. The value on this attribute shouldn't be the Connection String itself, but the name of the Configuration Setting.
-2. Verify that the `databaseName` and `collectionName` exist in your Azure Cosmos DB account. If you are using automatic value replacement (using `%settingName%` patterns), make sure the name of the setting exists in your Azure Function App.
+2. Verify that the `databaseName` and `collectionName` exist in your Azure Cosmos DB account. If you're using automatic value replacement (using `%settingName%` patterns), make sure the name of the setting exists in your Azure Function App.
3. If you don't specify a `LeaseCollectionName/leaseCollectionName`, the default is "leases". Verify that such container exists. Optionally you can set the `CreateLeaseCollectionIfNotExists` attribute in your Trigger to `true` to automatically create it.
-4. Verify your [Azure Cosmos DB account's Firewall configuration](../how-to-configure-firewall.md) to see to see that it's not it's not blocking the Azure Function.
+4. Verify your [Azure Cosmos DB account's Firewall configuration](../how-to-configure-firewall.md) to see that it's not blocking the Azure Function.
### Azure Function fails to start with "Shared throughput collection should have a partition key"
-The previous versions of the Azure Cosmos DB Extension did not support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). To resolve this issue, update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
+The previous versions of the Azure Cosmos DB Extension didn't support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). To resolve this issue, update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
### Azure Function fails to start with "PartitionKey must be supplied for this operation."
-This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
+This error means that you're currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you're currently running on Azure Functions V1, you'll need to upgrade to Azure Functions V2.
-### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
+### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] can't be authorized by AAD token in data plane"
-This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You can't use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
-This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
+This error means that your current leases container is partitioned, but the partition key path isn't `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
-### You see a "Value cannot be null. Parameter name: o" in your Azure Functions logs when you try to Run the Trigger
+### You see a "Value can't be null. Parameter name: o" in your Azure Functions logs when you try to Run the Trigger
-This issue appears if you are using the Azure portal and you try to select the **Run** button on the screen when inspecting an Azure Function that uses the trigger. The trigger does not require for you to select Run to start, it will automatically start when the Azure Function is deployed. If you want to check the Azure Function's log stream on the Azure portal, just go to your monitored container and insert some new items, you will automatically see the Trigger executing.
+This issue appears if you're using the Azure portal and you try to select the **Run** button on the screen when inspecting an Azure Function that uses the trigger. The trigger doesn't require you to select **Run** to start; it starts automatically when the Azure Function is deployed. If you want to check the Azure Function's log stream on the Azure portal, just go to your monitored container and insert some new items; you'll automatically see the Trigger executing.
### My changes take too long to be received
This scenario can have multiple causes and all of them should be checked:
If it's the latter, there could be some delay between the changes being stored and the Azure Function picking them up. This is because internally, when the trigger checks for changes in your Azure Cosmos DB container and finds none pending to be read, it will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds). 3. Your Azure Cosmos DB container might be [rate-limited](../request-units.md). 4. You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
-5. The speed at which your Trigger receives new changes is dictated by the speed at which you are processing them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md), if your Function is slow that will increase the time it takes for your Trigger to get new changes. If you see a recent increase in Duration, there could be a recent code change that might affect it. If the speed at which you are receiving operations on your Azure Cosmos DB container is faster than the speed of your Trigger, you will keep lagging behind. You might want to investigate in the Function's code, what is the most time consuming operation and how to optimize it.
+5. The speed at which your Trigger receives new changes is dictated by the speed at which you're processing them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md); if your Function is slow, that will increase the time it takes for your Trigger to get new changes. If you see a recent increase in Duration, there could be a recent code change that might affect it. If the speed at which you're receiving operations on your Azure Cosmos DB container is faster than the speed of your Trigger, you'll keep lagging behind. You might want to investigate which operation in the Function's code is the most time consuming and how to optimize it.
### Some changes are repeated in my Trigger
-The concept of a "change" is an operation on a document. The most common scenarios where events for the same document is received are:
+The concept of a "change" is an operation on a document. The most common scenarios where events for the same document are received are:
* The account is using Eventual consistency. While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next). * The document is being updated. The Change Feed can contain multiple operations for the same documents, if that document is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If they don't match, these are different changes over the same document.
-* If you are identifying documents just by `id`, remember that the unique identifier for a document is the `id` and its partition key (there can be two documents with the same `id` but different partition key).
+* If you're identifying documents just by `id`, remember that the unique identifier for a document is the `id` and its partition key (there can be two documents with the same `id` but different partition key).
### Some changes are missing in my Trigger
-If you find that some of the changes that happened in your Azure Cosmos DB container are not being picked up by the Azure Function or some changes are missing in the destination when you are copying them, please follow the below steps.
+If you find that some of the changes that happened in your Azure Cosmos DB container aren't being picked up by the Azure Function, or that some changes are missing in the destination when you're copying them, follow the steps below.
-When your Azure Function receives the changes, it often processes them, and could optionally, send the result to another destination. When you are investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination.
+When your Azure Function receives the changes, it often processes them, and could optionally, send the result to another destination. When you're investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination.
If some changes are missing on the destination, this could mean that some error is happening during the Azure Function execution after the changes were received.
-In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry).
+In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry). Alternatively, you can configure Azure Functions [retry policies](../../azure-functions/functions-bindings-error-pages.md#retries).
> [!NOTE]
-> The Azure Functions trigger for Azure Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that the reason that the changes did not arrive at the destination is because that you are failing to process them.
+> The Azure Functions trigger for Azure Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that the reason that the changes did not arrive at the destination might be because you are failing to process them.
-If the destination is another Azure Cosmos DB container and you are performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
+If the destination is another Azure Cosmos DB container and you're performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
-If you find that some changes were not received at all by your trigger, the most common scenario is that there is **another Azure Function running**. It could be another Azure Function deployed in Azure or an Azure Function running locally on a developer's machine that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process.
+If you find that some changes weren't received at all by your trigger, the most common scenario is that there's **another Azure Function running**. It could be another Azure Function deployed in Azure or an Azure Function running locally on a developer's machine that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process.
Additionally, the scenario can be validated, if you know how many Azure Function App instances you have running. If you inspect your leases container and count the number of lease items within, the distinct values of the `Owner` property in them should be equal to the number of instances of your Function App. If there are more owners than the known Azure Function App instances, it means that these extra owners are the ones "stealing" the changes.
One easy way to work around this situation, is to apply a `LeaseCollectionPrefix
### Need to restart and reprocess all the items in my container from the beginning To reprocess all the items in a container from the beginning:
-1. Stop your Azure function if it is currently running.
-1. Delete the documents in the lease collection (or delete and re-create the lease collection so it is empty)
+1. Stop your Azure function if it's currently running.
+1. Delete the documents in the lease collection (or delete and re-create the lease collection so it's empty)
1. Set the [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) CosmosDBTrigger attribute in your function to true. 1. Restart the Azure function. It will now read and process all changes from the beginning.
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
For **all** containers, your partition key should:
* Spread request unit (RU) consumption and data storage evenly across all logical partitions. This ensures even RU consumption and storage distribution across your physical partitions.
+* Have values that are typically no larger than 2,048 bytes, or 101 bytes if large partition keys aren't enabled. For more information, see [large partition keys](large-partition-keys.md) and the container-creation sketch after this list.
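
For example, here's a sketch of creating an API for NoSQL container with large partition keys enabled from the Azure CLI; the names are placeholders, and support for `--partition-key-version` depends on your CLI version.

```azurecli
# Creates a container whose partition key uses the version 2 (large) hash,
# allowing larger partition key values (all names below are placeholders)
az cosmosdb sql container create \
    --resource-group "<resource-group-name>" \
    --account-name "<azure-cosmos-db-account-name>" \
    --database-name "<database-name>" \
    --name "<container-name>" \
    --partition-key-path "/myPartitionKey" \
    --partition-key-version 2
```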
+ If you need [multi-item ACID transactions](database-transactions-optimistic-concurrency.md#multi-item-transactions) in Azure Cosmos DB, you will need to use [stored procedures or triggers](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures). All JavaScript-based stored procedures and triggers are scoped to a single logical partition. > [!NOTE]
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Restore-AzCosmosDBAccount `
``` ### To restore a continuous account that is configured with managed identity using CLI
-To restore Customer Managed Key (CMK) continuous account please refer to the steps provided [here](./how-to-setup-customer-managed-keys.md#to-restore-a-continuous-account-that-is-configured-with-managed-identity-using-cli)
+To restore Customer Managed Key (CMK) continuous account please refer to the steps provided [here](./how-to-setup-customer-managed-keys.md)
### <a id="get-the-restore-details-powershell"></a>Get the restore details from the restored account
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
Cost Management users often want answers to questions that many others ask. This article walks you through getting results for common cost analysis tasks in Cost Management.
-## View forecasted costs
+## View forecast costs
-Forecasted costs are shown in cost analysis areas for area and stacked column views. The forecast is based on your historical resource use. Changes to your resource use affect forecasted costs.
+Forecast costs are shown in cost analysis areas for area and stacked column views. The forecast is based on your historical resource use. Changes to your resource use affect forecast costs.
In the Azure portal, navigate to cost analysis for your scope. For example: **Cost Management + Billing** > **Cost Management** > **Cost analysis**.
-In the default view, the top chart has the Actual/Amortized cost and forecasted cost sections. The solid color of the chart shows your Actual/Amortized cost. The shaded color shows the forecast cost.
+In the default view, the top chart has the Actual/Amortized cost and forecast cost sections. The solid color of the chart shows your Actual/Amortized cost. The shaded color shows the forecast cost.
-[![Forecasted cost](./media/cost-analysis-common-uses/enrollment-forecast.png)](./media/cost-analysis-common-uses/enrollment-forecast.png#lightbox)
+[![Forecast cost](./media/cost-analysis-common-uses/enrollment-forecast.png)](./media/cost-analysis-common-uses/enrollment-forecast.png#lightbox)
-## View forecasted costs grouped by service
+## View forecast costs grouped by service
-The default view doesn't show forecasted costs group by a service, so you have to add a group by selection.
+The default view doesn't show forecast costs grouped by a service, so you have to add a group by selection.
In the Azure portal, navigate to cost analysis for your scope. For example: **Cost Management + Billing** > **Cost Management** > **Cost analysis**. Select **Group by** > **Service name**.
-The view shows your costs grouped for each service. The forecasted cost isn't calculated for each service. It's projected for the **Total** of all your services.
+The view shows your costs grouped for each service. The forecast cost isn't calculated for each service. It's projected for the **Total** of all your services.
-[![Grouped forecasted cost](./media/cost-analysis-common-uses/forecast-group-by-service.png)](./media/cost-analysis-common-uses/forecast-group-by-service.png#lightbox)
+[![Grouped forecast cost](./media/cost-analysis-common-uses/forecast-group-by-service.png)](./media/cost-analysis-common-uses/forecast-group-by-service.png#lightbox)
-## View forecasted costs for a service
+## View forecast costs for a service
-You can view forecasted costs narrowed to a single service. For example, you might want to see forecasted costs for just virtual machines.
+You can view forecast costs narrowed to a single service. For example, you might want to see forecast costs for just virtual machines.
1. In the Azure portal, navigate to cost analysis for your scope. For example: **Cost Management + Billing** > **Cost Management** > **Cost analysis**. 1. Select **Add filter** and then select **Service name**. 1. In the **choose** list, select a service. For example select, **virtual machines**.
-Review the actual cost for selection and the forecasted cost.
+Review the actual cost for selection and the forecast cost.
You can add more customizations to the view. 1. Add a second filter for **Meter** and select a value to filter for an individual type of meter under your selected service name.
-1. Group by **Resource** to see the specific resources that are accruing cost. The forecasted cost isn't calculated for each service. It's projected for the **Total** of all your resources.
+1. Group by **Resource** to see the specific resources that are accruing cost. The forecast cost isn't calculated for each service. It's projected for the **Total** of all your resources.
-[![Forecasted cost for a service](./media/cost-analysis-common-uses/forecast-by-service.png)](./media/cost-analysis-common-uses/forecast-by-service.png#lightbox)
+[![Forecast cost for a service](./media/cost-analysis-common-uses/forecast-by-service.png)](./media/cost-analysis-common-uses/forecast-by-service.png#lightbox)
## View your Azure and AWS costs together
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 12/08/2022 Last updated : 01/09/2023
Azure tags are widely used to group costs to align with different business units
This article explains how to use the tag inheritance setting in Cost Management. When enabled, tag inheritance applies resource group and subscription tags to child resource usage records. You don't have to tag every resource or rely on resources that emit usage to have their own tags.
-Tag inheritance is available for customers with an Enterprise Account (EA) or a Microsoft Customer Agreement (MCA) account.
+Tag inheritance is available for the following billing account types:
+
+- Enterprise Agreement (EA)
+- Microsoft Customer Agreement (MCA)
+- Microsoft Partner Agreement (MPA) with Azure plan subscriptions
## Required permissions
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 11/21/2022 Last updated : 01/09/2023
Although not required, Microsoft *recommends* that you take the following action
* Back up your data. For example, if you're storing data in Azure storage or SQL, download a copy. If you have a virtual machine, save an image of it locally. * Shut down your services. Go to the [All resources](https://portal.azure.com/?flight=1#blade/HubsExtension/Resources/resourceType/Microsoft.Resources%2Fresources) page, and **Stop** any running virtual machines, applications, or other services. * Consider migrating your data. See [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-* Delete all resources and all resource groups.
- * To later delete a subscription, you must first delete all resources associated with the subscription.
+* Delete all resources and all resource groups.
+ * To later manually delete a subscription, you must first delete all resources associated with the subscription.
+ * You may be unable to delete all resources, depending on your configuration. For example, you can't delete immutable blobs. For more information, see [Immutable Blobs](../../storage/blobs/immutable-storage-overview.md#scenarios-with-version-level-scope). A CLI sketch of these shutdown and cleanup steps follows this list.
* If you have any custom roles that reference this subscription in `AssignableScopes`, you should update those custom roles to remove the subscription. If you try to update a custom role after you cancel a subscription, you might get an error. For more information, see [Troubleshoot problems with custom roles](../../role-based-access-control/troubleshooting.md#custom-roles) and [Azure custom roles](../../role-based-access-control/custom-roles.md). > [!NOTE]
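The shutdown and cleanup bullets above can also be scripted. The following Azure CLI sketch uses hypothetical resource group and VM names and is only an illustration of the idea; `az group delete` permanently removes everything in the group, so adapt and verify the names before running anything.

```bash
# Sketch only: stop services and clean up resources before canceling a subscription.
# "my-resource-group" and "my-vm" are hypothetical placeholder names.

# Stop (deallocate) a running virtual machine so it stops accruing compute charges.
az vm deallocate --resource-group my-resource-group --name my-vm

# Review which resource groups still exist in the subscription.
az group list --output table

# Delete a resource group and everything in it. This is irreversible.
az group delete --name my-resource-group --yes --no-wait
```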
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 11/11/2022 Last updated : 01/04/2023
This article explains the common tasks that an Enterprise Agreement (EA) administrator accomplishes in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). A direct enterprise agreement is signed between Microsoft and an enterprise agreement customer. Conversely, an indirect EA is one where a customer signs an agreement with a Microsoft partner. This article is applicable for both direct and indirect EA customers. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Manage your enrollment
An Azure EA account is an organizational unit in the Azure portal. In the Azure
## Enable the Enterprise Dev/Test offer
-As an EA admin, you can allow account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the **Dev/Test** option in the account properties. After you've selected the Dev/Test option, let the account owner know so that they can create EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers. The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10.
+As an EA admin, you can allow account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the **Dev/Test** option in the Edit account window. After you've selected the Dev/Test option, let the account owner know so that they can create EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers. The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10.
### To set up the Enterprise Dev/Test offer
As an EA admin, you can allow account owners in your organization to create subs
1. In the left menu, select **Billing scopes** and then select a billing account scope. 1. In the left menu, select **Accounts**. 1. Select the account where you want to enable Dev/Test access.
-1. On the enrollment account page, select **Edit**.
-1. On the Edit account page, select **Dev/Test** and then select **Save**.
+1. On the enrollment account Overview page, select **Edit Account detail**.
+1. In the Edit account window, select **Dev/Test** and then select **Save**.
+ When a user is added as an account owner, any Azure subscriptions associated with the user that are based on either the pay-as-you-go Dev/Test offer or the monthly credit offers for Visual Studio subscribers get converted to the EA Dev/Test offer. Subscriptions based on other offer types, such as pay-as-you-go, that are associated with the account owner get converted to Microsoft Azure Enterprise offers.
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for direct EA enrollments
-description: This article explains how enterprise administrators of direct Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal.
+ Title: View your Azure usage summary details and download reports for EA enrollments
+description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal.
Previously updated : 11/14/2022 Last updated : 01/04/2023
-# View your usage summary details and download reports for direct EA enrollments
+# View your usage summary details and download reports for EA enrollments
-This article explains how enterprise administrators of direct Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Charges are presented at the summary level across all accounts and subscriptions of the enrollment.
+This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Charges are presented at the summary level across all accounts and subscriptions of the enrollment.
> [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-Check out the [EA admin manage consumption and invoices](https://www.youtube.com/watch?v=bO8V9eLfQHY) video. It's part of the [Direct Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm) series of videos.
+Check out the [EA admin manage consumption and invoices](https://www.youtube.com/watch?v=bO8V9eLfQHY) video. It's part of the [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm) series of videos.
>[!VIDEO https://www.youtube.com/embed/bO8V9eLfQHY]
For more information about invoice documents, see [Direct EA billing invoice doc
## Update a PO number for an upcoming overage invoice
-In the Azure portal, a direct enterprise administrator can update the purchase order (PO) for upcoming invoices. The PO number can get updated anytime before the invoice is created during the current billing period.
+In the Azure portal, an Enterprise Administrator for a direct EA enrollment can update the purchase order (PO) for the upcoming invoices. The PO number can get updated anytime before the invoice is created during the current billing period.
For a new enrollment, the default PO number is the enrollment number.
To import the CSV file without formatting issues:
## Next steps -- To learn about common tasks that a direct enterprise administrator accomplishes in the Azure portal, see [Azure direct EA administration](direct-ea-administration.md).
+- To learn about common tasks that an enterprise administrator accomplishes in the Azure portal, see [EA Billing administration on the Azure portal](direct-ea-administration.md).
cost-management-billing Ea Direct Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-direct-portal-get-started.md
tags: billing
Previously updated : 12/05/2022 Last updated : 12/16/2022
This article helps direct and indirect Azure Enterprise Agreement (Azure EA) cus
- Cost analysis in the Azure portal > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal.
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
We have several videos that walk you through getting started with the Azure portal for Enterprise Agreements. Check out the series at [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm).
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
This article explains the common tasks that an administrator accomplishes in the Azure EA portal (https://ea.azure.com). The Azure EA portal is an online management portal that helps customers manage the cost of their Azure EA services. For introductory information about the Azure EA portal, see the [Get started with the Azure EA portal](ea-portal-get-started.md) article. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Activate your enrollment
To begin:
- Read about how [virtual machine reservations](ea-portal-vm-reservations.md) can help save you money. - If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).-- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions about EA subscription ownership.
+- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions about EA subscription ownership.
cost-management-billing Ea Portal Enrollment Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
Title: Azure Enterprise enrollment invoices
description: This article explains how to manage and act on your Azure Enterprise invoice. Previously updated : 08/08/2022 Last updated : 12/16/2022
This article explains how to manage and act on your Azure Enterprise Agreement (Azure EA) invoice. Your invoice is a representation of your bill. Review it for accuracy. You should also get familiar with other tasks that might be needed to manage your invoice. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## View usage summary and download reports
cost-management-billing Ea Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-get-started.md
tags: billing
Previously updated : 08/08/2022 Last updated : 12/16/2022
This article helps direct and indirect Azure Enterprise Agreement (Azure EA) cus
- Cost analysis in the Azure Enterprise portal and the Azure portal. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Get started with EA onboarding
For explanations regarding the common tasks that a partner EA administrator acco
- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about getting started with the EA portal. - Azure Enterprise portal administrators should read [Azure Enterprise portal administration](ea-portal-administration.md) to learn about common administrative tasks.-- If you need help with troubleshooting Azure Enterprise portal issues, see [Troubleshoot Azure Enterprise portal access](ea-portal-troubleshoot.md).
+- If you need help with troubleshooting Azure Enterprise portal issues, see [Troubleshoot Azure Enterprise portal access](ea-portal-troubleshoot.md).
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-troubleshoot.md
Title: Troubleshoot Azure EA portal access
description: This article describes some common issues that can occur with an Azure Enterprise Agreement (EA) in the Azure EA portal. Previously updated : 08/08/2022 Last updated : 12/16/2022
This article describes some common issues that can occur with an Azure Enterprise Agreement (EA). The Azure EA portal is used to manage enterprise agreement users and costs. You might come across these issues when you're configuring or updating Azure EA portal access. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Issues adding a user to an enrollment
cost-management-billing Enterprise Mgmt Grp Troubleshoot Cost View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-mgmt-grp-troubleshoot-cost-view.md
Previously updated : 08/08/2022 Last updated : 12/16/2022
Within enterprise enrollments, there are several settings that could cause users within the enrollment to not see costs. These settings are managed by the enrollment administrator. Or, if the enrollment isn't bought directly through Microsoft, the settings are managed by the partner. This article helps you understand what the settings are and how they impact the enrollment. These settings are independent of the Azure roles. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Enable access to costs
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 12/07/2022 Last updated : 01/06/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| | | | | EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
-| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Self-service reservation and savings plan transfers are supported.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation and savings plan transfers with no currency change are supported.<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. | | EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). | | MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
cost-management-billing Troubleshoot Not Available Conflict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-not-available-conflict.md
+
+ Title: Troubleshoot Not available due to conflict error
+description: Provides the solutions for a problem where you can't select a management group for a reservation or a savings plan.
+++++ Last updated : 01/06/2023+++
+# Troubleshoot Not available due to conflict error
+
+You might see a `Not available due to conflict` error message when you try to select a management group for a savings plan or reservation in the [Azure portal](https://portal.azure.com/). This article provides solutions for the problem.
+
+## Symptom
+
+When you try to buy a reservation or savings plan in the [Azure portal](https://portal.azure.com/) and you select a scope, you might see a `Not available due to conflicts` error.
++
+## Cause
+
+This issue can occur when a management group is selected as the scope while an active benefit (savings plan, reservation, or centrally managed Azure Hybrid Benefit) is already applied at a parent or child scope.
+
+## Solutions
+
+To resolve this issue with overlapping benefits, you can do one of the following actions:
+
+- Select another scope.
+- Change the scope of the existing benefit (savings plan, reservation, or centrally managed Azure Hybrid Benefit) to prevent the overlap.
+ - For more information about how to change the scope for a reservation, see [Change the reservation scope](../reservations/manage-reserved-vm-instance.md#change-the-reservation-scope).
+ - For more information about how to change the scope for a savings plan, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope).
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 08/08/2022 Last updated : 12/16/2022
For example, if the initial authentication type is set to Mixed, the EA will be
These roles are specific to managing Azure Enterprise Agreements and are in addition to the built-in roles Azure has to control access to resources. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
## Azure Enterprise portal hierarchy
cost-management-billing Reserved Instance Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-purchase-recommendations.md
Previously updated : 12/13/2022 Last updated : 01/05/2023 # Reservation recommendations
The following steps define how recommendations are calculated:
3. The costs are simulated for different quantities, and the quantity that maximizes the savings is recommended. 4. If your resources are shut down regularly, the simulation won't find any savings, and no purchase recommendation is provided. 5. The recommendation calculations include any special discounts that you might have for your on-demand usage rates, such as Microsoft Azure Consumption Commitment (MACC) and Azure Commitment Discount (ACD) based solely on historic usage.
- - The recommendations don't account for existing reservations or savings plans.
+ - The recommendations account for existing reservations and savings plans. So, previously purchased reservations and savings plans are excluded when providing recommendations.
## Recommendations in the Azure portal
Reservation purchase recommendations are available in Azure Advisor. Keep in min
- The recommendations quantity and savings are for a three-year reservation, where available. If a three-year reservation isn't sold for the service, the recommendation is calculated using the one-year reservation price. - The recommendation calculations include any special discounts that you might have on your on-demand usage rates. - If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to five days to disappear.
+- Azure classic compute resources such as classic VMs are explicitly excluded from reservation recommendations. Microsoft recommends that users avoid making long-term commitments to legacy services that are being deprecated.
## Other expected API behavior
cost-management-billing Troubleshoot Reservation Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-recommendation.md
Previously updated : 12/06/2022 Last updated : 01/06/2023 # Troubleshoot Azure reservation recommendations
It's also important to understand how the scope selection affects recommendation
Azure might recommend purchasing a reservation for certain terms, and not for others, based on the cost savings identified. Specifically, three-year terms have larger discounts than one-year terms. It's more likely that Azure will find savings for a three-year term than it will for a one-year term.
-If you want to understand why Azure recommends a specific resource size and quantity, select **&lt;Quantity&gt; See details** for an in-depth, visualization showing potential savings over time.
+Azure classic compute resources such as classic VMs are explicitly excluded from reservation recommendations. Microsoft recommends that users avoid making long-term commitments to legacy services that are being deprecated.
+
+If you want to understand why Azure recommends a specific resource size and quantity, select **\<Quantity\> See details** for an in-depth visualization showing potential savings over time.
:::image type="content" source="./media/troubleshoot-reservation-recommendation/see-details-link.png" alt-text="Example showing the reservation recommendation See details link" lightbox="./media/troubleshoot-reservation-recommendation/see-details-link.png" :::
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Previously updated : 11/16/2022 Last updated : 01/09/2023
Before you enter a commitment to buy a savings plan, review the following sectio
## Who can buy a savings plan
-You can buy a savings plan for an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement (MCA) or Microsoft Partner Agreement.
-
-To determine if you're eligible to buy a plan, [check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account).
-
-Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement, Microsoft Customer Agreement, or Microsoft Partner Agreement (MPA).
+You can buy a savings plan for an Azure subscription that's of type Enterprise Agreement (EA) offer code MS-AZR-0017P or MS-AZR-0148P, Microsoft Customer Agreement (MCA), or Microsoft Partner Agreement (MPA). If you don't know what subscription type you have, see [Check your billing type](../manage/view-all-accounts.md#check-the-type-of-your-account).
## Change agreement type to one supported by savings plan
If your current agreement type isn't supported by a savings plan, you might be a
- [Product transfer support](../manage/subscription-transfer.md#product-transfer-support) - [From MOSA to the Microsoft Customer Agreement](https://www.microsoft.com/licensing/news/from-mosa-to-microsoft-customer-agreement)
-### Enterprise Agreement customers
+## Required permission and how to buy
+
+You can buy a savings plan using the Azure portal or with the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API.
+
+### Purchase in the Azure portal
+
+Required permission and the steps to buy vary, depending on your agreement type.
+
+#### Enterprise Agreement customers
- EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No specific permission for a subscription is needed. - Subscription owners for one of the subscriptions in the EA enrollment can purchase savings plans from **Home** > **Savings plan**. - Enterprise Agreement (EA) customers can limit purchases to EA admins only by disabling the **Add Savings Plan** option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
-### Microsoft Customer Agreement (MCA) customers
+#### Microsoft Customer Agreement (MCA) customers
- Customers with billing profile contributor permissions and above can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. No specific permissions on a subscription needed. - Subscription owners for one of the subscriptions in the billing profile can purchase savings plans from **Home** > **Savings plan**. - To disallow savings plan purchases on a billing profile, billing profile contributors can navigate to the Policies menu under the billing profile and adjust **Azure Savings Plan** option.
-### Microsoft Partner Agreement partners
+#### Microsoft Partner Agreement partners
- Partners can use **Home** > **Savings plan** in the Azure portal to purchase savings plans for their customers.
+### Purchase with the Savings Plan Order Alias - Create API
+
+Buy savings plans by using RBAC permissions or with permissions on your billing scope. When using the [Savings Plan Order Alias - Create](/rest/api/billingbenefits/savings-plan-order-alias/create) REST API, the format of the `billingScopeId` in the request body is used to control the permissions that are checked.
+
+To purchase using RBAC permissions:
+
+- You must be an Owner of the subscription which you plan to use, specified as `billingScopeId`.
+- The `billingScopeId` property in the request body must use the `/subscriptions/10000000-0000-0000-0000-000000000000` format.
+
+To purchase using billing permissions:
+
+Permission needed to purchase varies by the type of account that you have.
+
+- For Enterprise Agreement customers, you must be an EA admin with write permissions.
+- For Microsoft Customer Agreement (MCA) customers, you must be a billing profile contributor or above.
+- For Microsoft Partner Agreement partners, only RBAC permissions are currently supported.
+
+The `billingScopeId` property in the request body must use the `/providers/Microsoft.Billing/billingAccounts/{accountId}/billingSubscriptions/10000000-0000-0000-0000-000000000000` format.
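To make the two `billingScopeId` formats concrete, here's a minimal `az rest` sketch of a Savings Plan Order Alias - Create call that uses RBAC (subscription) permissions. The alias name, subscription ID, SKU, term, and `api-version` are illustrative assumptions, and the body is trimmed to the properties discussed here, so confirm the full request body against the REST reference before using it.

```bash
# Sketch only: purchase a savings plan through the Savings Plan Order Alias - Create API.
# The alias name, subscription ID, SKU, term, and api-version are placeholder assumptions.
az rest --method put \
  --url "https://management.azure.com/providers/Microsoft.BillingBenefits/savingsPlanOrderAliases/my-savings-plan?api-version=2022-11-01" \
  --body '{
    "sku": { "name": "Compute_Savings_Plan" },
    "properties": {
      "displayName": "my-savings-plan",
      "billingScopeId": "/subscriptions/10000000-0000-0000-0000-000000000000",
      "term": "P3Y",
      "billingPlan": "P1M",
      "appliedScopeType": "Shared",
      "commitment": { "grain": "Hourly", "amount": 0.001, "currencyCode": "USD" }
    }
  }'
```

With billing permissions instead, the same call would carry the `/providers/Microsoft.Billing/billingAccounts/{accountId}/billingSubscriptions/...` form of `billingScopeId` described above.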
+ ## Scope savings plans You can scope a savings plan to a shared scope, management group, subscription, or resource group scope. Setting the scope for a savings plan selects where the savings plan savings apply. When you scope the savings plan to a resource group, savings plan discounts apply only to the resource group, not the entire subscription. ### Savings plan scoping options
-You have four options to scope a savings plan, depending on your needs:
+You have the following options to scope a savings plan, depending on your needs:
- **Shared scope** - Applies the savings plan discounts to matching resources in eligible subscriptions that are in the billing scope. If a subscription was moved to a different billing scope, the benefit no longer applies to the subscription. It does continue to apply to other subscriptions in the billing scope. - For Enterprise Agreement customers, the billing scope is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment.
Savings plan discounts apply to the following eligible subscriptions and offer t
- Microsoft Customer Agreement subscriptions. - Microsoft Partner Agreement subscriptions.
-## Purchase savings plans
-
-You can purchase savings plans in the Azure portal.
- ### Buy savings plans with monthly payments You can pay for savings plans with monthly payments. Unlike an up-front purchase where you pay the full amount, the monthly payment option divides the total cost of the savings plan evenly over each month of the term. The total cost of up-front and monthly savings plans is the same and you don't pay any extra fees when you choose to pay monthly.
You can trade in one or more reservations for a savings plan. When you trade in
Depending on how you pay for your Azure subscription, email savings plan notifications are sent to the following users in your organization. Notifications are sent for various events including: - Purchase-- Upcoming savings plan expiration-- Expiry
+- Upcoming savings plan expiration - 30 days before
+- Expiry - 30 days before
- Renewal - Cancellation - Scope change
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
Previously updated : 08/08/2022 Last updated : 12/16/2022 # View and download your Azure usage and charges
If you want to get cost and usage data using the Azure CLI, see [Get usage data
To view and download usage data as a EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](../manage/ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for *Cost Management + Billing*.
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 11/29/2022 Last updated : 01/04/2023
There are a few countries that don't allow the use of debit cards, however in ge
- Hong Kong and Brazil only support credit cards. - India supports debit and credit cards through Visa and Mastercard. - ### Check or wire transfer If the default payment method of your billing profile is check or wire transfer, follow the payment instructions shown on your invoice PDF file.
To pay invoices in the Azure portal, you must have the correct [MCA permissions]
1. Sign into the [Azure portal](https://portal.azure.com). 1. Search on **Cost Management + Billing**. 1. In the left menu, select **Invoices** under **Billing**.
-1. If any of your invoices are due or past due, you'll see a blue **Pay now** link for that invoice. Select **Pay now**.
-1. In the Pay now window, select **Select a payment method** to choose an existing credit card or add a new one.
+1. If any of your eligible invoices are due or past due, you'll see a blue **Pay now** link for that invoice. Select **Pay now**.
+1. In the Pay now window, select or tap **Select a payment method** to choose an existing credit card or add a new one.
1. After you select a payment method, select **Pay now**. The invoice status shows *paid* within 24 hours.
The invoice status shows *paid* within 24 hours.
If you have a Microsoft Online Services Program account (pay-as-you-go account), the **Pay now** option might be unavailable. Instead, you might see a **Settle balance** banner. If so, see [Resolve past due balance](../manage/resolve-past-due-balance.md#resolve-past-due-balance-in-the-azure-portal).
+Based on the default payment method and invoice amount, the **Pay now** option might be unavailable. Check your invoice for payment instructions.
+ ## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
cost-management-billing Review Enterprise Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
Azure customers with an Enterprise Agreement receive an invoice when they exceed
Your organization's credit includes your Azure Prepayment (previously called monetary commitment). Azure Prepayment is the amount your organization paid upfront for usage of Azure services. You can add Azure Prepayment funds to your Enterprise Agreement by contacting your Microsoft account manager or reseller. > [!NOTE]
-> We recommend that direct EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with the Azure portal for direct Enterprise Agreement customers](../manage/ea-direct-portal-get-started.md).
+> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
>
-> As of October 10, 2022 direct EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
+> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
>
-> This change doesn't affect direct Azure Government EA enrollments or indirect EA (an indirect EA is one where a customer signs an agreement with a Microsoft partner) enrollments. Both continue using the EA portal to manage their enrollment.
+> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
This tutorial applies only to Azure customers with an Azure Enterprise Agreement.
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
Previously updated : 12/15/2022 Last updated : 01/04/2023 # Change data capture in Azure Data Factory and Azure Synapse Analytics
The changed data including inserted, updated and deleted rows can be automatical
**Supported connectors** - [SAP CDC](connector-sap-change-data-capture.md) - [Azure SQL Database](connector-azure-sql-database.md)-- [Azure SQL Server](connector-sql-server.md)
+- [SQL Server](connector-sql-server.md)
- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) - [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md)
The newly updated rows or updated files can be automatically detected and extrac
- [ADLS Gen2](load-azure-data-lake-storage-gen2.md) - [ADLS Gen1](load-azure-data-lake-store.md) - [Azure SQL Database](connector-azure-sql-database.md)-- [Azure SQL Server](connector-sql-server.md)
+- [SQL Server](connector-sql-server.md)
- [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) - [Azure Database for MySQL](connector-azure-database-for-mysql.md) - [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md)
You can always build your own delta data extraction pipeline for all ADF support
## Checkpoint
-When you enable native change data capture or auto incremental extraction options in ADF mapping data flow, ADF helps you to manage the checkpoint to make sure each activity run will automatically only read the source data that has changed since the last time the pipeline run. By default, the checkpoint is coupled with your pipeline and activity name. If you change your pipeline name or activity name, the checkpoint will be reset, which leads you to start from beginning or get changes from now in the next run. If you do want to change the pipeline name or activity name but still keep the checkpoint to get changed data from the last run automatically, please use your own [Checkpoint key](control-flow-execute-data-flow-activity.md#checkpoint-key) in data flow activity to achieve that.
+When you enable native change data capture or auto incremental extraction options in ADF mapping data flow, ADF helps you manage the checkpoint to make sure each activity run automatically reads only the source data that has changed since the last pipeline run. By default, the checkpoint is coupled with your pipeline and activity name. If you change your pipeline name or activity name, the checkpoint is reset, which means the next run starts from the beginning or picks up changes from that point onward. If you want to change the pipeline name or activity name but still keep the checkpoint to get changed data from the last run automatically, use your own [Checkpoint key](control-flow-execute-data-flow-activity.md#checkpoint-key) in the data flow activity to achieve that. The [naming rule](naming-rules.md) of your own checkpoint key is the same as for linked services, datasets, pipelines, and data flows.
When you debug the pipeline, this feature works the same. The checkpoint will be reset when you refresh your browser during the debug run. After you are satisfied with the pipeline result from debug run, you can go ahead to publish and trigger the pipeline. At the moment when you first time trigger your published pipeline, it automatically restarts from the beginning or gets changes from now on.
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
Previously updated : 06/29/2022 Last updated : 01/05/2023
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Verify that your query is valid and can return data if you want to execute non-query scripts and your data store is supported. Alternatively, consider using a stored procedure that returns a dummy result to execute your non-query scripts.
+### Error code: FailToResolveParametersInExploratoryController
+
+- **Message**: `The parameters and expression cannot be resolved for schema operations. …The template function 'linkedService' is not defined or not valid.`
+
+- **Cause**: The service has a limitation in supporting a linked service that references another linked service with parameters during test connection or preview data operations. For example, passing a parameter from a Key Vault to a linked service may cause this issue.
+
+- **Recommendation**: Remove the parameters from the referenced linked service to eliminate the error. Otherwise, run the pipeline without testing the connection or previewing data.
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
If you're using an Azure Synapse Analytics as a sink or source, you must choose
## Checkpoint key
-When using the change capture option for data flow sources, ADF will maintain and manage the checkpoint for you automatically. The default checkpoint key is a hash of the data flow name and the pipeline name. If you are using a dynamic pattern for your source tables or folders, you may wish to override this hash and set your own checkpoint key value here.
+When using the change capture option for data flow sources, ADF will maintain and manage the checkpoint for you automatically. The default checkpoint key is a hash of the data flow name and the pipeline name. If you are using a dynamic pattern for your source tables or folders, you may wish to override this hash and set your own checkpoint key value here. The [naming rule](naming-rules.md) of your own checkpoint key is the same as for linked services, datasets, pipelines, and data flows.
## Logging level
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
Previously updated : 08/26/2022 Last updated : 12/29/2022 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
Before you begin, make sure that:
4. You've access to a host computer that has the data that you want to copy over to Data Box. Your host computer must * Run a [Supported operating system](data-box-system-requirements.md). * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds will be impacted.
-5. [Download AzCopy 7.1.0](https://aka.ms/azcopyforazurestack20170417) on your host computer. You'll use AzCopy to copy data to Azure Data Box Blob storage from your host computer.
+5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. You'll use AzCopy to copy data to Azure Data Box Blob storage from your host computer.
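   Once you can reach the Data Box Blob storage endpoint, the copy itself is a single AzCopy command. The following is only an illustrative sketch; the local path, device blob endpoint, container name, and SAS token are placeholders rather than values from this article:

   ```bash
   # Recursively copy a local folder to a container on the Data Box Blob storage endpoint.
   # All angle-bracket values are placeholders; obtain the device endpoint and SAS token as described later in this article.
   azcopy copy "<local-path-to-data>" \
     "https://<storage-account-name>.blob.<data-box-device-endpoint>/<container-name>?<sas-token>" \
     --recursive
   ```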
## Connect via http or https
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
Previously updated : 11/09/2021 Last updated : 12/29/2022 # Azure Data Box Disk limits
Consider these limits as you deploy and operate your Microsoft Azure Data Box Di
- Data Box service is available in the Azure regions listed in [Region availability](data-box-disk-overview.md#region-availability). - A single storage account is supported with Data Box Disk.
+ - Data Box Disk supports a maximum of 512 containers or shares in the cloud. The top-level directories within the user share become containers or Azure file shares in the cloud.
## Data Box Disk performance
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
na Previously updated : 11/11/2022 Last updated : 01/05/2023 # Tutorial: View and configure Azure DDoS Protection alerts
In this tutorial, you'll learn how to:
- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Protection plan](manage-ddos-protection.md). DDoS Network Protection must be enabled on a virtual network or DDoS IP Protection must be enabled on a public IP address. - DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
-## Configure alerts through Azure Monitor
-
-With these templates, you will be able to configure alerts for all public IP addresses that you have enabled diagnostic logging on. Hence in order to use these alert templates, you will first need a Log Analytics Workspace with diagnostic settings enabled. See [View and configure Azure DDoS Protection diagnostic logging](diagnostic-logging.md).
-
-### Azure Monitor alert rule
-
-This [Azure Monitor alert rule](https://aka.ms/DDOSmitigationstatus) will run a simple query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAlert%2520-%2520DDOS%2520Mitigation%2520started%2520azure%2520monitor%2520alert%2FDDoSMitigationStarted.json)
-
-### Azure Monitor alert rule with Logic App
-
-This [DDoS Mitigation Alert Enrichment template](https://aka.ms/ddosalert) deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAutomation%2520-%2520DDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json)
- ## Configure alerts through portal You can select any of the available Azure DDoS Protection metrics to alert you when there's an active mitigation during an attack, using the Azure Monitor alert configuration.
You can select any of the available Azure DDoS Protection metrics to alert you w
- Tags - Review + create
- For each step use the values described below:
+ For each step, use the values described below:
| Setting | Value | |--|--| | Scope | 1) Select **+ Select Scope**. <br/> 2) From the *Filter by subscription* dropdown list, select the **Subscription** that contains the public IP address you want to log. <br/> 3) From the *Filter by resource type* dropdown list, select **Public IP Address**, then select the specific public IP address you want to log metrics for. <br/> 4) Select **Done**. | | Condition | 1) Select the **+ Add Condition** button <br/> 2) In the *Search by signal name* search box, select **Under DDoS attack or not**. <br/> 3) Leave *Chart period* and *Alert Logic* as default. <br/> 4) From the *Operator* drop-down, select **Greater than or equal to**. <br/> 5) From the *Aggregation type* drop-down, select **Maximum**. <br/> 6) In the *Threshold value* box, enter **1**. For the *Under DDoS attack or not metric*, **0** means you're not under attack while **1** means you are under attack. <br/> 7) Select **Done**. |
- | Actions | 1) Select the **+ Create action group** button. <br/> 2) On the **Basics** tab, select your subscription, a resource group and provide the *Action group name* and *Display name*. <br/> 3) On the *Notifications* tab, under *Notification type*, select **Email/SMS message/Push/Voice**. <br/> 4) Under *Name*, enter **MyUnderAttackEmailAlert**. <br/> 5) On the *Email/SMS message/Push/Voice* page enter the **Email** and as many of the available options you require, and then select **OK**. <br/> 6) Select **Review + create** and then select **Create**. |
+ | Actions | 1) Select the **+ Create action group** button. <br/> 2) On the **Basics** tab, select your subscription and a resource group, and provide the *Action group name* and *Display name*. <br/> 3) On the *Notifications* tab, under *Notification type*, select **Email/SMS message/Push/Voice**. <br/> 4) Under *Name*, enter **MyUnderAttackEmailAlert**. <br/> 5) On the *Email/SMS message/Push/Voice* page, enter the **Email** and as many of the available options as you require, and then select **OK**. <br/> 6) Select **Review + create** and then select **Create**. |
| Details | 1) Under *Alert rule name*, enter *MyDdosAlert*. <br/> 2) Select **Review + create** and then select **Create**. | Within a few minutes of attack detection, you should receive an email from Azure Monitor metrics that looks similar to the following picture:
Within a few minutes of attack detection, you should receive an email from Azure
You can also learn more about [configuring webhooks](../azure-monitor/alerts/alerts-webhooks.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [logic apps](../logic-apps/logic-apps-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for creating alerts. +
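The portal steps above can also be scripted. The following Azure CLI sketch is illustrative only: the resource names are placeholders, it assumes an existing action group, and it uses `IfUnderDDoSAttack`, the metric name behind the *Under DDoS attack or not* signal:

```bash
# Alert when the public IP address reports an active DDoS mitigation (metric value 1).
# Resource group, public IP, and action group names are placeholders.
PIP_ID=$(az network public-ip show --resource-group MyResourceGroup --name MyPublicIp --query id -o tsv)

az monitor metrics alert create \
  --name MyDdosAlert \
  --resource-group MyResourceGroup \
  --scopes "$PIP_ID" \
  --condition "max IfUnderDDoSAttack >= 1" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action MyUnderAttackActionGroup
```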
+## Configure alerts through Azure Monitor
+
+With these templates, you can configure alerts for all public IP addresses that have diagnostic logging enabled. To use these alert templates, you'll first need a Log Analytics workspace with diagnostic settings enabled. For more information, see [Log Analytics workspace overview](../azure-monitor/logs/log-analytics-workspace-overview.md).
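As a sketch of that prerequisite, the diagnostic logs of a public IP address can be routed to a Log Analytics workspace with the Azure CLI. The resource IDs below are placeholders, and the DDoS log categories shown are assumptions about the categories exposed for public IP addresses:

```bash
# Send the DDoS-related diagnostic log categories of a public IP address to a Log Analytics workspace.
# Both resource IDs are placeholders.
az monitor diagnostic-settings create \
  --name ddos-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<public-ip>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category":"DDoSProtectionNotifications","enabled":true},{"category":"DDoSMitigationFlowLogs","enabled":true},{"category":"DDoSMitigationReports","enabled":true}]'
```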
+
+### Azure Monitor alert rule
+
+This Azure Monitor alert rule template will run a query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAlert%2520-%2520DDOS%2520Mitigation%2520started%2520azure%2520monitor%2520alert%2FDDoSMitigationStarted.json)
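If you prefer the command line to the **Deploy to Azure** button, the same template can be deployed with the Azure CLI. This is a sketch; the resource group name is a placeholder and you'll be prompted for any parameters the template requires:

```bash
# Deploy the DDoS mitigation started alert template into an existing resource group.
# The template URI is the same file referenced by the Deploy to Azure button above.
az deployment group create \
  --resource-group MyResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/Azure-Network-Security/master/Azure%20DDoS%20Protection/Alert%20-%20DDOS%20Mitigation%20started%20azure%20monitor%20alert/DDoSMitigationStarted.json"
```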
++
+### Azure Monitor alert rule with Logic App
+
+This DDoS Mitigation Alert Enrichment template deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAutomation%2520-%2520DDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json)
+ ## View alerts in Microsoft Defender for Cloud Microsoft Defender for Cloud provides a list of [security alerts](../security-center/security-center-managing-and-responding-alerts.md), with information to help investigate and remediate problems. With this feature, you get a unified view of alerts, including DDoS attack-related alerts and the actions taken to mitigate the attack in near real-time.
-There are two specific alerts that you will see for any DDoS attack detection and mitigation:
+There are two specific alerts that you'll see for any DDoS attack detection and mitigation:
- **DDoS Attack detected for Public IP**: This alert is generated when the DDoS protection service detects that one of your public IP addresses is the target of a DDoS attack. - **DDoS Attack mitigated for Public IP**: This alert is generated when an attack on the public IP address has been mitigated.
The alerts include general information about the public IP address that's unde
## Validate and test
-To simulate a DDoS attack to validate your alerts, see [Validate Azure DDoS Protection detection](test-through-simulations.md).
+To simulate a DDoS attack to validate your alerts, see [Test with simulation partners](test-through-simulations.md).
## Next steps
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Azure DDoS Protection is designed for [services that are deployed in a virtual n
## Pricing For DDoS Network Protection, under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there's no need to create more than one DDoS protection plan.
-For DDoS IP Protection, there's no need to create a DDoS protection plan. Customers can enable DDoS on any public IP resource.
+For DDoS IP Protection, there's no need to create a DDoS protection plan. Customers can enable DDoS IP Protection on any public IP resource.
To learn about Azure DDoS Protection pricing, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 11/14/2022 Last updated : 01/09/2023
The following table shows features and corresponding SKUs.
| Mitigation policies tuned to customers application | Yes| Yes | | Integration with Firewall Manager | Yes | Yes | | Azure Sentinel data connector and workbook | Yes | Yes |
+| Protection of resources across subscriptions in a tenant | Yes | Yes |
+| Public IP Standard SKU protection | Yes | Yes |
+| Public IP Basic SKU protection | No | Yes |
| DDoS rapid response support | Not available | Yes | | Cost protection | Not available | Yes | | WAF discount | Not available | Yes |
-| Protection of resources across subscriptions in a tenant | Yes | Yes |
| Price | Per protected IP | Per 100 protected IP addresses | >[!Note]
ddos-protection Monitor Ddos Protection Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/monitor-ddos-protection-reference.md
The following table lists the field names and descriptions:
| **TrafficOverview** | Breakdown of attack traffic. Keys include `Total packets`, `Total packets dropped`, `Total TCP packets`, `Total TCP packets dropped`, `Total UDP packets`, `Total UDP packets dropped`, `Total Other packets`, `Total Other packets dropped`. | | **Protocols** | Breakdown of protocols involved. Keys include `TCP`, `UDP`, `Other`. | | **DropReasons** | Breakdown of reasons for dropped packets. Keys include `Protocol violation invalid TCP syn`, `Protocol violation invalid TCP`, `Protocol violation invalid UDP`, `UDP reflection`, `TCP rate limit exceeded`, `UDP rate limit exceeded`, `Destination limit exceeded`, `Other packet flood`, `Rate limit exceeded`, `Packet was forwarded to service`. |
-| **TopSourceCountries** | Breakdown of top 10 source countries of incoming traffic. |
-| **TopSourceCountriesForDroppedPackets** | Breakdown of top 10 source countries of attack traffic that is/was mitigated. |
+| **TopSourceCountries** | Breakdown of top 10 source countries/regions of incoming traffic. |
+| **TopSourceCountriesForDroppedPackets** | Breakdown of top 10 source countries/regions of attack traffic that is/was mitigated. |
| **TopSourceASNs** | Breakdown of top 10 source autonomous system numbers (ASN) of the incoming traffic. | | **SourceContinents** | Breakdown of the source continents of incoming traffic. | ***
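To spot-check these fields outside the portal, a hedged Azure CLI sketch is shown below. It assumes the DDoS diagnostic logs are routed to a Log Analytics workspace and land in the `AzureDiagnostics` table; the workspace GUID is a placeholder and the `log-analytics` CLI extension is required:

```bash
# Return a small sample of DDoS mitigation report records from the workspace receiving the diagnostic logs.
# The workspace customer ID (GUID) is a placeholder.
az monitor log-analytics query \
  --workspace "<workspace-customer-id-guid>" \
  --analytics-query 'AzureDiagnostics | where Category == "DDoSMitigationReports" | take 10'
```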
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 11/28/2022 Last updated : 01/05/2023
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
Title: Using alerts suppression rules to suppress false positives or other unwanted security alerts in Microsoft Defender for Cloud
-description: This article explains how to use Microsoft Defender for Cloud's suppression rules to hide unwanted security alerts
Previously updated : 11/09/2021
+ Title: Suppressing false positives or other unwanted security alerts - Microsoft Defender for Cloud
+description: This article explains how to use Microsoft Defender for Cloud's suppression rules to hide unwanted security alerts, such as false positives
Last updated : 01/09/2023
This page explains how you can use alerts suppression rules to suppress false po
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Free<br>(Most security alerts are only available with [enhanced security features](enable-enhanced-security.md))|
+|Pricing:|Free<br>(Security alerts are generated by [Defender plans](enable-enhanced-security.md))|
|Required roles and permissions:|**Security admin** and **Owner** can create/delete rules.<br>**Security reader** and **Reader** can view rules.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| -- ## What are suppression rules?
-The various Microsoft Defender plans detect threats in any area of your environment and generate security alerts.
+The Microsoft Defender plans detect threats in your environment and generate security alerts. When a single alert isn't interesting or relevant, you can manually dismiss it. Suppression rules let you automatically dismiss similar alerts in the future.
-When a single alert isn't interesting or relevant, you can manually dismiss it. Alternatively, use the suppression rules feature to automatically dismiss similar alerts in the future. Typically, you'd use a suppression rule to:
+Just as you'd review email that's been marked as spam, review your suppressed alerts periodically to make sure you're not missing any real threats.
-- Suppress alerts that you've identified as false positives
+Some examples of how to use suppression rules are:
+- Suppress alerts that you've identified as false positives
- Suppress alerts that are being triggered too often to be useful
-Your suppression rules define the criteria for which alerts should be automatically dismissed.
-
-> [!CAUTION]
-> Suppressing security alerts reduces the effectiveness of Defender for Cloud's threat protection. You should carefully check the potential impact of any suppression rule, and monitor it over time.
- :::image type="content" source="./media/alerts-suppression-rules/create-suppression-rule.gif" alt-text="Create alert suppression rule."::: ## Create a suppression rule
-There are a few ways you can create rules to suppress unwanted security alerts:
--- To suppress alerts at the management group level, use Azure Policy-- To suppress alerts at the subscription level, you can use the Azure portal or the REST API as explained below-
-> [!NOTE]
-> Suppression rules don't work retroactively - they'll only suppress alerts triggered _after_ the rule is created. Also, if a specific alert type has never been generated on a specific subscription, future alerts of that type won't be suppressed. For a rule to suppress an alert on a specific subscription, that alert type has to have been triggered at least once before the rule is created.
-
-To create a rule directly in the Azure portal:
-
-1. From Defender for Cloud's security alerts page:
+You can apply suppression rules to management groups or to subscriptions.
- - Select the specific alert you don't want to see anymore, and from the details pane, select **Take action**.
+- To suppress alerts for a management group, use [Azure Policy](/azure/governance/policy/overview).
+- To suppress alerts for subscriptions, use the Azure portal or the [REST API](#create-and-manage-suppression-rules-with-the-api).
- - Or, select the **suppression rules** link at the top of the page, and from the suppression rules page select **Create new suppression rule**:
+Alert types that were never triggered on a subscription or management group before the rule was created won't be suppressed.
- ![Create new suppression rule** button.](media/alerts-suppression-rules/create-new-suppression-rule.png)
+To create a rule for a specific alert in the Azure portal:
-1. In the new suppression rule pane, enter the details of your new rule.
- - Your rule can dismiss the alert on **all resources** so you don't get any alerts like this one in the future.
- - Your rule can dismiss the alert **on specific criteria** - when it relates to a specific IP address, process name, user account, Azure resource, or location.
+1. From Defender for Cloud's security alerts page, select the alert you want to suppress.
+1. From the details pane, select **Take action**.
+1. In the **Suppress similar alerts** section of the Take action tab, select **Create suppression rule**.
+1. In the **New suppression rule** pane, enter the details of your new rule.
- > [!TIP]
- > If you opened the new rule page from a specific alert, the alert and subscription will be automatically configured in your new rule. If you used the **Create new suppression rule** link, the selected subscriptions will match the current filter in the portal.
-
- [![Suppression rule creation pane.](media/alerts-suppression-rules/new-suppression-rule-pane.png)](media/alerts-suppression-rules/new-suppression-rule-pane.png#lightbox)
-1. Enter details of the rule:
+ - **Entities** - The resources that the rule applies to. You can specify a single resource, multiple resources, or resources that contain a partial resource ID. If you don't specify any resources, the rule applies to all resources in the subscription.
- **Name** - A name for the rule. Rule names must begin with a letter or a number, be between 2 and 50 characters, and contain no symbols other than dashes (-) or underscores (_). - **State** - Enabled or disabled.
- - **Reason** - Select one of the built-in reasons or 'other' if they don't meet your needs.
+ - **Reason** - Select one of the built-in reasons or 'other' to specify your own reason in the comment.
- **Expiration date** - An end date and time for the rule. Rules can run for up to six months.
-1. Optionally, test the rule using the **Simulate** button to see how many alerts would have been dismissed if this rule had been active.
-1. Save the rule.
+1. Optionally, select **Simulate** to see the number of previously received alerts that would have been dismissed if the rule had been active.
+1. Save the rule.
+
+You can also select the **Suppression rules** button in the Security Alerts page and select **Create suppression rule** to enter the details of your new rule.
+ ## Edit a suppression rule
-To edit a rule you've created, use the suppression rules page.
+To edit a rule you've created from the suppression rules page:
-1. From Defender for Cloud's security alerts page, select the **suppression rules** link at the top of the page.
-1. The suppression rules page opens with all the rules for the selected subscriptions.
+1. From Defender for Cloud's security alerts page, select **Suppression rules** at the top of the page.
- [![Suppression rules list.](media/alerts-suppression-rules/suppression-rules-page.png)](media/alerts-suppression-rules/suppression-rules-page.png#lightbox)
+ :::image type="content" source="media/alerts-suppression-rules/suppression-rules-button.png" alt-text="Screenshot of the suppression rule button in the Security Alerts page.":::
-1. To edit a single rule, open the ellipsis menu (...) for the rule and select **Edit**.
-1. Make the necessary changes and select **Apply**.
+1. The suppression rules page opens with all the rules for the selected subscriptions.
-## Delete a suppression rule
+ :::image type="content" source="media/alerts-suppression-rules/suppression-rules-page.png" alt-text="Screenshot of the Suppression rules page where you can review the suppression rules and create new ones." lightbox="media/alerts-suppression-rules/suppression-rules-page.png":::
-To delete one or more rules you've created, use the suppression rules page.
+1. To edit a single rule, open the three dots (...) at the end of the rule and select **Edit**.
+1. Change the details of the rule and select **Apply**.
-1. From Defender for Cloud's security alerts page, select the **suppression rules** link at the top of the page.
-1. The suppression rules page opens with all the rules for the selected subscriptions.
-1. To delete a single rule, open the ellipsis menu (...) for the rule and select **Delete**.
-1. To delete multiple rules, select the check boxes for the rules to be deleted and select **Delete**.
- ![Deleting one or more suppression rules.](media/alerts-suppression-rules/delete-multiple-alerts.png)
+To delete a rule, use the same three dots menu and select **Remove**.
## Create and manage suppression rules with the API
-You can create, view, or delete alert suppression rules via Defender for Cloud's REST API.
+You can create, view, or delete alert suppression rules using the Defender for Cloud REST API.
The relevant HTTP methods for suppression rules in the REST API are: - **PUT**: To create or update a suppression rule in a specified subscription.- - **GET**: - To list all rules configured for a specified subscription. This method returns an array of the applicable rules.- - To get the details of a specific rule on a specified subscription. This method returns one suppression rule.- - To simulate the impact of a suppression rule still in the design phase. This call identifies which of your existing alerts would have been dismissed if the rule had been active. - **DELETE**: Deletes an existing rule (but doesn't change the status of alerts already dismissed by it).
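As an illustration of the PUT call, the following `az rest` sketch creates a rule that dismisses one alert type. The subscription ID, rule name, alert type, expiration date, and API version are assumptions for the example only; check the API documentation referenced below for the authoritative request schema:

```bash
# Create or update a suppression rule that dismisses a specific alert type until the expiration date.
# Subscription ID, rule name, alert type, and api-version are placeholders/assumptions.
az rest --method put \
  --headers "Content-Type=application/json" \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/alertsSuppressionRules/dismiss-sample-alert?api-version=2019-01-01-preview" \
  --body '{
    "properties": {
      "alertType": "<alert-type-to-suppress>",
      "state": "Enabled",
      "reason": "FalsePositive",
      "expirationDateUtc": "2023-07-01T00:00:00Z",
      "comment": "Known false positive in the test environment"
    }
  }'
```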
-For full details and usage examples, see the [API documentation](/rest/api/defenderforcloud/).
-
+For details and usage examples, see the [API documentation](/rest/api/defenderforcloud/).
## Next steps This article described the suppression rules in Microsoft Defender for Cloud that automatically dismiss unwanted alerts.
-For more information on security alerts, see the following pages:
+Learn more about security alerts:
-- [Security alerts and the intent kill chain](alerts-reference.md) - A reference guide to the security alerts you might get from Defender for Cloud.
+- [Security alerts generated by Defender for Cloud](alerts-reference.md)
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud?
description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. ++ Last updated 10/04/2022
defender-for-cloud Defender For Cloud Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md
Title: Defender for Cloud Planning and Operations Guide
description: This document helps you to plan before adopting Defender for Cloud and considerations regarding daily operations. Previously updated : 12/12/2022 Last updated : 01/08/2023 + # Planning and operations guide This guide is for information technology (IT) professionals, IT architects, information security analysts, and cloud administrators planning to use Defender for Cloud. - ## Planning guide+ This guide provides the background for how Defender for Cloud fits into your organization's security requirements and cloud management model. It's important to understand how different individuals or teams in your organization use the service to meet secure development and operations, monitoring, governance, and incident response needs. The key areas to consider when planning to use Defender for Cloud are: - Security Roles and Access Controls
This guide provides the background for how Defender for Cloud fits into your org
In the next section, you'll learn how to plan for each one of those areas and apply those recommendations based on your requirements. - > [!NOTE] > Read [Defender for Cloud frequently asked questions (FAQ)](faq-general.yml) for a list of common questions that can also be useful during the designing and planning phase. ## Security roles and access controls+ Depending on the size and structure of your organization, multiple individuals and teams may use Defender for Cloud to perform different security-related tasks. In the following diagram, you have an example of fictitious personas and their respective roles and security responsibilities: :::image type="content" source="./media/defender-for-cloud-planning-and-operations-guide/defender-for-cloud-planning-and-operations-guide-fig01-new.png" alt-text="Roles.":::
Defender for Cloud enables these individuals to meet these various responsibilit
Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md), which provides [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. When a user opens Defender for Cloud, they only see information related to resources they have access to. Which means the user is assigned the role of Owner, Contributor, or Reader to the subscription or resource group that a resource belongs to. In addition to these roles, there are two roles specific to Defender for Cloud: - **Security reader**: a user that belongs to this role is able to view only Defender for Cloud configurations, which include recommendations, alerts, policy, and health, but it won't be able to make changes.+ - **Security admin**: same as security reader but it can also update the security policy, dismiss recommendations and alerts. The personas explained in the previous diagram need these Azure RBAC roles: **Jeff (Workload Owner)** -- Resource Group Owner/Contributor
+- Resource Group Owner/Contributor.
**Ellen (CISO/CIO)** -- Subscription Owner/Contributor or Security Admin
+- Subscription Owner/Contributor or Security Admin.
**David (IT Security)** -- Subscription Owner/Contributor or Security Admin
+- Subscription Owner/Contributor or Security Admin.
**Judy (Security Operations)** -- Subscription Reader or Security Reader to view Alerts-- Subscription Owner/Contributor or Security Admin required to dismiss Alerts
+- Subscription Reader or Security Reader to view alerts.
+
+- Subscription Owner/Contributor or Security Admin required to dismiss alerts.
**Sam (Security Analyst)** -- Subscription Reader to view Alerts-- Subscription Owner/Contributor required to dismiss Alerts
+- Subscription Reader to view alerts.
+
+- Subscription Owner/Contributor required to dismiss alerts.
+ - Access to the workspace may be required Some other important information to consider: - Only subscription Owners/Contributors and Security Admins can edit a security policy.+ - Only subscription and resource group Owners and Contributors can apply security recommendations for a resource. When planning access control using Azure RBAC for Defender for Cloud, make sure you understand who in your organization needs access to Defender for Cloud the tasks they'll perform. Then you can configure Azure RBAC properly. > [!NOTE] > We recommend that you assign the least permissive role needed for users to complete their tasks. For example, users who only need to view information about the security state of resources but not take action, such as applying recommendations or editing policies, should be assigned the Reader role.
->
->
## Security policies and recommendations A security policy defines the desired configuration of your workloads and helps ensure compliance with company or regulatory security requirements. In Defender for Cloud, you can define policies for your Azure subscriptions, which can be tailored to the type of workload or the sensitivity of data.
-Defender for Cloud policies contain the following components:
+Defender for Cloud policies contain the following components:
+ - [Data collection](monitoring-components.md): agent provisioning and data collection settings.+ - [Security policy](tutorial-security-policy.md): an [Azure Policy](../governance/policy/overview.md) that determines which controls are monitored and recommended by Defender for Cloud. You can also use Azure Policy to create new definitions, define more policies, and assign policies across management groups.+ - [Email notifications](configure-email-notifications.md): security contacts and notification settings.+ - [Pricing tier](enhanced-security-features-overview.md): with or without Microsoft Defender for Cloud's enhanced security features, which determine which Defender for Cloud features are available for resources in scope (can be specified for subscriptions and workspaces using the API). > [!NOTE] > Specifying a security contact ensures that Azure can reach the right person in your organization if a security incident occurs. Read [Provide security contact details in Defender for Cloud](configure-email-notifications.md) for more information on how to enable this recommendation. ### Security policies definitions and recommendations+ Defender for Cloud automatically creates a default security policy for each of your Azure subscriptions. You can edit the policy in Defender for Cloud or use Azure Policy to create new definitions, define more policies, and assign policies across management groups. Management groups can represent the entire organization or a business unit within the organization. You can monitor policy compliance across these management groups. Before configuring security policies, review each of the [security recommendations](review-security-recommendations.md): - See if these policies are appropriate for your various subscriptions and resource groups.+ - Understand what actions address the security recommendations.+ - Determine who in your organization is responsible for monitoring and remediating new recommendations. ## Data collection and storage+ Defender for Cloud uses the Log Analytics agent and the Azure Monitor Agent to collect security data from your virtual machines. [Data collected](monitoring-components.md) from this agent is stored in your Log Analytics workspaces. ### Agent
Data collected from the Log Analytics agent can be stored in an existing Log Ana
In the Azure portal, you can browse to see a list of your Log Analytics workspaces, including any created by Defender for Cloud. A related resource group is created for new workspaces. Resources are created according to this naming convention: - Workspace: *DefaultWorkspace-[subscription-ID]-[geo]*+ - Resource Group: *DefaultResourceGroup-[geo]* For workspaces created by Defender for Cloud, data is retained for 30 days. For existing workspaces, retention is based on the workspace pricing tier. If you want, you can also use an existing workspace.
If your agent reports to a workspace other than the **default** workspace, any M
> [!NOTE] > Microsoft makes strong commitments to protect the privacy and security of this data. Microsoft adheres to strict compliance and security guidelines, from coding to operating a service. For more information about data handling and privacy, read [Defender for Cloud Data Security](data-security.md).
->
## Onboard non-Azure resources
You can use [adaptive application controls](adaptive-application-controls.md) to
## Incident response+ Defender for Cloud detects and alerts you to threats as they occur. Organizations should monitor for new security alerts and take action as needed to investigate further or remediate the attack. For more information on how Defender for Cloud threat protection works, read [How Defender for Cloud detects and responds to threats](alerts-overview.md#detect-threats). Although we can't create your Incident Response plan, we'll use Microsoft Azure Security Response in the Cloud lifecycle as the foundation for incident response stages. The stages of incident response in the cloud lifecycle are:
Although we can't create your Incident Response plan, we'll use Microsoft Azure
> [!NOTE] > You can use the National Institute of Standards and Technology (NIST) [Computer Security Incident Handling Guide](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf) as a reference to assist you building your own.
->
-You can use Defender for Cloud Alerts during the following stages:
+You can use Defender for Cloud alerts during the following stages:
- **Detect**: identify a suspicious activity in one or more resources.+ - **Assess**: perform the initial assessment to obtain more information about the suspicious activity.+ - **Diagnose**: use the remediation steps to conduct the technical procedure to address the issue. Each Security Alert provides information that can be used to better understand the nature of the attack and suggest possible mitigations. Some alerts also provide links to either more information or to other sources of information within Azure. You can use the information provided for further research and to begin mitigation, and you can also search security-related data that is stored in your workspace.
The following example shows a suspicious RDP activity taking place:
:::image type="content" source="./media/defender-for-cloud-planning-and-operations-guide/defender-for-cloud-planning-and-operations-guide-fig5-ga.png" alt-text="Suspicious activity.":::
-This page shows the details regarding the time that the attack took place, the source hostname, the target VM and also gives recommendation steps. In some circumstances, the source information of the attack may be empty. Read [Missing Source Information in Defender for Cloud Alerts](/archive/blogs/azuresecurity/missing-source-information-in-azure-security-center-alerts) for more information about this type of behavior.
+This page shows the details regarding the time that the attack took place, the source hostname, the target VM and also gives recommendation steps. In some circumstances, the source information of the attack may be empty. Read [Missing Source Information in Defender for Cloud alerts](/archive/blogs/azuresecurity/missing-source-information-in-azure-security-center-alerts) for more information about this type of behavior.
Once you identify the compromised system, you can run a [workflow automation](workflow-automation.md) that was previously created. Workflow automations are a collection of procedures that can be executed from Defender for Cloud once triggered by an alert.
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Title: Microsoft Defender for Kubernetes - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Kubernetes. Last updated 07/11/2022++
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Storage. Last updated 07/12/2022++ + # Overview of Microsoft Defender for Storage **Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
If you're using a custom workspace and enable the plan on the subscription level
Enabling the Servers plan on both the subscription and its connected workspaces, won't incur a double charge. The system will identify each unique VM.
-If you enable the Servers plan on cross-subscription workspaces, connected VMs from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled.
+If you enable the Servers plan on cross-subscription workspaces, connected VMs with the Log Analytics agent installed from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled. Connected VMs with the Azure Monitor agent installed are billed only if the Servers plan is enabled at the subscription level.
### Will I be charged for machines without the Log Analytics agent installed?
defender-for-cloud Episode Twenty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-four.md
+
+ Title: Enhancements in Defender for SQL vulnerability assessment | Defender for Cloud in the field
+
+description: Learn about Enhancements in Defender for SQL Vulnerability Assessment
+ Last updated : 01/05/2023++
+# Enhancements in Defender for SQL vulnerability assessment | Defender for Cloud in the field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Catalin Esanu joins Yuri Diogenes to talk about the enhancements in Defender for SQL Vulnerability Assessment (VA) capability that were announced. Catalin explains how the new SQL VA Express changed to allow a frictionless onboarding experience and how it became easier to manage VA baselines. Catalin demonstrates how to enable this experience and how to customize the baseline with companion scripts.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=cbd8ace6-4602-4900-bb73-cf8986605639" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:23](/shows/mdc-in-the-field/defender-sql-enhancements#time=01m23s) - Architecture change in SQL VA
+- [05:30](/shows/mdc-in-the-field/defender-sql-enhancements#time=05m30s) - Enabling SQL VA Express
+- [06:25](/shows/mdc-in-the-field/defender-sql-enhancements#time=06m25s) - Performance considerations
+- [08:49](/shows/mdc-in-the-field/defender-sql-enhancements#time=08m49s) - Other additions to SQL VA Express
+- [12:56](/shows/mdc-in-the-field/defender-sql-enhancements#time=12m56s) - Demonstration
++
+## Recommended resources
+ - [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-express-configuration-for-vulnerability-assessment-in/ba-p/3695390) about Defender for SQL Vulnerability Assessment (VA).
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twenty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-three.md
Last updated 12/21/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Enhancements in Defender for SQL Vulnerability Assessment](episode-twenty-four.md)
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Title: Understanding just-in-time virtual machine access in Microsoft Defender for Cloud description: This document explains how just-in-time VM access in Microsoft Defender for Cloud helps you control access to your Azure virtual machines ++ Last updated 05/15/2022
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Kubernetes data plane hardening description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes data plane hardening security recommendations ++ Last updated 03/08/2022
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
Title: Organize subscriptions into management groups and assign roles to users for Microsoft Defender for Cloud description: Learn how to organize your Azure subscriptions into management groups in Microsoft Defender for Cloud and assign roles to users in your organization Previously updated : 11/09/2021 Last updated : 01/09/2023 # Organize subscriptions into management groups and assign roles to users
-This page explains how to manage your organization's security posture at scale by applying security policies to all Azure subscriptions linked to your Azure Active Directory tenant.
+Manage your organization's security posture at scale by applying security policies to all Azure subscriptions linked to your Azure Active Directory tenant.
For visibility into the security posture of all subscriptions linked to an Azure AD tenant, you'll need an Azure role with sufficient read permissions assigned on the root management group.
For visibility into the security posture of all subscriptions linked to an Azure
### Overview of management groups
-Use management groups to efficiently manage access, policies, and reporting on **groups of subscriptions**, as well as effectively manage the entire Azure estate by performing actions on the root management group. You can organize subscriptions into management groups and apply your governance policies to the management groups. All subscriptions within a management group automatically inherit the policies applied to the management group.
+Use management groups to efficiently manage access, policies, and reporting on groups of subscriptions, as well as effectively manage the entire Azure estate by performing actions on the root management group. You can organize subscriptions into management groups and apply your governance policies to the management groups. All subscriptions within a management group automatically inherit the policies applied to the management group.
-Each Azure AD tenant is given a single top-level management group called the **root management group**. This root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This group allows global policies and Azure role assignments to be applied at the directory level.
+Each Azure AD tenant is given a single top-level management group called the root management group. This root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This group allows global policies and Azure role assignments to be applied at the directory level.
The root management group is created automatically when you do any of the following actions: -- Open **Management Groups** in the [Azure portal](https://portal.azure.com).
+- In the [Azure portal](https://portal.azure.com), select **Management Groups**.
- Create a management group with an API call. - Create a management group with PowerShell. For PowerShell instructions, see [Create management groups for resource and organization management](../governance/management-groups/create-management-group-portal.md).
For a detailed overview of management groups, see the [Organize your resources w
### View and create management groups in the Azure portal
-1. From the [Azure portal](https://portal.azure.com), use the search box in the top bar to find and open **Management Groups**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="./media/management-groups-roles/open-management-groups-service.png" alt-text="Accessing your management groups.":::
+1. Search for and select **Management Groups**.
- The list of your management groups appears.
-
-1. To create a management group, select **Add management group**, enter the relevant details, and select **Save**.
+1. To create a management group, select **Create**, enter the relevant details, and select **Submit**.
:::image type="content" source="media/management-groups-roles/add-management-group.png" alt-text="Adding a management group to Azure."::: - The **Management Group ID** is the directory unique identifier that is used to submit commands on this management group. This identifier isn't editable after creation as it is used throughout the Azure system to identify this group.
+
- The display name field is the name that is displayed within the Azure portal. A separate display name is an optional field when creating the management group and can be changed at any time. - ### Add subscriptions to a management group+ You can add subscriptions to the management group that you created.
-1. From the Azure portal, open **Management Groups** and select the management group for your subscription.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="./media/management-groups-roles/management-group-subscriptions.png" alt-text="Select a management group for your subscription.":::
+1. Search for and select **Management Groups**.
+
+1. Select the management group for your subscription.
1. When the group's page opens, select **Subscriptions**.
You can add subscriptions to the management group that you created.
> [!IMPORTANT] > Management groups can contain both subscriptions and child management groups. When you assign a user an Azure role to the parent management group, the access is inherited by the child management group's subscriptions. Policies set at the parent management group are also inherited by the children. -- ## Assign Azure roles to other users ### Assign Azure roles to users through the Azure portal:
-1. From the [Azure portal](https://portal.azure.com), use the search box in the top bar to find and open **Management Groups**.
-
- :::image type="content" source="./media/management-groups-roles/open-management-groups-service.png" alt-text="Accessing your management groups.":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
- The list of your management groups appears.
+1. Search for and select **Management Groups**.
1. Select the relevant management group.
Once the Azure roles have been assigned to the users, the tenant administrator s
4. To save your setting, select **Save**. -- ## Next steps On this page, you learned how to organize subscriptions into management groups and assign roles to users. For related information, see:
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Microsoft Defender for Cloud's security recommendations for MFA description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 01/08/2023 + # Manage multi-factor authentication (MFA) enforcement on your subscriptions
-If you're only using passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
+If you're only using passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
The recommendations in the Enable MFA control ensure you're meeting the recommen
- MFA should be enabled on accounts with owner permissions on your subscription - MFA should be enabled on accounts with write permissions on your subscription
-There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, conditional access (CA) policy. Each of these options is explained below.
+There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy.
### Free option - security defaults
-If you're using the free edition of Azure AD, use [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md) to enable multi-factor authentication on your tenant.
+
+If you're using the free edition of Azure AD, you should use the [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md) to enable multi-factor authentication on your tenant.
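If you manage the tenant from the command line, security defaults can also be toggled through Microsoft Graph. The sketch below uses `az rest` and assumes the signed-in account has sufficient Azure AD privileges to update the policy:

```bash
# Enable Azure AD security defaults for the tenant, which enforces MFA for users and administrators.
# Assumes the caller has the required directory role (for example, Global Administrator).
az rest --method patch \
  --headers "Content-Type=application/json" \
  --url "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy" \
  --body '{"isEnabled": true}'
```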
### MFA for Microsoft 365 Business, E3, or E5 customers
-Customers with Microsoft 365 can use **Per-user assignment**. In this scenario, Azure AD MFA is either enabled or disabled for all users, for all sign-in events. There is no ability to enable multi-factor authentication for a subset of users, or under certain scenarios, and management is through the Office 365 portal.
+
+Customers with Microsoft 365 can use **Per-user assignment**. In this scenario, Azure AD MFA is either enabled or disabled for all users, for all sign-in events. There's no ability to enable multi-factor authentication for a subset of users, or under certain scenarios, and management is through the Office 365 portal.
### MFA for Azure AD Premium customers+ For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you'll need [Azure Active Directory (AD) tenant permissions](../active-directory/roles/permissions-reference.md). Your CA policy must:+ - enforce MFA+ - include the Microsoft Azure Management app ID (797f4846-ba00-4fd7-ba43-dac1f8f63013) or all apps+ - not exclude the Microsoft Azure Management app ID **Azure AD Premium P1** customers can use Azure AD CA to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements. Other licenses that include this functionality: Enterprise Mobility + Security E3, Microsoft 365 F1, and Microsoft 365 E3.
Learn more in the [Azure Conditional Access documentation](../active-directory/c
You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or using Azure Resource Graph. ### View the accounts without MFA enabled in the Azure portal+ From the recommendation details page, select a subscription from the **Unhealthy resources** list or select **Take action** and the list will be displayed. ### View the accounts without MFA enabled using Azure Resource Graph+ To see which accounts don't have MFA enabled, use the following Azure Resource Graph query. The query returns all unhealthy resources - accounts - of the recommendation "MFA should be enabled on accounts with owner permissions on your subscription". 1. Open **Azure Resource Graph Explorer**.
To investigate why the recommendations are still being generated, verify the fol
- The Azure Management app ID isn't excluded in the **Apps** section of your MFA CA policy ### We're using a third-party MFA tool to enforce MFA. Why do we still get the Defender for Cloud recommendations?
-Defender for Cloud's MFA recommendations don't support third-party MFA tools (for example, DUO).
+Defender for Cloud's MFA recommendations don't support third-party MFA tools (for example, DUO).
If the recommendations are irrelevant for your organization, consider marking them as "mitigated" as described in [Exempting resources and recommendations from your secure score](exempt-resource.md). You can also [disable a recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations). ### Why does Defender for Cloud show user accounts without permissions on the subscription as "requiring MFA"?
-Defender for Cloud's MFA recommendations refer to [Azure RBAC](../role-based-access-control/role-definitions-list.md) roles and the [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md) role. Verify that none of the accounts have such roles.
+Defender for Cloud's MFA recommendations refer to [Azure RBAC](../role-based-access-control/role-definitions-list.md) roles and the [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md) role. Verify that none of the accounts have such roles.
### We're enforcing MFA with PIM. Why are PIM accounts shown as noncompliant?
-Defender for Cloud's MFA recommendations currently don't support PIM accounts. You can add these accounts to a CA Policy in the Users/Group section.
+Defender for Cloud's MFA recommendations currently don't support PIM accounts. You can add these accounts to a CA Policy in the Users/Group section.
### Can I exempt or dismiss some of the accounts?
There are some limitations to Defender for Cloud's identity and access protectio
- Identity recommendations don't identify accounts that are managed with a privileged identity management (PIM) system. If you're using a PIM tool, you might see inaccurate results in the **Manage access and permissions** control.
- Identity recommendations don't support Azure AD conditional access policies with included Directory Roles instead of users and groups.

## Next steps

To learn more about recommendations that apply to other Azure resource types, see the following article:
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Title: Platforms supported by Microsoft Defender for Cloud
description: This document provides a list of platforms supported by Microsoft Defender for Cloud. Previously updated : 11/09/2021++ Last updated : 01/09/2023 + # Supported platforms This page shows the platforms and environments supported by Microsoft Defender for Cloud.
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Title: Additional threat protections from Microsoft Defender for Cloud
+ Title: Other threat protections from Microsoft Defender for Cloud
description: Learn about the threat protections available from Microsoft Defender for Cloud Previously updated : 12/05/2022 Last updated : 01/08/2023
-# Additional threat protections in Microsoft Defender for Cloud
+# Other threat protections in Microsoft Defender for Cloud
In addition to its built-in [advanced protection plans](defender-for-cloud-introduction.md), Microsoft Defender for Cloud also offers the following threat protection capabilities.
In addition to its built-in [advanced protection plans](defender-for-cloud-intro
<a name="network-layer"></a> ## Threat protection for Azure network layer
-Defender for Cloud network-layer analytics are based on sample [IPFIX data](https://en.wikipedia.org/wiki/IP_Flow_Information_Export), which are packet headers collected by Azure core routers. Based on this data feed, Defender for Cloud uses machine learning models to identify and flag malicious traffic activities. Defender for Cloud also uses the Microsoft Threat Intelligence database to enrich IP addresses.
+Defender for Cloud's network-layer analytics are based on sample [IPFIX data](https://en.wikipedia.org/wiki/IP_Flow_Information_Export), which are packet headers collected by Azure core routers. Based on this data feed, Defender for Cloud uses machine learning models to identify and flag malicious traffic activities. Defender for Cloud also uses the Microsoft Threat Intelligence database to enrich IP addresses.
Some network configurations restrict Defender for Cloud from generating alerts on suspicious network activity. For Defender for Cloud to generate network alerts, ensure that:
For a list of the Azure network layer alerts, see the [Reference table of alerts
Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security) is a cloud access security broker (CASB) that supports various deployment modes including log collection, API connectors, and reverse proxy. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your Microsoft and third-party cloud services.
-If you've enabled Microsoft Defender for Cloud Apps, and selected the integration from within Defender for Cloud's settings, your hardening recommendations from Defender for Cloud will appear in Defender for Cloud Apps with no additional configuration needed.
+Once Microsoft Defender for Cloud Apps has been enabled, you can then select the integration from within Defender for Cloud's settings. Your hardening recommendations from Defender for Cloud will appear in Defender for Cloud Apps with no other configuration needed.
> [!NOTE] > Defender for Cloud stores security-related customer data in the same geo as its resource. If Microsoft hasn't yet deployed Defender for Cloud in the resource's geo, then it stores the data in the United States. When Microsoft Defender for Cloud Apps is enabled, this information is stored in accordance with the geo location rules of Microsoft Defender for Cloud Apps. For more information, see [Data storage for non-regional services](https://azuredatacentermap.azurewebsites.net/).
Azure Application Gateway offers a web application firewall (WAF) that provides
Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. The Application Gateway WAF is based on Core Rule Set 3.0 or 2.2.9 from the Open Web Application Security Project. The WAF is updated automatically to protect against new vulnerabilities.
-If you have created [WAF Security solution](partner-integration.md#add-data-sources), your WAF alerts are streamed to Defender for Cloud with no additional configurations. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
+If you have created a [WAF Security solution](partner-integration.md#add-data-sources), your WAF alerts are streamed to Defender for Cloud with no other configuration needed. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
> [!NOTE] > Only WAF v1 is supported and will work with Microsoft Defender for Cloud.
Distributed denial of service (DDoS) attacks are known to be easy to execute. Th
To defend against DDoS attacks, purchase a license for Azure DDoS Protection and ensure you're following application design best practices. DDoS Protection provides different service tiers. For more information, see [Azure DDoS Protection overview](../ddos-protection/ddos-protection-overview.md).
-If you have Azure DDoS Protection enabled, your DDoS alerts are streamed to Defender for Cloud with no additional configuration needed. For more information on the alerts generated by DDoS Protection, see [Reference table of alerts](alerts-reference.md#alerts-azureddos).
+If you have Azure DDoS Protection enabled, your DDoS alerts are streamed to Defender for Cloud with no other configuration needed. For more information on the alerts generated by DDoS Protection, see [Reference table of alerts](alerts-reference.md#alerts-azureddos).
## Entra Permission Management (formerly Cloudknox)
-[Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
+[Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) is a cloud infrastructure entitlement management (CIEM) solution. Entra Permissions Management provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
As part of the integration, each onboarded Azure subscription, AWS account, and GCP project give you a view of your [Permission Creep Index (PCI)](../active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md). The PCI is an aggregated metric that periodically evaluates the level of risk associated with the number of unused or excessive permissions across identities and resources. PCI measures how risky identities can potentially be, based on the permissions available to them.
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
Title: Plan a Defender for Servers deployment to protect on-premises and multicl
description: Design a solution to protect on-premises and multicloud servers with Defender for Servers Last updated 11/06/2022- + + # Plan Defender for Servers deployment Defender for Servers extends protection to your Windows and Linux machines running in Azure, AWS, GCP, and on-premises. Defender for Servers integrates with Microsoft Defender for Endpoint to provide endpoint detection and response (EDR), and also provides a host of additional threat protection features.
defender-for-cloud Plan Multicloud Security Automate Connector Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-automate-connector-deployment.md
Title: Defender for Cloud planning multicloud security automating connector deployment description: Learn about automating connector deployment when planning multicloud deployment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Automate connector deployment This article is part of a series to guide you in designing a solution for cloud security posture management (CSPM) and cloud workload protection (CWP) across multicloud resources with Microsoft Defender for Cloud.
defender-for-cloud Plan Multicloud Security Define Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-define-adoption-strategy.md
Title: Defender for Cloud Planning multicloud security defining adoption strateg
description: Learn about defining broad requirements for business needs and ownership in multicloud environment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Define an adoption strategy This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection platform (CWPP) solution across multicloud resources with Microsoft Defender for Cloud.
defender-for-cloud Plan Multicloud Security Determine Access Control Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-access-control-requirements.md
Title: Defender for Cloud Planning multicloud security determine access control requirements guidance description: Learn about determining access control requirements to meet business goals in multicloud environment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Determine access control requirements This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
defender-for-cloud Plan Multicloud Security Determine Business Needs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-business-needs.md
Title: Defender for Cloud Planning multicloud security determining business needs guidance description: Learn about determining business needs to meet business goals in multicloud environment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022
defender-for-cloud Plan Multicloud Security Determine Compliance Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-compliance-requirements.md
Title: Defender for Cloud Planning multicloud security compliance requirements guidance AWS standards GCP standards description: Learn about determining compliance requirements in multicloud environment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Determine compliance requirements This article is part of a series to provide guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
defender-for-cloud Plan Multicloud Security Determine Data Residency Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-data-residency-requirements.md
Title: Defender for Cloud Planning multicloud security determine data residency requirements GDPR agent considerations guidance description: Learn about determining data residency requirements when planning multicloud deployment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
Title: Defender for Cloud Planning multicloud security determine multicloud dependencies CSPM CWPP guidance cloud workload protection description: Learn about determining multicloud dependencies when planning multicloud deployment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022
defender-for-cloud Plan Multicloud Security Determine Ownership Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-ownership-requirements.md
Title: Defender for Cloud Planning multicloud security determine ownership requirements security functions team alignment best practices guidance description: Learn about determining ownership requirements when planning multicloud deployment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Determine ownership requirements This article is one of a series providing guidance as you design a cloud security posture management (CSPM) and cloud workload protection (CWP) solution across multicloud resources with Microsoft Defender for Cloud.
defender-for-cloud Plan Multicloud Security Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-get-started.md
Title: Defender for Cloud Planning multicloud security get started guidance before you begin cloud solution description: Learn about designing a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud. ++ Last updated 10/03/2022 + # Get started This article introduces guidance to help you design a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud. The guidance can be used by cloud solution and infrastructure architects, security architects and analysts, and anyone else involved in designing a multicloud security solution.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Powershell Sample Vulnerability Assessment Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md
Title: PowerShell script sample - Enable vulnerability assessment on a SQL server description: In this article, learn how to enable vulnerability assessments on Azure SQL databases with the express configuration using a PowerShell script. ++ Last updated 11/29/2022+ # Enable vulnerability assessments on Azure SQL databases with the express configuration
defender-for-cloud Powershell Sample Vulnerability Assessment Baselines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-baselines.md
Title: PowerShell script sample - Set up baselines on Azure SQL databases description: In this article, learn how to set up baselines for vulnerability assessments on Azure SQL databases using a PowerShell script. ++ Last updated 11/29/2022+ # Set up baselines for vulnerability assessments on Azure SQL databases
defender-for-cloud Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prevent-misconfigurations.md
Title: How to prevent misconfigurations with Microsoft Defender for Cloud description: Learn how to use Defender for Cloud's 'Enforce' and 'Deny' options on the recommendations details pages Previously updated : 11/09/2021 Last updated : 01/08/2023 # Prevent misconfigurations with Enforce/Deny recommendations
-Security misconfigurations are a major cause of security incidents. Defender for Cloud can help *prevent* misconfigurations of new resources with regard to specific recommendations.
+Security misconfigurations are a major cause of security incidents. Defender for Cloud can help *prevent* misconfigurations of new resources based on specific recommendations.
This feature can help keep your workloads secure and stabilize your secure score. Enforcing a secure configuration, based on a specific recommendation, is offered in two modes: -- Using the **Deny** effect of Azure Policy, you can stop unhealthy resources from being created-- Using the **Enforce** option, you can take advantage of Azure Policy's **DeployIfNotExist** effect and automatically remediate non-compliant resources upon creation
+- Using the **Deny** effect of Azure Policy, you can stop unhealthy resources from being created.
-This can be found at the top of the resource details page for selected security recommendations (see [Recommendations with deny/enforce options](#recommendations-with-denyenforce-options)).
+- Using the **Enforce** option, you can take advantage of Azure Policy's **DeployIfNotExist** effect and automatically remediate non-compliant resources upon creation.
+
+The ability to secure configurations can be found at the top of the resource details page for selected security recommendations (see [Recommendations with deny/enforce options](#recommendations-with-denyenforce-options)).
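For context on what the **Deny** mode does under the hood, the following is a minimal sketch of assigning a policy with its effect parameter set to `Deny` from the Azure CLI. The definition name and the exact parameter name are placeholders, because each recommendation maps to its own built-in Azure Policy definition.

```bash
# Hypothetical sketch: assign a built-in policy definition with its effect set
# to Deny, so that new resources violating the rule are blocked at creation.
# <policy-definition-name> and the "effect" parameter name are placeholders.
az policy assignment create \
  --name "deny-unhealthy-resource-config" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<policy-definition-name>" \
  --params '{ "effect": { "value": "Deny" } }'
```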
## Prevent resource creation
These recommendations can be used with the **enforce** option:
- Diagnostic logs in Logic Apps should be enabled
- Diagnostic logs in Search services should be enabled
- Diagnostic logs in Service Bus should be enabled
+## Next steps
+
+[Automate responses to Microsoft Defender for Cloud triggers](workflow-automation.md)
defender-for-cloud Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/privacy.md
Title: Manage user data in Microsoft Defender for Cloud description: Learn how to manage the user data in Microsoft Defender for Cloud. Managing user data includes the ability to access, delete, or export data. Previously updated : 11/09/2021 Last updated : 01/08/2023 # Manage user data in Microsoft Defender for Cloud
For more information, see [Get Security Alerts (GET Collection)](/previous-versi
## Restricting the use of personal data for profiling or marketing without consent A Defender for Cloud user can choose to opt out by deleting their [security contact data](configure-email-notifications.md).
-[Just-in-time data](just-in-time-access-usage.md) is considered non-identifiable data and is retained for a period of 30 days.
+[Just-in-time data](just-in-time-access-usage.md) is considered non-identifiable data and is retained for 30 days.
-[Alert data](managing-and-responding-alerts.md) is considered security data and is retained for a period of two years.
+[Alert data](managing-and-responding-alerts.md) is considered security data and is retained for two years.
## Auditing and reporting Audit logs of security contact, just-in-time, and alert updates are maintained in [Azure Activity Logs](../azure-monitor/essentials/platform-logs-overview.md).+
+## Next steps
+
+[What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 08/14/2022 Last updated : 01/04/2023 # Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## July 2022
+
+Updates in July include:
+
+- [General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection](#general-availability-ga-of-the-cloud-native-security-agent-for-kubernetes-runtime-protection)
+- [Defender for Container's VA adds support for the detection of language specific packages (Preview)](#defender-for-containers-va-adds-support-for-the-detection-of-language-specific-packages-preview)
+- [Protect against the Operations Management Infrastructure vulnerability CVE-2022-29149](#protect-against-the-operations-management-infrastructure-vulnerability-cve-2022-29149)
+- [Integration with Entra Permissions Management](#integration-with-entra-permissions-management)
+- [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit)
+- [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service)
+
+### General availability (GA) of the cloud-native security agent for Kubernetes runtime protection
+
+We're excited to share that the cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
+
+The production deployments of Kubernetes clusters continue to grow as customers continue to containerize their applications. To assist with this growth, the Defender for Containers team has developed a cloud-native Kubernetes oriented security agent.
+
+The new security agent is a Kubernetes DaemonSet, based on eBPF technology and is fully integrated into AKS clusters as part of the AKS Security Profile.
+
+The security agent enablement is available through auto-provisioning, recommendations flow, AKS RP or at scale using Azure Policy.
+
+You can [deploy the Defender profile](./defender-for-containers-enable.md?pivots=defender-for-container-aks&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#deploy-the-defender-profile) today on your AKS clusters.
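As a hedged example, enabling the profile on an existing cluster from the Azure CLI (assuming a current CLI version that supports the Defender profile flag) might look like this:

```bash
# Sketch: enable the Microsoft Defender security profile on an existing AKS
# cluster. The resource group and cluster names are placeholders.
az aks update \
  --resource-group <resource-group-name> \
  --name <cluster-name> \
  --enable-defender
```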
+
+With this announcement, the runtime protection - threat detection (workload) is now also generally available.
+
+Learn more about the Defender for Container's [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+
+You can also review [all available alerts](alerts-reference.md#alerts-k8scluster).
+
+Note, if you're using the preview version, the `AKS-AzureDefender` feature flag is no longer required.
+
+### Defender for Container's VA adds support for the detection of language specific packages (Preview)
+
+Defender for Container's vulnerability assessment (VA) is able to detect vulnerabilities in OS packages deployed via the OS package manager. We have now extended VA's abilities to detect vulnerabilities included in language specific packages.
+
+This feature is in preview and is only available for Linux images.
+
+To see all of the included language specific packages that have been added, check out Defender for Container's full list of [features and their availability](supported-machines-endpoint-solutions-clouds-containers.md#registries-and-images).
+
+### Protect against the Operations Management Infrastructure vulnerability CVE-2022-29149
+
+Operations Management Infrastructure (OMI) is a collection of cloud-based services for managing on-premises and cloud environments from one single place. Rather than deploying and managing on-premises resources, OMI components are entirely hosted in Azure.
+
+Log Analytics integrated with Azure HDInsight running OMI version 13 requires a patch to remediate [CVE-2022-29149](https://nvd.nist.gov/vuln/detail/CVE-2022-29149). Review the report about this vulnerability in the [Microsoft Security Update guide](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-29149) for information about how to identify resources that are affected by this vulnerability and remediation steps.
+
+If you have Defender for Servers enabled with Vulnerability Assessment, you can use [this workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/OMI%20Vulnerability%20Dashboard) to identify affected resources.
+
+### Integration with Entra Permissions Management
+
+Defender for Cloud has integrated with [Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml), a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
+
+Each Azure subscription, AWS account, and GCP project that you onboard, will now show you a view of your [Permission Creep Index (PCI)](../active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md).
+
+Learn more about [Entra Permission Management (formerly Cloudknox)](other-threat-protections.md#entra-permission-management-formerly-cloudknox)
+
+### Key Vault recommendations changed to "audit"
+
+The effect for the Key Vault recommendations listed here was changed to "audit":
+
+| Recommendation name | Recommendation ID |
+| - | |
+| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
+| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
+| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
++
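As a side note on remediating the secret-expiration recommendation listed above, a hedged example of setting an expiration date when creating or updating a secret with the Azure CLI could look like the following; the vault name, secret name, and date are placeholders.

```bash
# Sketch: set an expiration date on a Key Vault secret so that the
# "Key Vault secrets should have an expiration date" recommendation is satisfied.
az keyvault secret set \
  --vault-name <key-vault-name> \
  --name <secret-name> \
  --value "<secret-value>" \
  --expires "2025-01-01T00:00:00Z"
```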
+### Deprecate API App policies for App Service
+
+We deprecated the following policies in favor of corresponding policies that already exist and include API apps:
+
+| To be deprecated | Changing to |
+|--|--|
+|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
+| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest Python version'` |
+| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
+| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
+| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
+| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
+| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
+| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
+| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
+ ## June 2022 Updates in June include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 11/29/2022 Last updated : 01/04/2023 # What's new in Microsoft Defender for Cloud?
Defender for Container's vulnerability assessment (VA) now includes detailed pac
This detailed package information is available for new scans of images. :::image type="content" source="medic-container-va-package-information.png":::-
-## July 2022
-
-Updates in July include:
--- [General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection](#general-availability-ga-of-the-cloud-native-security-agent-for-kubernetes-runtime-protection)-- [Defender for Container's VA adds support for the detection of language specific packages (Preview)](#defender-for-containers-va-adds-support-for-the-detection-of-language-specific-packages-preview)-- [Protect against the Operations Management Infrastructure vulnerability CVE-2022-29149](#protect-against-the-operations-management-infrastructure-vulnerability-cve-2022-29149)-- [Integration with Entra Permissions Management](#integration-with-entra-permissions-management)-- [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit)-- [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service)-
-### General availability (GA) of the cloud-native security agent for Kubernetes runtime protection
-
-We're excited to share that the cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
-
-The production deployments of Kubernetes clusters continue to grow as customers continue to containerize their applications. To assist with this growth, the Defender for Containers team has developed a cloud-native Kubernetes oriented security agent.
-
-The new security agent is a Kubernetes DaemonSet, based on eBPF technology and is fully integrated into AKS clusters as part of the AKS Security Profile.
-
-The security agent enablement is available through auto-provisioning, recommendations flow, AKS RP or at scale using Azure Policy.
-
-You can [deploy the Defender profile](./defender-for-containers-enable.md?pivots=defender-for-container-aks&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#deploy-the-defender-profile) today on your AKS clusters.
-
-With this announcement, the runtime protection - threat detection (workload) is now also generally available.
-
-Learn more about the Defender for Container's [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
-
-You can also review [all available alerts](alerts-reference.md#alerts-k8scluster).
-
-Note, if you're using the preview version, the `AKS-AzureDefender` feature flag is no longer required.
-
-### Defender for Container's VA adds support for the detection of language specific packages (Preview)
-
-Defender for Container's vulnerability assessment (VA) is able to detect vulnerabilities in OS packages deployed via the OS package manager. We have now extended VA's abilities to detect vulnerabilities included in language specific packages.
-
-This feature is in preview and is only available for Linux images.
-
-To see all of the included language specific packages that have been added, check out Defender for Container's full list of [features and their availability](supported-machines-endpoint-solutions-clouds-containers.md#registries-and-images).
-
-### Protect against the Operations Management Infrastructure vulnerability CVE-2022-29149
-
-Operations Management Infrastructure (OMI) is a collection of cloud-based services for managing on-premises and cloud environments from one single place. Rather than deploying and managing on-premises resources, OMI components are entirely hosted in Azure.
-
-Log Analytics integrated with Azure HDInsight running OMI version 13 requires a patch to remediate [CVE-2022-29149](https://nvd.nist.gov/vuln/detail/CVE-2022-29149). Review the report about this vulnerability in the [Microsoft Security Update guide](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-29149) for information about how to identify resources that are affected by this vulnerability and remediation steps.
-
-If you have Defender for Servers enabled with Vulnerability Assessment, you can use [this workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/OMI%20Vulnerability%20Dashboard) to identify affected resources.
-
-### Integration with Entra Permissions Management
-
-Defender for Cloud has integrated with [Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml), a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
-
-Each Azure subscription, AWS account, and GCP project that you onboard, will now show you a view of your [Permission Creep Index (PCI)](../active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md).
-
-Learn more about [Entra Permission Management (formerly Cloudknox)](other-threat-protections.md#entra-permission-management-formerly-cloudknox)
-
-### Key Vault recommendations changed to "audit"
-
-The effect for the Key Vault recommendations listed here was changed to "audit":
-
-| Recommendation name | Recommendation ID |
-| - | |
-| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
-| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
-| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
--
-### Deprecate API App policies for App Service
-
-We deprecated the following policies to corresponding policies that already exist to include API apps:
-
-| To be deprecated | Changing to |
-|--|--|
-|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
-| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest Python version'` |
-| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
-| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
-| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
-| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
-| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
-| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
-| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
defender-for-cloud Remediate Vulnerability Findings Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md
Title: View findings from vulnerability assessment solutions in Microsoft Defender for Cloud description: Microsoft Defender for Cloud includes a fully integrated vulnerability assessment solution from Qualys. Learn more about this Defender for Cloud extension on this page. ++ Last updated 11/09/2021 + # View and remediate findings from vulnerability assessment solutions on your VMs When your vulnerability assessment tool reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific VM.
defender-for-cloud Secure Score Access And Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-access-and-track.md
Title: Tracking your secure score in Microsoft Defender for Cloud description: Learn about the multiple ways to access and track your secure score in Microsoft Defender for Cloud. Previously updated : 11/09/2021 Last updated : 01/09/2023 + # Access and track your secure score
-You can find your overall secure score, as well as your score per subscription, through the Azure portal or programmatically as described in the following sections:
+You can find your overall secure score, and your score per subscription, through the Azure portal or programmatically as described in the following sections:
> [!TIP] > For a detailed explanation of how your scores are calculated, see [Calculations - understanding your score](secure-score-security-controls.md#calculationsunderstanding-your-score). ## Get your secure score from the portal
-Defender for Cloud displays your score prominently in the portal: it's the first main tile the Defender for Cloud overview page. Selecting this tile, takes you to the dedicated secure score page, where you'll see the score broken down by subscription. Select a single subscription to see the detailed list of prioritized recommendations and the potential impact that remediating them will have on the subscription's score.
+Defender for Cloud displays your score prominently in the portal. When you select the Secure score tile on the overview page, you're taken to the dedicated secure score page, where you'll see the score broken down by subscription. Select a single subscription to see the detailed list of prioritized recommendations and the potential effect that remediating them will have on the subscription's score.
-To recap, your secure score is shown in the following locations in Defender for Cloud's portal pages.
+Your secure score is shown in the following locations in Defender for Cloud's portal pages.
- In a tile on Defender for Cloud's **Overview** (main dashboard):
To access the secure score for multiple subscriptions with Azure Resource Graph:
:::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer** recommendation page" :::
-1. Enter your Kusto query (using the examples below for guidance).
+1. Enter your Kusto query (using the following examples for guidance).
- This query returns the subscription ID, the current score in points and as a percentage, and the maximum score for the subscription.
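A hedged sketch of such a query, run through the Azure CLI rather than the Resource Graph Explorer blade, might look like this; the property names for the score object are assumptions about the securescores record shape.

```bash
# Hypothetical sketch: return current, maximum, and percentage secure score per
# subscription from the securescores records in Azure Resource Graph.
az graph query -q "
securityresources
| where type == 'microsoft.security/securescores'
| extend current = todouble(properties.score.current), max = todouble(properties.score.max)
| project subscriptionId, current, max, percentage = round(100 * current / max, 2)
" --output table
```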
If you're a Power BI user with a Pro account, you can use the **Secure Score Ove
The dashboard contains the following two reports to help you analyze your security status:

- **Resources Summary** - provides summarized data regarding your resources' health.
- **Secure Score Summary** - provides summarized data regarding your score progress. Use the "Secure score over time per subscription" chart to view changes in the score. If you notice a dramatic change in your score, check the "detected changes that may affect your secure score" table for possible changes that could have caused the change. This table presents deleted resources, newly deployed resources, or resources whose security status changed for one of the recommendations.

:::image type="content" source="./media/secure-score-security-controls/power-bi-secure-score-dashboard.png" alt-text="The optional Secure Score Over Time Power BI dashboard for tracking your secure score over time and investigating changes.":::
defender-for-cloud Tenant Wide Permissions Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tenant-wide-permissions-management.md
Title: Grant and request tenant-wide permissions in Microsoft Defender for Cloud description: Learn how to manage tenant-wide permissions in Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 01/08/2023 # Grant and request tenant-wide visibility
A user with the Azure Active Directory (AD) role of **Global Administrator** mig
## Grant tenant-wide permissions to yourself
-To assign yourself tenant-level permissions:
+**To assign yourself tenant-level permissions**:
-1. If your organization manages resource access with [Azure AD Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-configure.md), or any other PIM tool, the global administrator role must be active for the user following the procedure below.
+1. If your organization manages resource access with [Azure AD Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-configure.md), or any other PIM tool, the global administrator role must be active for the user.
1. As a Global Administrator user without an assignment on the root management group of the tenant, open Defender for Cloud's **Overview** page and select the **tenant-wide visibility** link in the banner.
To assign yourself tenant-level permissions:
The organizational-wide view is achieved by granting roles on the root management group level of the tenant.
-1. Log out of the Azure portal, and then log back in again.
+Sign out of the Azure portal, and then sign in again.
1. Once you have elevated access, open or refresh Microsoft Defender for Cloud to verify you have visibility into all subscriptions under your Azure AD tenant.
-The simple process above performs a number of operations automatically for you:
+The process of assigning yourself tenant-level permissions performs several operations automatically for you:
-1. The user's permissions are temporarily elevated.
-1. Using the new permissions, the user is assigned to the desired Azure RBAC role on the root management group.
-1. The elevated permissions are removed.
+- The user's permissions are temporarily elevated.
-For more details of the Azure AD elevation process, see [Elevate access to manage all Azure subscriptions and management groups](../role-based-access-control/elevate-access-global-admin.md).
+- Using the new permissions, the user is assigned the desired Azure RBAC role on the root management group.
+- The elevated permissions are removed.
+
+For more information about the Azure AD elevation process, see [Elevate access to manage all Azure subscriptions and management groups](../role-based-access-control/elevate-access-global-admin.md).
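For reference, the same elevation flow can be approximated from the Azure CLI. The role, principal, and tenant root management group ID below are placeholders, and this sketch isn't a substitute for the banner-driven flow described above.

```bash
# Hypothetical sketch of the elevation flow: temporarily elevate access, assign
# an Azure RBAC role at the tenant root management group, then remove the
# elevated User Access Administrator assignment.

# 1. Elevate access (Global Administrator only).
az rest --method post \
  --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# 2. Assign the desired role at the root management group scope.
az role assignment create \
  --assignee "<user-object-id-or-upn>" \
  --role "Security Admin" \
  --scope "/providers/Microsoft.Management/managementGroups/<tenant-id>"

# 3. Remove the temporary elevation granted at root scope ("/").
az role assignment delete \
  --assignee "<user-object-id-or-upn>" \
  --role "User Access Administrator" \
  --scope "/"
```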
## Request tenant-wide permissions when yours are insufficient
-If you login to Defender for Cloud and see a banner telling you that your view is limited, you can click through to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned and the global administrator will make a decision about which role to grant.
+When you navigate to Defender for Cloud, you may see a banner telling you that your view is limited. If you see this banner, select it to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned, and the global administrator will decide which role to grant.
It's the global administrator's decision whether to accept or reject these requests.
To request elevated permissions from your global administrator:
1. From the Azure portal, open Microsoft Defender for Cloud.
-1. If you see the banner "You're seeing limited information." select it.
+1. If the banner "You're seeing limited information." is present, select it.
:::image type="content" source="media/management-groups-roles/request-tenant-permissions.png" alt-text="Banner informing a user they can request tenant-wide permissions.":::
defender-for-cloud Threat Intelligence Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/threat-intelligence-reports.md
Title: Microsoft Defender for Cloud threat intelligence report description: This page helps you to use Microsoft Defender for Cloud threat intelligence reports during an investigation to find more information about security alerts Previously updated : 11/09/2021 Last updated : 01/08/2023 + # Microsoft Defender for Cloud threat intelligence report
-This page explains how Microsoft Defender for Cloud's threat intelligence reports can help you learn more about a threat that triggered a security alert.
+Microsoft Defender for Cloud's threat intelligence reports can help you learn more about a threat that triggered a security alert.
## What is a threat intelligence report?
Defender for Cloud has three types of threat reports, which can vary according t
* **Campaign Report**: focuses on details of specific attack campaigns. * **Threat Summary Report**: covers all of the items in the previous two reports.
-This type of information is useful during the incident response process, where there's an ongoing investigation to understand the source of the attack, the attacker's motivations, and what to do to mitigate this issue in the future.
+This type of information is useful during the incident response process, such as when there's an ongoing investigation to understand the source of the attack, the attacker's motivations, and what to do to mitigate this issue in the future.
## How to access the threat intelligence report? 1. From Defender for Cloud's menu, open the **Security alerts** page. 1. Select an alert.
- The alerts details page opens with more details about the alert. Below is the **Ransomware indicators detected** alert details page.
+ The alerts details page opens with more details about the alert. For example, the **Ransomware indicators detected** alert details page:
[![Ransomware indicators detected alert details page.](media/threat-intelligence-reports/ransomware-indicators-detected-link-to-threat-intel-report.png)](media/threat-intelligence-reports/ransomware-indicators-detected-link-to-threat-intel-report.png#lightbox)
defender-for-cloud Tutorial Protect Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-protect-resources.md
Title: Access & application controls tutorial - Microsoft Defender for Cloud
description: This tutorial shows you how to configure a just-in-time VM access policy and an application control policy. Previously updated : 11/09/2021 Last updated : 01/08/2023 # Tutorial: Protect your resources with Microsoft Defender for Cloud
To step through the features covered in this tutorial, you must have Defender fo
## Manage VM access JIT VM access can be used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
-Management ports do not need to be open at all times. They only need to be open while you are connected to the VM, for example to perform management or maintenance tasks. When just-in-time is enabled, Defender for Cloud uses Network Security Group (NSG) rules, which restrict access to management ports so they cannot be targeted by attackers.
+Management ports don't need to be open at all times. They only need to be open while you're connected to the VM, for example to perform management or maintenance tasks. When just-in-time is enabled, Defender for Cloud uses Network Security Group (NSG) rules, which restrict access to management ports so they can't be targeted by attackers.
Follow the guidance in [Secure your management ports with just-in-time access](just-in-time-access-usage.md).
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
description: In this tutorial, you'll learn how to triage security alerts and de
ms.assetid: 181e3695-cbb8-4b4e-96e9-c4396754862f Previously updated : 11/09/2021 Last updated : 01/08/2023 # Tutorial: Triage, investigate, and respond to security alerts Microsoft Defender for Cloud continuously analyzes your hybrid cloud workloads using advanced analytics and threat intelligence to alert you about potentially malicious activities in your cloud resources. You can also integrate alerts from other security products and services into Defender for Cloud. Once an alert is raised, swift action is needed to investigate and remediate the potential security issue.
-In this tutorial, you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Triage security alerts
In this tutorial, you will learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites
-To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. You can try these at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). The quickstart [Get started with Defender for Cloud](get-started.md) walks you through how to upgrade.
+To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. To learn more about Defender for Cloud's pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+
+The [Get started with Defender for Cloud](get-started.md) quickstart walks you through the upgrade process.
## Triage security alerts
After you've investigated a security alert and understood its scope, you can res
:::image type="content" source="./media/tutorial-security-incident/set-status-dismissed.png" alt-text="Setting an alert's status":::
- This removes the alert from the main alerts list. You can use the filter from the alerts list page to view all alerts with **Dismissed** status.
+ The alert is then removed from the main list of alerts. You can use the filter from the alerts list page to view all alerts with **Dismissed** status.
1. We encourage you to provide feedback about the alert to Microsoft: 1. Marking the alert as **Useful** or **Not useful**.
After you've investigated a security alert and understood its scope, you can res
> [!TIP] > We review your feedback to improve our algorithms and provide better security alerts.
-## End the tutorial
+## Clean up resources
Other quickstarts and tutorials in this collection build upon this quickstart. If you plan to continue to work with subsequent quickstarts and tutorials, keep automatic provisioning and Defender for Cloud's enhanced security features enabled.
If you don't plan to continue, or you want to disable either of these features:
> Disabling extensions does not remove the Log Analytics agent from Azure VMs that already have the agent, but does limits security monitoring for your resources. ## Next steps
-In this tutorial, you learned about Defender for Cloud features to be used when responding to a security alert. For related material see:
+
+In this tutorial, you learned about Defender for Cloud features to be used when responding to a security alert. For related material, see:
- [Respond to Microsoft Defender for Key Vault alerts](defender-for-key-vault-usage.md) - [Security alerts - a reference guide](alerts-reference.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 12/27/2022 Last updated : 12/28/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-is-set-to-be-deprecated) | January 2023 | | [The name of the Secure score control Protect your applications with Azure advanced networking solutions will be changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-will-be-changed) | January 2023 |
-### Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)
-
-**Estimated date for change: January 2023**
-
-The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation is currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy and isn't included in the calculations of your secure score.
-
-We recommend that you use the recommendation to remediate vulnerabilities in your containers so that the recommendation won't impact your secure score when the recommendation is released as GA. Learn about [recommendation remediation](implement-security-recommendations.md).
- ### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated **Estimated date for change: January 2023**
The policy [`Vulnerability Assessment settings for SQL server should contain an
The Defender for SQL vulnerability assessment email report will still be available and existing email configurations won't change after the policy is deprecated.
+### Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)
+
+**Estimated date for change: January 2023**
+
+The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation is currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy and isn't included in the calculations of your secure score.
+
+We recommend that you use the recommendation to remediate vulnerabilities in your containers so that the recommendation won't impact your secure score when the recommendation is released as GA. Learn about [recommendation remediation](implement-security-recommendations.md).
+
+### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault is set to be deprecated
+
+**Estimated date for change: January 2023**
+
+The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) is set to be deprecated and will be replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
+
+The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1) will also be replaced by this new policy in all standards displayed in the regulatory compliance dashboard.
+ ### The name of the Secure score control Protect your applications with Azure advanced networking solutions will be changed **Estimated date for change: January 2023**
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/windows-admin-center-integration.md
Title: How to protect Windows Admin Center servers with Microsoft Defender for Cloud description: This article explains how to integrate Microsoft Defender for Cloud with Windows Admin Center Previously updated : 11/09/2021 Last updated : 01/08/2023 + # Protect Windows Admin Center resources with Microsoft Defender for Cloud
-Windows Admin Center is a management tool for your Windows servers. It's a single location for system administrators to access the majority of the most commonly used admin tools. From within Windows Admin Center, you can directly onboard your on-premises servers into Microsoft Defender for Cloud. You can then view a summary of your security recommendations and alerts directly in the Windows Admin Center experience.
+Windows Admin Center is a management tool for your Windows servers. It's a single location for system administrators to access the most commonly used admin tools. From within Windows Admin Center, you can directly onboard your on-premises servers into Microsoft Defender for Cloud. You can then view a summary of your security recommendations and alerts directly in the Windows Admin Center experience.
> [!NOTE] > Your Azure subscription and the associated Log Analytics workspace both need to have Microsoft Defender for Cloud's enhanced security features enabled in order to enable the Windows Admin Center integration.
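As a hedged illustration only (the subscription ID and plan choice are assumptions, not values from the original article), the enhanced security features for servers can be enabled from the Azure CLI before you start the integration:

```bash
# Minimal sketch: enable the Defender for Servers plan on the subscription
# that hosts the Log Analytics workspace. The subscription ID is a placeholder.
az account set --subscription "<subscription-id>"
az security pricing create --name VirtualMachines --tier standard

# Confirm the plan tier
az security pricing show --name VirtualMachines --query pricingTier
```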
Windows Admin Center is a management tool for your Windows servers. It's a singl
When you've successfully onboarded a server from Windows Admin Center to Microsoft Defender for Cloud, you can:
-* View security alerts and recommendations inside the Defender for Cloud extension in Windows Admin Center
-* View the security posture and retrieve additional detailed information of your Windows Admin Center managed servers in Defender for Cloud within the Azure portal (or via an API)
+- View security alerts and recommendations inside the Defender for Cloud extension in Windows Admin Center.
+
+- View the security posture and retrieve other detailed information of your Windows Admin Center managed servers in Defender for Cloud within the Azure portal (or via an API).
-By combining these two tools, Defender for Cloud becomes your single pane of glass to view all your security information, whatever the resource: protecting your Windows Admin Center managed on-premises servers, your VMs, and any additional PaaS workloads.
+Through the combination of these two tools, Defender for Cloud becomes your single pane of glass for all your security information, whatever the resource: your Windows Admin Center managed on-premises servers, your VMs, and any other PaaS workloads.
## Onboard Windows Admin Center managed servers into Defender for Cloud
By combining these two tools, Defender for Cloud becomes your single pane of gla
> [!NOTE] > If the server is already onboarded to Defender for Cloud, the set-up window will not appear.
-1. Click **Sign in to Azure and set up**.
+1. Select **Sign in to Azure and set up**.
![Onboarding Windows Admin Center extension to Defender for Cloud.](./media/windows-admin-center-integration/onboarding-from-wac-welcome.png) 1. Follow the instructions to connect your server to Defender for Cloud. After you've entered the necessary details and confirmed, Defender for Cloud makes the necessary configuration changes to ensure that all of the following are true:
By combining these two tools, Defender for Cloud becomes your single pane of gla
## View security recommendations and alerts in Windows Admin Center
-Once onboarded, you can view your alerts and recommendations directly in the Microsoft Defender for Cloud area of Windows Admin Center. Click a recommendation or an alert to view them in the Azure portal. There, you'll get additional information and learn how to remediate issues.
+Once onboarded, you can view your alerts and recommendations directly in the Microsoft Defender for Cloud area of Windows Admin Center. Select a recommendation or an alert to view it in the Azure portal. There, you'll get additional information and learn how to remediate issues.
[![Defender for Cloud recommendations and alerts as seen in Windows Admin Center.](media/windows-admin-center-integration/asc-recommendations-and-alerts-in-wac.png)](media/windows-admin-center-integration/asc-recommendations-and-alerts-in-wac.png#lightbox)
From Microsoft Defender for Cloud:
* To view security recommendations for all your Windows Admin Center servers, open [asset inventory](asset-inventory.md) and filter to the machine type that you want to investigate. Select the **VMs and Computers** tab.
-* To view security alerts for all your Windows Admin Center servers, open **Security alerts**. Click **Filter** and ensure **only** "Non-Azure" is selected:
+* To view security alerts for all your Windows Admin Center servers, open **Security alerts**. Select **Filter** and ensure **only** "Non-Azure" is selected:
:::image type="content" source="./media/windows-admin-center-integration/filtering-alerts-by-environment.png" alt-text="Filter security alerts for Windows Admin Center managed servers." lightbox="./media/windows-admin-center-integration/filtering-alerts-by-environment.png":::+
+## Next steps
+
+[Integrate security solutions in Microsoft Defender for Cloud](partner-integration.md)
defender-for-cloud Workload Protections Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workload-protections-dashboard.md
Title: Microsoft Defender for Cloud's workload protection dashboard and its features description: Learn about the features of Microsoft Defender for Cloud's workload protection dashboard Previously updated : 11/09/2021 Last updated : 01/09/2023 # The workload protections dashboard This dashboard provides: -- Visibility into your Microsoft Defender for Cloud coverage across your different resource types-- Links to configure advanced threat protection capabilities-- The onboarding state and agent installation-- Threat detection alerts
+- Visibility into your Microsoft Defender for Cloud coverage across your different resource types.
+
+- Links to configure advanced threat protection capabilities.
+
+- The onboarding state and agent installation.
+
+- Threat detection alerts.
To access the workload protections dashboard, select **Workload protections** from the Cloud Security section of Defender for Cloud's menu.
To access the workload protections dashboard, select **Workload protections** fr
The dashboard includes the following sections:
-1. **Microsoft Defender for Cloud coverage** - Here you can see the resources types that are in your subscription and eligible for protection by Defender for Cloud. Wherever relevant, you'll have the option to upgrade too. If you want to upgrade all possible eligible resources, select **Upgrade all**.
+1. **Microsoft Defender for Cloud coverage** - Here you can see the resource types that are in your subscription and eligible for protection by Defender for Cloud. Wherever relevant, you can upgrade here as well. If you want to upgrade all possible eligible resources, select **Upgrade all**.
2. **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates an alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Selecting anywhere in this graph opens the **Security alerts page**.
defender-for-iot How To Provision Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-provision-micro-agent.md
This article explains how to provision the standalone Microsoft Defender for IoT micro agent using [Azure IoT Hub Device Provisioning Service](../../iot-dps/about-iot-dps.md) with [X.509 certificate attestation](../../iot-dps/concepts-x509-attestation.md).
-To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices see [Create and provision IoT Edge devices at scale]../../iot-edge/how-to-provision-devices-at-scale-linux-tpm.md).
+To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices, see [Create and provision IoT Edge devices at scale](/azure/iot-edge/how-to-provision-devices-at-scale-linux-tpm).
## Prerequisites -- An Azure account with an active subscription. [Create an account for free]https://azure.microsoft).
+- An Azure account with an active subscription. For more information, see [Create an Azure account](https://azure.microsoft.com/free).
- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
To learn how to configure the Microsoft Defender for IoT micro agent for Edge de
1. In the [Azure portal](https://portal.azure.com), go to your instance of the IoT Hub device provisioning service.
-1. Under Settings, select Manage enrollments.
-1. Select Add individual enrollment, and then complete the steps to configure the enrollment:
- 1. Choose X.509 at the identity attestation Mechanism and choose your CA.
+1. Under **Settings**, select **Manage enrollments**.
+
+1. Select **Add individual enrollment**, and then complete the steps to configure the enrollment:
+
+   - In the **Mechanism** field, select **X.509** as the identity attestation mechanism, and then choose your CA.
+
1. Navigate into your destination IoT Hub.
-1. Create a new module issued by the same certificate.
-1. Configure the micro agent to use the created module (Note that the device does not have to exist yet).
-1. Navigate back to DPS and provision the device through DPS.
+
+1. [Create a new module](tutorial-create-micro-agent-module-twin.md) issued by the same certificate.
+
+1. [Configure the micro agent to use the created module](tutorial-standalone-agent-binary-installation.md#authenticate-using-a-module-identity-connection-string) (note that the device does not have to exist yet).
+
+1. Navigate back to DPS and [provision the device through DPS](/azure/iot-dps/quick-create-simulated-device-x509).
+ 1. Navigate to the configured device in the destination IoT Hub.+ 1. Create a new module for the device issued by the same CA authenticator.
-1. Run the agent that you configured in step 4 to see it connects to the device.
+
+1. Run the agent that you configured in step 4 to confirm it connects to the device.
> [!NOTE]
-> Using this procedure, while you don't need the device to exists before configuring the agent, you do need to know the device name in advance in order to issue the certificate for the final module correctly.
+> While this procedure doesn't require the device to exist before you configure the agent, you do need to know the device name in advance in order to issue the certificate for the final module correctly.
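For readers who prefer to script these steps, the following Azure CLI sketch (using the azure-iot extension) mirrors the enrollment and module-creation steps above. It's an illustration only: the DPS name, hub name, certificate path, registration ID, and module ID are placeholders rather than values from the original article.

```bash
# Hedged sketch of the same flow with the Azure CLI azure-iot extension.
az extension add --name azure-iot

# Create an individual DPS enrollment that uses X.509 attestation with your device certificate
az iot dps enrollment create \
  --dps-name "<dps-name>" --resource-group "<resource-group>" \
  --enrollment-id "<registration-id>" \
  --attestation-type x509 \
  --certificate-path ./device-cert.pem

# In the destination IoT hub, create the module that the micro agent will use,
# authenticated with a certificate signed by the same CA
az iot hub module-identity create \
  --hub-name "<hub-name>" --device-id "<device-name>" \
  --module-id "<module-id>" --am x509_ca
```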
## Next steps
-> [Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md)
+[Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md)
-> [Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
+[Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
This article provides a reference of the [alerts](how-to-manage-cloud-alerts.md)
> The **Alerts** page in the Azure portal is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## OT alerts disabled by default
+## OT alerts turned off by default
-Several alerts are disabled by default, as indicated by asterisks (*) in the tables below. OT sensor **Admin** users can enable or disable alerts from the **Support** page on a specific OT network sensor.
+Several alerts are turned off by default, as indicated by asterisks (*) in the tables below. OT sensor **Admin** users can enable or disable alerts from the **Support** page on a specific OT network sensor.
-If you disable alerts that are referenced in other places, such as [alert forwarding rules](how-to-forward-alert-information-to-partners.md), make sure to update those references as needed.
+If you turn off alerts that are referenced in other places, such as [alert forwarding rules](how-to-forward-alert-information-to-partners.md), make sure to update those references as needed.
+
+## Alert severities
+
+Defender for IoT alerts use the following severity levels:
+
+- **Critical**: Indicates a malicious attack that should be handled immediately.
+
+- **Major**: Indicates a security threat that's important to address.
+
+- **Minor**: Indicates some deviation from the baseline behavior that might contain a security threat.
+
+- **Warning**: Indicates some deviation from the baseline behavior with no security threats.
## Supported alert types
Policy engine alerts describe detected deviations from learned baseline behavior
| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware | | **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0869: Standard Application Layer Protocol |
-| **Function Code Raised Unauthorized Exception [*](#ot-alerts-disabled-by-default)** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
+| **Function Code Raised Unauthorized Exception [*](#ot-alerts-turned-off-by-default)** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0835: Manipulate I/O Image |
| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter | | **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware |
-| **Illegal HTTP Communication [*](#ot-alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
+| **Illegal HTTP Communication [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0846: Remote System Discovery |
| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device | | **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware | | **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
Policy engine alerts describe detected deviations from learned baseline behavior
| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware | | **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Suspicion of Illegal Integrity Scan [*](#ot-alerts-disabled-by-default)** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Suspicion of Illegal Integrity Scan [*](#ot-alerts-turned-off-by-default)** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
Policy engine alerts describe detected deviations from learned baseline behavior
| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - Impair Process Control <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0855: Unauthorized Command Message | | **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
-| **Unauthorized Database Login [*](#ot-alerts-disabled-by-default)** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
+| **Unauthorized Database Login [*](#ot-alerts-turned-off-by-default)** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br> - Collection <br><br> **Techniques:** <br> - T0859: Valid Accounts <br> - T0811: Data from Information Repositories |
| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Initial Access <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0811: Data from Information Repositories | | **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Collection <br> - LateralMovement <br> - Persistence <br><br> **Techniques:** <br> - T0801: Monitor Process State <br> - T0859: Valid Accounts | | **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0846: Remote System Discovery <br> - T0855: Unauthorized Command Message | | **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0822: External Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Unauthorized HTTP SOAP Action [*](#ot-alerts-disabled-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
-| **Unauthorized HTTP User Agent [*](#ot-alerts-disabled-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **Unauthorized HTTP SOAP Action [*](#ot-alerts-turned-off-by-default)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br> - Execution <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0871: Execution through API |
+| **Unauthorized HTTP User Agent [*](#ot-alerts-turned-off-by-default)** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0883: Internet Accessible Device | | **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking | | **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming | **Tactics:** <br> - Impair Process Control <br> - Execution <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0821: Modify Controller Tasking |
Policy engine alerts describe detected deviations from learned baseline behavior
| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |**Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Unpermitted Usage of Internal Indication (IIN) [*](#ot-alerts-disabled-by-default)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't authorized as learned traffic on your network. | Major | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Unpermitted Usage of Internal Indication (IIN) [*](#ot-alerts-turned-off-by-default)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter | ## Anomaly engine alerts
Anomaly engine alerts describe detected anomalies in network activity.
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques | |--|--|--|--|--|
-| **Abnormal Exception Pattern in Slave [*](#ot-alerts-disabled-by-default)** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
-| **Abnormal HTTP Header Length [*](#ot-alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
-| **Abnormal Number of Parameters in HTTP Header [*](#ot-alerts-disabled-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Exception Pattern in Slave [*](#ot-alerts-turned-off-by-default)** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Abnormal HTTP Header Length [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
+| **Abnormal Number of Parameters in HTTP Header [*](#ot-alerts-turned-off-by-default)** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Command And Control <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0869: Standard Application Layer Protocol |
| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Abnormal Termination of Applications [*](#ot-alerts-disabled-by-default)** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
-| **Abnormal Traffic Bandwidth [*](#ot-alerts-disabled-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **Abnormal Traffic Bandwidth Between Devices [*](#ot-alerts-disabled-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Termination of Applications [*](#ot-alerts-turned-off-by-default)** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior | **Tactics:** <br> - Persistence <br> - Impact <br><br> **Techniques:** <br> - T0889: Modify Program <br> - T0831: Manipulation of Control |
+| **Abnormal Traffic Bandwidth [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Abnormal Traffic Bandwidth Between Devices [*](#ot-alerts-turned-off-by-default)** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
-| **ARP Address Scan Detected [*](#ot-alerts-disabled-by-default)** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
-| **ARP Spoofing [*](#ot-alerts-disabled-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
+| **ARP Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **ARP Spoofing [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Collection <br><br> **Techniques:** <br> - T0830: Man in the Middle |
| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - LateralMovement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O | | **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior | **Tactics:** <br> - Lateral Movement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O |
-| **Excessive Restart Rate of an Outstation [*](#ot-alerts-disabled-by-default)** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
+| **Excessive Restart Rate of an Outstation [*](#ot-alerts-turned-off-by-default)** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O |
| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication | **Tactics:** <br> - Persistence <br> - Execution <br> - LateralMovement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0853: Scripting <br> - T0859: Valid Accounts |
-| **ICMP Flooding [*](#ot-alerts-disabled-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
-| **Illegal HTTP Header Content [*](#ot-alerts-disabled-by-default)** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Inactive Communication Channel [*](#ot-alerts-disabled-by-default)** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
-| **Long Duration Address Scan Detected [*](#ot-alerts-disabled-by-default)** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **ICMP Flooding [*](#ot-alerts-turned-off-by-default)** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Collection <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0830: Man in the Middle |
+| **Illegal HTTP Header Content [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Inactive Communication Channel [*](#ot-alerts-turned-off-by-default)** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop |
+| **Long Duration Address Scan Detected [*](#ot-alerts-turned-off-by-default)** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0812: Default Credentials <br> - T0806: Brute Force I/O | | **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing | | **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing | | **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior | **Tactics:** <br> - InitialAccess <br> - LateralMovement <br><br> **Techniques:** <br> - T0869: Exploitation of Remote Services |
-| **Unexpected Traffic for Standard Port [*](#ot-alerts-disabled-by-default)** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
+| **Unexpected Traffic for Standard Port [*](#ot-alerts-turned-off-by-default)** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | **Tactics:** <br> - Command And Control <br> - Discovery <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol <br> - T0842: Network Sniffing |
## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques | |--|--|--|--|--|
-| **Excessive Malformed Packets In a Single Session [*](#ot-alerts-disabled-by-default)** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
+| **Excessive Malformed Packets In a Single Session [*](#ot-alerts-turned-off-by-default)** | An abnormal number of malformed packets was sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0806: Brute Force I/O |
| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change | **Tactics:** <br> - Inhibit Response Function <br> - Persistence <br><br> **Techniques:** <br> - T0857: System Firmware | | **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **Illegal Connection Attempt on Port 0** | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | | **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Illegal MODBUS Operation (Function Code Zero) [*](#ot-alerts-disabled-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands |**Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Illegal Protocol Version [*](#ot-alerts-disabled-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - LateralMovement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0820: Remote Services <br> - T0836: Modify Parameter |
+| **Illegal MODBUS Operation (Function Code Zero) [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands |**Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Illegal Protocol Version [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Initial Access <br> - LateralMovement <br> - Impair Process Control <br><br> **Techniques:** <br> - T0820: Remote Services <br> - T0836: Modify Parameter |
| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
Protocol engine alerts describe detected deviations in the packet structure, or
| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter | | **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Data Address Parameter [*](#ot-alerts-disabled-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Data Value Parameter [*](#ot-alerts-disabled-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
-| **Slave Device Received Illegal Function Code [*](#ot-alerts-disabled-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Address Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Data Value Parameter [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
+| **Slave Device Received Illegal Function Code [*](#ot-alerts-turned-off-by-default)** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter |
| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message <br> - T0836: Modify Parameter | | **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Usage of Improper Formatting by Outstation [*](#ot-alerts-disabled-by-default)** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Usage of Improper Formatting by Outstation [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter | ## Malware engine alerts
Malware engine alerts describe detected malicious network activity.
| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | | **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information | | **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
-| **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-disabled-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | | **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer | | **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Suspicion of Remote Windows Service Management [*](#ot-alerts-disabled-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: NetworkExternal Remote Services |
+| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
-| **Suspicious Traffic Detected [*](#ot-alerts-disabled-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Suspicious Traffic Detected [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information | ## Operational engine alerts
Operational engine alerts describe detected operational incidents, or malfunctio
| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service | | **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Change of Device Configuration [*](#ot-alerts-disabled-by-default)** | A configuration change was detected on a source device. | Minor | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Continuous Event Buffer Overflow at Outstation [*](#ot-alerts-disabled-by-default)** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
+| **Change of Device Configuration [*](#ot-alerts-turned-off-by-default)** | A configuration change was detected on a source device. | Minor | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Continuous Event Buffer Overflow at Outstation [*](#ot-alerts-turned-off-by-default)** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow | **Tactics:** <br> - Inhibit Response Function <br> - Impair Process Control <br> - Persistence <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0806: Brute Force I/O <br> - T0839: Module Firmware |
| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service | | **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service | | **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
Operational engine alerts describe detected operational incidents, or malfunctio
| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Lateral Movement <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0843: Program Download <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service | | **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes | **Tactics:** <br> - Impair Process Control <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0803: Block Command Message <br> - T0821: Modify Controller Tasking |
-| **GOOSE Dataset Configuration was Changed [*](#ot-alerts-disabled-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **GOOSE Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues | **Tactics:** <br> - Evasion <br> - Execution <br><br> **Techniques:** <br> - T0858: Change Operating Mode |
-| **HTTP Client Error [*](#ot-alerts-disabled-by-default)** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
+| **HTTP Client Error [*](#ot-alerts-turned-off-by-default)** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior | **Tactics:** <br> - Discovery <br> - Impair Process Control <br><br> **Techniques:** <br> - T0842: Network Sniffing <br> - T0836: Modify Parameter | | **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication | **Tactics:** <br> - Lateral Movement <br> - Persistence <br><br> **Techniques:** <br> - T0859: Valid Accounts | | **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
Operational engine alerts describe detected operational incidents, or malfunctio
| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0809: Data Destruction | | **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message | | **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands | **Tactics:** <br> - Defense Evasion <br> - Execution <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0858: Change Operating Mode <br> - T0814: Denial of Service |
-| **RPC Operation Failed [*](#ot-alerts-disabled-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
-| **Sampled Values Message Dataset Configuration was Changed [*](#ot-alerts-disabled-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
-| **Slave Device Unrecoverable Failure [*](#ot-alerts-disabled-by-default)** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
+| **RPC Operation Failed [*](#ot-alerts-turned-off-by-default)** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0855: Unauthorized Command Message |
+| **Sampled Values Message Dataset Configuration was Changed [*](#ot-alerts-turned-off-by-default)** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | **Tactics:** <br> - Impair Process Control <br><br> **Techniques:** <br> - T0836: Modify Parameter |
+| **Slave Device Unrecoverable Failure [*](#ot-alerts-turned-off-by-default)** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service <br> - T0881: Service Stop | | **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0881: Service Stop | | **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
Operational engine alerts describe detected operational incidents, or malfunctio
For more information, see: - [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)-- [Manage alerts](how-to-manage-the-alert-event.md) - [View alerts on your sensor](how-to-view-alerts.md) - [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md) - [Forward alert information](how-to-forward-alert-information-to-partners.md)
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
+
+ Title: Microsoft Defender for IoT alerts
+description: Learn about Microsoft Defender for IoT alerts across the Azure portal, OT network sensors, and on-premises management consoles.
Last updated : 12/12/2022+++
+# Microsoft Defender for IoT alerts
+
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. Alerts are triggered when OT or Enterprise IoT network sensors detect changes or suspicious activity in network traffic that needs your attention.
+
+For example:
++
+Use the details shown on the **Alerts** page, or on an alert details page, to investigate and take action that remediates any risk to your network, either from related devices or the network process that triggered the alert.
+
+> [!TIP]
+> Use alert remediation steps to help your SOC teams understand possible issues and resolutions. We recommend that you review recommended remediation steps before updating an alert status or taking action on the device or network.
+>
+
+## Alert management options
+
+Defender for IoT alerts are available in the Azure portal, OT network sensor consoles, and the on-premises management console.
+
+While you can view alert details, investigate alert context, and triage and manage alert statuses from any of these locations, each location also offers extra alert actions. The following table describes the alerts supported for each location and the extra actions available from that location only:
+
+|Location |Description | Extra alert actions |
+||||
+|**Azure portal** | Alerts from all cloud-connected OT sensors and Enterprise IoT sensors | - View related MITRE ATT&CK tactics and techniques <br>- Use out-of-the-box workbooks for visibility into high priority alerts <br>- View alerts from Microsoft Sentinel and run deeper investigations with [Microsoft Sentinel playbooks and workbooks](concept-sentinel-integration.md). |
+|**OT network sensor consoles** | Alerts generated by that OT sensor | - View the alert's source and destination in the **Device map** <br>- View related events on the **Event timeline** <br>- Forward alerts directly to partner vendors <br>- Create alert comments <br> - Create custom alert rules <br>- Unlearn alerts |
+|**An on-premises management console** | Alerts generated by connected OT sensors | - Forward alerts directly to partner vendors <br> - Create alert exclusion rules |
+
+For more information, see [Accelerating OT alert workflows](#accelerating-ot-alert-workflows) and [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options) below.
+
+Alert options also differ depending on your location and user role. For more information, see [Azure user roles and permissions](roles-azure.md) and [On-premises users and roles](roles-on-premises.md).
+
+### Enterprise IoT alerts and Microsoft Defender for Endpoint
+
+Alerts triggered by Enterprise IoT sensors are shown in the Azure portal only.
+
+If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Microsoft 365 Defender only.
+
+For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).
+
+## Managing OT alerts in a hybrid environment
+
+Users working in hybrid environments may be managing OT alerts in Defender for IoT on the Azure portal, the OT sensor, and an on-premises management console.
+
+Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well.
+
+Setting an alert status to **Closed** or **Muted** on a sensor or on-premises management console updates the alert status to **Closed** on the Azure portal. On the on-premises management console, the **Closed** alert status is called **Acknowledged**.
+
+> [!TIP]
+> If you're working with Microsoft Sentinel, we recommend that you configure the integration to also [synchronize alert status](concept-sentinel-integration.md#defender-for-iot-alerts-in-microsoft-sentinel) with Microsoft Sentinel, and then manage alert statuses together with the related Microsoft Sentinel incidents.
+>
+> For more information, see [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md).
+>
+
+## Accelerating OT alert workflows
+
+New alerts are automatically closed if no identical traffic is detected for 90 days after the initial detection. If identical traffic is detected within those 90 days, the 90-day count is reset.
+
+In addition to the default behavior, you may want to help your SOC and OT management teams triage and remediate alerts faster. Sign in to an OT sensor or an on-premises management console as an **Admin** user to use the following options:
+
+- **Create custom alert rules**. OT sensors only.
+
+ Add custom alert rules to trigger alerts for specific activity on your network that's not covered by out-of-the-box functionality.
+
+ For example, for an environment running MODBUS, you might add a rule to detect any written commands to a memory register on a specific IP address and Ethernet destination.
+
+ For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor).
+
+- **Create alert comments**. OT sensors only.
+
+ Create a set of alert comments that other OT sensor users can add to individual alerts, with details like custom mitigation steps, communications to other team members, or other insights or warnings about the event.
+
+ Team members can reuse these custom comments as they triage and manage alert statuses. Alert comments are shown in a comments area on an alert details page. For example:
+
+ :::image type="content" source="media/alerts/alert-comments.png" alt-text="Screenshot of the alert comments area.":::
+
+ For more information, see [Create alert comments on an OT sensor](how-to-accelerate-alert-incident-response.md#create-alert-comments-on-an-ot-sensor).
+
+- **Create alert exclusion rules**. On-premises management consoles only.
+
+ If you're working with an on-premises management console, define *alert exclusion rules* to ignore events that meet specific criteria across multiple sensors. For example, you might create an alert exclusion rule to ignore all events that would trigger irrelevant alerts during a specific maintenance window.
+
+ Alerts ignored by exclusion rules aren't shown on the Azure portal, sensor, or on-premises management console, or in the event logs.
+
+ For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).
+
+- **Forward alert data to partner systems**, such as partner SIEMs, syslog servers, specified email addresses, and more.
+
+ Supported from both OT sensors and on-premises management consoles. For more information, see [Forward alert information](how-to-forward-alert-information-to-partners.md).
+
+## Alert statuses and triaging options
+
+Use the following alert statuses and triaging options to manage alerts across Defender for IoT.
+
+When triaging an alert, consider that some alerts might reflect valid network changes, such as an authorized device attempting to access a new resource on another device.
+
+Triaging options on the OT sensor and the on-premises management console are available for OT alerts only, while the options on the Azure portal support both OT and Enterprise IoT alerts.
+
+Use the following table to learn more about each alert status and triage option.
++
+|Status / triage action |Available on |Description |
+||||
+|**New** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console | *New* alerts are alerts that haven't yet been triaged or investigated by the team. New traffic detected for the same devices doesn't generate a new alert, but is added to the existing alert. <br><br>On the on-premises management console, *New* alerts are called *Unacknowledged*.<br><br>**Note**: You might see multiple *New* or *Unacknowledged* alerts with the same name. In such cases, each separate alert is triggered by separate traffic, on different sets of devices. |
+|**Active** | - Azure portal only | Set an alert to *Active* to indicate that an investigation is underway, but that the alert can't yet be closed or otherwise triaged. <br><br>This status has no effect elsewhere in Defender for IoT. |
+|**Closed** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console | Close an alert to indicate that it's fully investigated, and you want to be alerted again the next time the same traffic is detected.<br><br>Closing an alert adds it to the sensor event timeline.<br><br>On the on-premises management console, *Closed* alerts are called *Acknowledged*. |
+|**Learn** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console <br><br>*Unlearning* an alert is available only on the OT sensor. | Learn an alert when you want to close it and add it as allowed traffic, so that you aren't alerted again the next time the same traffic is detected. <br><br>For example, when the sensor detects firmware version changes following standard maintenance procedures, or when a new, expected device is added to the network. <br><br>Learning an alert closes the alert and adds an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating other OT sensor reports. <br><br>Learning alerts is available for selected alerts only, mostly those triggered by *Policy* and *Anomaly* engine alerts. |
+|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see it again for the same traffic, but without adding it as allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
+
+> [!TIP]
+> If you know ahead of time which events are irrelevant for you, such as during a maintenance window, or if you don't want to track the event in the event timeline, create an alert exclusion rule on an on-premises management console instead.
+>
+> For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).
+>
+
+## Next steps
+
+Review alert types and messages to help you understand and plan remediation actions and playbook integrations. For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
+
+> [!div class="nextstepaction"]
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+
+> [!div class="nextstepaction"]
+> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
+
+> [!div class="nextstepaction"]
+> [View and manage alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
defender-for-iot Management Alert Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/api/management-alert-apis.md
The maintenance windows that define with the `maintenanceWindow` API appear in t
> [!IMPORTANT]
-> This API is supported for maintenance purposes only and for a limited time period, and is not meant to be used instead of [alert exclusion rules](../how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules). Use this API for one-time, temporary maintenance operations only.
+> This API is supported for maintenance purposes only and for a limited time period, and is not meant to be used instead of [alert exclusion rules](../how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console). Use this API for one-time, temporary maintenance operations only.
**URI**: `/external/v1/maintenanceWindow`
defender-for-iot Dell Poweredge R350 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md
The following image shows a view of the Dell PowerEdge R350 back panel:
|2| 370-AGQU | 16 GB UDIMM, 3200MT/s, ECC | |1| 384-BBBH | Power Saving BIOS Settings | |1| 800-BBDM | UEFI BIOS Boot Mode with GPT Partition |
-|2| 450-AKMP | Dual, Hot-Plug, Redundant Power Supply (1+1), 600W |
|1| 450-AADY | C13 to C14, PDU Style, 10 AMP, 6.5 Feet (2m), Power Cord | |1| 330-BBWS | Riser Config 0, 1 x8, 1 x16 slots | |1| 384-BCYX | OEM R350 Motherboard with Broadcom 5720 Dual Port 1 Gb On-Board LOM |
The following image shows a view of the Dell PowerEdge R350 back panel:
|4| 400-BLLH | 1 TB Hard Drive SATA 6 Gbps 7.2K 512n 3.5in Hot-Plug | |1| 540-BBDF | Broadcom 5719 Quad Port 1 GbE BASE-T Adapter, PCIe Low Profile | |1| 780-BCDQ | RAID 10 |
+|2| 450-AKMP | Dual, Hot-Plug, Redundant Power Supply (1+1), 600W |
+
+## Optional Components
+|Quantity|PN|Description|
+|-||-|
+|2| 450-AMJH | Dual, Hot-Plug, Power Supply, 700W MM HLAC (200-220Vac) Titanium, Redundant (1+1), by LiteOn, NAF|
## Optional Expansion Modules
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
This procedure describes how to update the HPE BIOS configuration for your OT se
> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED). >
-### Install iLO remotely from a virtual drive
+### Install OT sensor software with iLO
This procedure describes how to install OT sensor software remotely, using the iLO console and a virtual drive.
-**To install sensor software with iLO**:
- 1. Sign in to the iLO console, and then right-click the servers' screen. 1. Select **HTML5 Console**.
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
This procedure describes how to create a virtual machine for your on-premises ma
1. Enter a name for the virtual machine and select **Next**.
-1. Select **Generation** and set it to **Generation 1**, and then select **Next**.
+1. Select **Generation** and set it to **Generation 1** or **Generation 2**, and then select **Next**.
1. Specify the [memory allocation for your organization's needs](../ot-appliance-sizing.md), and then select **Next**.
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
In contrast, when working with locally managed sensors:
For more information, see [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) and [Manage OT sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md).
-### Analytics engines on OT network sensors
+### Defender for IoT analytics engines
-OT network sensors analyze ingested data using built-in analytics engines, and trigger alerts based on both real-time and pre-recorded traffic.
+Defender for IoT network sensors analyze ingested data using built-in analytics engines, and trigger alerts based on both real-time and pre-recorded traffic.
Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
For example, the **policy violation detection** engine models industry control s
Since many detection algorithms were built for IT, rather than OT networks, the extra baseline for ICS networks helps to shorten the system's learning curve for new detections.
-OT network sensors include the following analytics engines:
+Defender for IoT network sensors include the following analytics engines:
|Name |Description | |||
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
Defender for IoT can detect the following protocols when identifying assets and
|Brand / Vendor |Protocols | |||
-|**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension) |
+|**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension)<br> CNCP<br> RNRP<br> |
|**ASHRAE** | BACnet<br> BACnet BACapp<br> BACnet BVLC | |**Beckhoff** | AMS (ADS)<br> Twincat | |**Cisco** | CAPWAP Control<br> CAPWAP Data<br> CDP<br> LWAPP | |**DNP. org** | DNP3 |
-|**Emerson** | DeltaV<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC |
+|**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC |
|**Emerson Fischer** | ROC | |**Eurocontrol** | ASTERIX | |**GE** | Bentley Nevada (System 1 / BN3500)<br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> SRTP (GE) |
Defender for IoT can detect the following protocols when identifying assets and
|**Schneider Electric / Wonderware** | Wonderware Suitelink | |**Siemens** | CAMP<br> PCS7<br> PCS7 WinCC – Historian<br> Profinet DCP<br> Profinet Realtime<br> Siemens PHD<br> Siemens S7<br> Siemens S7-Plus<br> Siemens SICAM<br> Siemens WinCC | |**Toshiba** |Toshiba Computer Link |
-|**Yokogawa** | Centum ODEQ (Centum / ProSafe DCS)<br> HIS Equalize<br> Vnet/IP |
+|**Yokogawa** | Centum ODEQ (Centum / ProSafe DCS)<br> HIS Equalize<br> FA-M3<br> Vnet/IP |
[!INCLUDE [active-monitoring-protocols](includes/active-monitoring-protocols.md)]
The Horizon ICS community shares knowledge between domain experts in critical in
To join the Horizon community, email us at: [horizon-community@microsoft.com](mailto:horizon-community@microsoft.com) + ## Next steps For more information: -- [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules)-- [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information)
+- [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor)
+- [Forward OT alert information](how-to-forward-alert-information-to-partners.md)
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
Title: Accelerate alert workflows
-description: Improve alert and incident workflows.
Previously updated : 03/10/2022
+ Title: Accelerate on-premises OT alert workflows - Microsoft Defender for IoT
+description: Learn how to improve Microsoft Defender for IoT OT alert workflows on an OT network sensor or the on-premises management console.
Last updated : 12/12/2022
-# Accelerate alert workflows
+# Accelerate on-premises OT alert workflows
-This article describes how to accelerate alert workflows using alert comments, alert groups, and custom alert rules for standard protocols and proprietary protocols in Microsoft Defender for IoT. These tools help you
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. OT alerts are triggered when OT network sensors detect changes or suspicious activity in network traffic that needs your attention.
-- Analyze and manage the large volume of alert events detected in your network.
+This article describes the following methods for reducing OT network alert fatigue in your team:
-- Pinpoint and handle specific network activity.
+- **Create alert comments** for your teams to add to individual alerts, streamlining communication and record-keeping across your alerts.
-## Accelerate incident workflows by using alert comments
+- **Create custom alert rules** to identify specific traffic in your network.
-Work with alert comments to improve communication between individuals and teams while investigating an alert event.
+- **Create alert exclusion rules** to reduce the alerts triggered by your sensors.
-Use alert comments to improve:
+## Prerequisites
-- **Workflow steps**: Provide alert mitigation steps.
+- To create alert comments or custom alert rules on an OT network sensor, you must have:
-- **Workflow follow-up**: Notify that steps were taken.
+ - An OT network sensor installed
+ - Access to the sensor as an **Admin** user.
-- **Workflow guidance**: Provide recommendations, insights, or warnings about the event.
+- To create alert exclusion rules on an on-premises management console, you must have:
-The list of available options appears in each alert, and users can select one or several messages.
+ - An on-premises management console installed
+ - Access to the on-premises management console as an **Admin** user.
-**To add alert comments:**
+For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-1. On the side menu, select **System Settings** > **Network Monitoring**> **Alert Comments**.
+## Create alert comments on an OT sensor
-3. Enter a description and select **Submit**.
+1. Sign in to your OT sensor and select **System Settings** > **Network Monitoring** > **Alert Comments**.
+1. In the **Alert comments** pane, in the **Description** field, enter the new comment, and select **Add**. The new comment appears in the **Description** list below the field.
-## Accelerate incident workflows by using alert groups
+ For example:
-Alert groups let SOC teams view and filter alerts in their SIEM solutions and then manage these alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized in a discovery group. This group includes alerts that deal with detecting new devices, new VLANs, new user accounts, new MAC addresses, and more.
+ :::image type="content" source="media/alerts/create-custom-comment.png" alt-text="Screenshot of the Alert comments pane on the OT sensor.":::
-Alert groups are applied when you create forwarding rules for the following partner solutions:
+1. Select **Submit** to add your comment to the list of available comments in each alert on your sensor.
- - Syslog servers
+Custom comments are available in each alert on your sensor for team members to add. For more information, see [Add alert comments](how-to-view-alerts.md#add-alert-comments).
- - QRadar
+## Create custom alert rules on an OT sensor
- - ArcSight
+Add custom alert rules to trigger alerts for specific activity on your network that's not covered by out-of-the-box functionality.
+For example, for an environment running MODBUS, you might add a rule to detect any written commands to a memory register on a specific IP address and Ethernet destination.
-The relevant alert group appears in partner output solutions.
+**To create a custom alert rule**:
+
+1. Sign in to your OT sensor and select **Custom alert rules** > **+ Create rule**.
+
+1. In the **Create custom alert rule** pane, define the following fields:
+
+ |Name |Description |
+ |||
+ |**Alert name** | Enter a meaningful name for the alert. |
+ |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
+ |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
+ |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
+ |**Conditions** | Define one or more conditions that must be met to trigger the alert. <br><br>- Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. The **+** sign is enabled only after selecting an **Alert protocol** value.<br>- If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format, as shown in the conversion sketch after this procedure. <br><br> You must add at least one condition to create a custom alert rule. |
+ |**Detected** | Define a date and/or time range for the traffic you want to detect. Customize the days and time range to fit with maintenance hours or set working hours. |
+ |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. <br>Have Defender for IoT create either an alert or event, with the specified severity. |
+ |**PCAP included** | If you've selected to create an event, clear the **PCAP included** option as needed. If you've selected to create an alert, the PCAP is always included, and can't be removed. |
-### Requirements
+ For example:
-The alert group will appear in supported partner solutions with the following prefixes:
+ :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
-- **cat** for QRadar, ArcSight, Syslog CEF, Syslog LEEF
+1. Select **Save** when you're done to save the rule.
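If a condition compares against a MAC or IP address, the value must be supplied in decimal form rather than dotted or colon-separated notation. The following Python sketch shows one way to perform that conversion. It assumes the sensor expects the standard unsigned-integer form of the address (32-bit for IPv4, 48-bit for MAC), so verify the converted value against your own sensor's behavior.

```python
# Convert addresses to the decimal form used in custom alert rule conditions.
# Assumption: the sensor expects the standard unsigned-integer representation.
import ipaddress

def ipv4_to_decimal(address: str) -> int:
    """Return the unsigned 32-bit integer for a dotted-decimal IPv4 address."""
    return int(ipaddress.IPv4Address(address))

def mac_to_decimal(mac: str) -> int:
    """Return the unsigned 48-bit integer for a colon- or dash-separated MAC address."""
    return int(mac.replace(":", "").replace("-", ""), 16)

print(ipv4_to_decimal("10.1.2.7"))           # 167838215
print(mac_to_decimal("00:1A:2B:3C:4D:5E"))   # 112394521950
```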
-- **Alert Group** for Syslog text messages
+### Edit a custom alert rule
-- **alert_group** for Syslog objects
+To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
-These fields should be configured in the partner solution to display the alert group name. If there's no alert associated with an alert group, the field in the partner solution will display **NA**.
+Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the OT sensor.
-### Default alert groups
+For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
-The following alert groups are automatically defined:
+### Disable, enable, or delete custom alert rules
-- Abnormal communication behavior-- Custom alerts-- Remote access-- Abnormal HTTP communication behavior-- Discovery-- Restart and stop commands-- Authentication-- Firmware change-- Scan-- Unauthorized communication behavior-- Illegal commands-- Sensor traffic-- Bandwidth anomalies-- Internet access-- Suspicion of malware-- Buffer overflow-- Operation failures-- Suspicion of malicious activity-- Command failures-- Operational issues-- Configuration changes-- Programming
+Disable custom alert rules to prevent them from running without deleting them altogether.
-Alert groups are predefined. For details about alerts associated with alert groups, and about creating custom alert groups, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).
+In the **Custom alert rules** page, select one or more rules, and then select **Disable**, **Enable**, or **Delete** in the toolbar as needed.
-## Customize alert rules
+## Create alert exclusion rules on an on-premises management console
-Add custom alert rules to pinpoint specific activity needed for your organization. The rules can refer, among others, to particular protocols, source or destination addresses, or a combination of parameters.
-For example, for an environment running MODBUS, you can define a rule to detect any written commands to a memory register on a specific IP address and ethernet destination. Another example would be setting an alert about any access to a particular IP address.
+Create alert exclusion rules to instruct your sensors to ignore specific traffic on your network that would otherwise trigger an alert.
-Specify in the custom alert rule what action Defender for IT should take when the alert is triggered. For example, the action can be allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages show that the alert was generated from a custom alert rule.
+For example, if you know that all the OT devices monitored by a specific sensor will be going through maintenance procedures for two days, define an exclusion rule that instructs Defender for IoT to suppress alerts detected by this sensor during the predefined period.
-**To create a custom alert rule**:
+**To create an alert exclusion rule**:
-1. On the sensor console, select **Custom alert rules** > **+ Create rule**.
+1. Sign in to your on-premises management console and select **Alert Exclusion** on the left-hand menu.
-1. In the **Create custom alert rule** pane that shows on the right, define the following fields:
+1. On the **Alert Exclusion** page, select the **+** button at the top-right to add a new rule.
+
+1. In the **Create Exclusion Rule** dialog, enter the following details:
|Name |Description | |||
- |**Alert name** | Enter a meaningful name for the alert. |
- |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
- |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
- |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
- |**Conditions** | Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format. <br><br> Note that the **+** sign is enabled only after selecting an **Alert protocol** from above. <br> You must add at least one condition in order to create a custom alert rule. |
- |**Detected** | Define a date and/or time range for the traffic you want to detect. You can customize the days and time range to fit with maintenance hours or set working hours. |
- |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. |
-
- For example:
-
- :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
+ |**Name** | Enter a meaningful name for your rule. The name can't contain quotes (`"`). |
+ |**By Time Period** | Select a time zone and the specific time period you want the exclusion rule to be active, and then select **ADD**. <br><br>Use this option to create separate rules for different time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone. |
+ |**By Device Address** | Select and enter the following values, and then select **ADD**: <br><br>- Select whether the designated device is a source, destination, or both a source and destination device. <br>- Select whether the address is an IP address, MAC address, or subnet <br>- Enter the value of the IP address, MAC address, or subnet. |
+ |**By Alert Title** | Select one or more alerts to add to the exclusion rule and then select **ADD**. To find alert titles, enter all or part of an alert title and select the one you want from the dropdown list. |
+ |**By Sensor Name** | Select one or more sensors to add to the exclusion rule and then select **ADD**. To find sensor names, enter all or part of the sensor name and select the one you want from the dropdown list. |
-1. Select **Save** when you're done to save the rule.
+ > [!IMPORTANT]
+ > Alert exclusion rules are `AND` based, which means that alerts are only excluded when all rule conditions are met.
+ > If a rule condition is not defined, all options are included. For example, if you don't include the name of a sensor in the rule, the rule is applied to all sensors.
-### Edit a custom alert rule
+ A summary of the rule parameters is shown at the bottom of the dialog.
-To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
+1. Review the rule summary shown at the bottom of the **Create Exclusion Rule** dialog, and then select **SAVE**.
-Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the sensor console. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
+### Create alert exclusion rules via API
-### Disable, enable, or delete custom alert rules
+Use the [Defender for IoT API](references-work-with-defender-for-iot-apis.md) to create alert exclusion rules from an external ticketing system or another system that manages network maintenance processes.
-Disable custom alert rules to prevent them from running without deleting them altogether.
+Use the [maintenanceWindow (Create alert exclusions)](api/management-alert-apis.md#maintenancewindow-create-alert-exclusions) API to define the sensors, analytics engines, start time, and end time to apply the rule. Exclusion rules created via API are shown in the on-premises management console as read-only.
-In the **Custom alert rules** page, select one or more rules, and then select **Enable**, **Disable**, or **Delete** in the toolbar as needed.
+For more information, see
+[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md).
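For illustration only, the following Python sketch shows roughly what a `maintenanceWindow` request from an external system might look like. The console address, token header, and parameter names used here (`ticketId`, `ttl`, `engines`, `sensorIds`) are assumptions for the sketch; confirm the exact endpoint schema, parameters, and authentication requirements in the API reference before relying on them.

```python
# Hypothetical sketch: create a temporary alert exclusion (maintenance window)
# through the on-premises management console API. Host, headers, and parameter
# names are assumptions - verify them against the Defender for IoT API reference.
import requests

CONSOLE_URL = "https://my-management-console"   # assumed console address
API_TOKEN = "<api-access-token>"                # token generated on the console

response = requests.post(
    f"{CONSOLE_URL}/external/v1/maintenanceWindow",
    headers={"Authorization": API_TOKEN},
    params={
        "ticketId": "change-2009",              # maintenance or change ticket ID
        "ttl": 120,                             # assumed: window duration, in minutes
        "engines": "ANOMALY,OPERATIONAL",       # assumed: engines to suppress
        "sensorIds": "1,2",                     # assumed: sensors the window applies to
    },
    verify=False,  # only if the console uses a self-signed certificate
)
response.raise_for_status()
```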
## Next steps
-For more information, see [Manage the alert event](how-to-manage-the-alert-event.md).
+> [!div class="nextstepaction"]
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+
+> [!div class="nextstepaction"]
+> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
+
+> [!div class="nextstepaction"]
+> [Forward alert information](how-to-forward-alert-information-to-partners.md)
+
+> [!div class="nextstepaction"]
+> [OT monitoring alert types and descriptions](alert-engine-messages.md)
+
+> [!div class="nextstepaction"]
+> [View and manage alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
You can access console tools from the side menu. Tools help you:
| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. | | Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). | | Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).|
-| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|
+| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).|
### Analyze
You can access console tools from the side menu. Tools help you:
| Tools| Description | ||| | System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. |
-| Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules). |
+| Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor). |
| Users | Define users and roles with various access levels. For more information, see [Create and manage users on an OT network sensor](manage-users-sensor.md). | | Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. |
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
Validation is carried out twice:
- Defender for IoT system components, for example, a sensor and on-premises management console.
- - Defender for IoT and certain third party servers defined in Forwarding rules. For more information, see [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information).
+ - Defender for IoT and certain third party servers defined in alert forwarding rules. For more information, see [Forward OT alert information](how-to-forward-alert-information-to-partners.md).
If validation fails, communication between the relevant components is halted and a validation error is presented in the console.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Title: Forward alert information
-description: You can send alert information to partner systems by working with forwarding rules.
Previously updated : 11/09/2021
+ Title: Forward on-premises OT alert information to partners - Microsoft Defender for IoT
+description: Learn how to forward OT alert details from an OT sensor or on-premises management console to partner services.
Last updated : 12/08/2022
-# Forward alert information
+# Forward on-premises OT alert information
-You can send alert information to partners who are integrating with Microsoft Defender for IoT, to syslog servers, to email addresses, and more. Working with forwarding rules lets you quickly deliver alert information to security stakeholders.
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. OT alerts are triggered when OT network sensors detect changes or suspicious activity in network traffic that needs your attention.
-Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems.
+This article describes how to configure your OT sensor or on-premises management console to forward alerts to partner services, syslog servers, email addresses, and more. Forwarded alert information includes details like:
-Syslog and other default forwarding actions are delivered with your system. More forwarding actions might become available when you integrate with partner vendors, such as Microsoft Sentinel, ServiceNow, or Splunk.
+ :::column:::
+ - Date and time of the alert
+ - Engine that detected the event
+ - Alert title and descriptive message
+ - Alert severity
+ :::column-end:::
+ :::column:::
+ - Source and destination name and IP address
+ - Suspicious traffic detected
+ - Disconnected sensors
+ - Remote backup failures
+ :::column-end:::
+> [!NOTE]
+> Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
-Defender for IoT administrators have permission to use forwarding rules.
-## About forwarded alert information
+## Prerequisites
-Alerts provide information about an extensive range of security and operational events. For example:
+- Depending on where you want to create your forwarding alert rules, you'll need to have either an [OT network sensor or on-premises management console installed](how-to-install-software.md), with access as an **Admin** user.
-- Date and time of the alert
+ For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- Engine that detected the event
+- You'll also need to define SMTP settings on the OT sensor or on-premises management console.
-- Alert title and descriptive message
+ For more information, see [Configure SMTP settings on an OT sensor](how-to-manage-individual-sensors.md#configure-smtp-settings) and [Configure SMTP settings on an on-premises management console](how-to-manage-the-on-premises-management-console.md#mail-server-settings).
-- Alert severity
+## Create forwarding rules on an OT sensor
-- Source and destination name and IP address
+1. Sign into the OT sensor and select **Forwarding** on the left-hand menu > **+ Create new rule**.
-- Suspicious traffic detected
+1. In the **Add forwarding rule** pane, enter a meaningful rule name, and then define rule conditions and actions as follows:
-- Disconnected sensors
+ |Name |Description |
+ |||
+ |**Minimal alert level** | Select the minimum [alert severity level](alert-engine-messages.md#alert-severities) you want to forward. <br><br> For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Any protocol detected** | Toggle on to forward alerts from all protocol traffic or toggle off and select the specific protocols you want to include. |
+ |**Traffic detected by any engine** | Toggle on to forward alerts from all [analytics engines](architecture.md#defender-for-iot-analytics-engines), or toggle off and select the specific engines you want to include. |
+ |**Actions** | Select the type of server you want to forward alerts to, and then define any other required information for that server type. <br><br>To add multiple servers to the same rule, select **+ Add server** and add more details. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
-- Remote backup failures
+1. When you're done configuring the rule, select **Save**. The rule is listed on the **Forwarding** page.
-Relevant information is sent to partner systems when forwarding rules are created in the sensor console or the [on-premises management console](how-to-work-with-alerts-on-premises-management-console.md#create-forwarding-rules).
+1. Test the rule you've created:
-## About Forwarding rules and certificates
+ 1. Select the options menu (**...**) for your rule > **Send Test Message**.
+ 1. Go to the target service to verify that the information sent by the sensor was received.
-Certain Forwarding rules allow encryption and certificate validation between the sensor or on-premises management console, and the server of the integrated vendor.
+### Edit or delete forwarding rules on an OT sensor
-In these cases, the sensor or on-premises management console is the client and initiator of the session. The certificates are typically received from the server, or use asymmetric encryption where a specific certificate will be provided to set up the integration.
+To edit or delete an existing rule:
-Your Defender for IoT system was set up to either validate certificates or ignore certificate validation. See [About certificate validation](how-to-deploy-certificates.md#about-certificate-validation) for information about enabling and disabling validation.
+1. Sign into your OT sensor and select **Forwarding** on the left-hand menu.
-If validation is enabled and the certificate cannot be verified, communication between Defender for IoT and the server will be halted. The sensor will display an error message indicating the validation failure. If the validation is disabled and the certificate isn't valid, communication will still be carried out.
+1. Select the options menu (**...**) for your rule, and then do one of the following:
-The following Forwarding rules allow encryption and certificate validation:
-- Syslog CEF
-- Microsoft Sentinel
-- QRadar
+ - Select **Edit** and [update the fields as needed](#create-forwarding-rules-on-an-ot-sensor). When you're done, select **Save**.
-## Create forwarding rules
+ - Select **Delete** > **Yes** to confirm the deletion.
-**Prerequisites**:
+## Create forwarding rules on an on-premises management console
-Before you can configure a forwarding rule, you'll need to define SMTP settings on your sensor. For more information, see [Configure SMTP settings](how-to-manage-individual-sensors.md#configure-smtp-settings).
+**To create a forwarding rule on the management console**:
-**To create a new forwarding rule**:
+1. Sign in to the on-premises management console and select **Forwarding** on the left-hand menu.
-1. Sign in to the sensor.
+1. Select the **+** button at the top-right to create a new rule.
-1. Select **Forwarding** on the side menu.
+1. In the **Create Forwarding Rule** window, enter a meaningful name for the rule, and then define rule conditions and actions as follows:
-1. Select **Create new rule**.
-1. Add a rule name.
-1. Define rule conditions:
- - Select the severity level. This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
-
- - Select a protocol(s) that should be detected.
- Information will be forwarded if the traffic detected was running selected protocols.
-
- - Select which engines the rule should apply to.
- Alert information detected from selected engines will be forwarded
-
-1. Define rule actions by selecting a server.
+ |Name |Description |
+ |||
+ |**Minimal alert level** | At the top-right of the dialog, use the dropdown list to select the minimum [alert severity level](alert-engine-messages.md#alert-severities) that you want to forward. <br><br>For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Protocols** | Select **All** to forward alerts from all protocol traffic, or select **Specific** to add specific protocols only. |
+ |**Engines** | Select **All** to forward alerts triggered by all sensor analytics engines, or select **Specific** to add specific engines only. |
+ |**System Notifications** | Select the **Report System Notifications** option to notify about disconnected sensors or remote backup failures. |
+ |**Alert Notifications** | Select the **Report Alert Notifications** option to notify about an alert's date and time, title, severity, source and destination name and IP address, suspicious traffic, and the engine that detected the event. |
+ |**Actions** | Select **Add** to add an action, and then enter any parameter values needed for the selected action. Repeat as needed to add multiple actions. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
- Forwarding rule actions instruct the sensor to forward alert information to selected partner vendors or servers. You can create multiple actions for each forwarding rule.
+1. When you're done configuring the rule, select **SAVE**. The rule is listed on the **Forwarding** page.
-1. Select **Save**.
-## Forwarding rule actions
+1. Test the rule you've created:
-You can send alert information to the servers described in this section.
+ 1. On the row for your rule, select the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/run-button.png" border="false"::: **test this forwarding rule** button. A success notification is shown if the message is sent successfully.
+ 1. Go to your partner system to verify that the information sent by the sensor was received.
-### Email address action
+### Edit or delete forwarding rules on an on-premises management console
-Send mail that includes the alert information. You can enter one email address per rule.
+To edit or delete an existing rule:
-**To define email for the forwarding rule:**
+1. Sign into your on-premises management console and select **Forwarding** on the left-hand menu.
-1. Enter a single email address. If you need to add more than one email, you'll need to create another action for each email address.
+1. Find the row for your rule and then select either the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/edit-button.png" border="false"::: **Edit** or :::image type="icon" source="media/how-to-forward-alert-information-to-partners/delete-icon.png" border="false"::: **Delete** button.
-1. Enter the time zone for the time stamp for the alert detection at the SIEM.
+ - If you're editing the rule, [update the fields as needed](#create-forwarding-rules-on-an-on-premises-management-console) and select **SAVE**.
-1. Select **Save**.
+ - If you're deleting the rule, select **CONFIRM** to confirm the deletion.
-### Syslog server actions
+## Configure alert forwarding rule actions
+
+This section describes how to configure settings for supported forwarding rule actions.
-The following formats are supported:
+### Email address action
-- Text messages
+Configure an **Email** action to forward alert data to the configured email address.
-- CEF messages
+In the **Actions** area, enter the following details:
-- LEEF messages
+|Name |Description |
+|||
+|**Server** | Select **Email**. |
+|**Email** | Enter the email address you want to forward the alerts to. Each rule supports a single email address. |
+|**Timezone** | Select the time zone you want to use for the alert detection in the target system. |
-- Object messages
-Enter the following parameters:
+### Syslog server actions
-- Syslog host name and port.
+Configure a Syslog server action to forward alert data to the selected type of Syslog server.
-- Protocol TCP and UDP.
+In the **Actions** area, enter the following details:
-- Time zone for the time stamp for the alert detection at the SIEM.
+|Name |Description |
+|||
+| **Server** | Select one of the following types of syslog formats: <br><br>- **SYSLOG Server (CEF format)** <br>- **SYSLOG Server (LEEF format)** <br>- **SYSLOG Server (Object)** <br>- **SYSLOG Server (Text Message)** |
+| **Host** / **Port** | Enter the syslog server's host name and port. |
+|**Timezone** | Select the time zone you want to use for the alert detection in the target system. |
+| **Protocol** | Supported for text messages only. Select **TCP** or **UDP**. |
+| **Enable encryption** | Supported for CEF format only. Toggle on to configure a TLS encryption certificate file, key file, and passphrase. |
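
Before relying on a syslog rule in production, it can help to confirm that the sensor's test message actually reaches the target. The following minimal Python sketch is one way to do that for a text-message rule sent over UDP; the listening address and port are example values only (not taken from this article) and must match whatever you configure in the rule. Binding to the standard syslog port 514 usually requires elevated privileges, so the sketch uses a higher example port.

```python
# Minimal UDP listener for checking that syslog test messages arrive.
# Assumptions: the forwarding rule uses the UDP protocol, and the host and
# port below are example values - use the values configured in your rule.
import socket

HOST, PORT = "0.0.0.0", 5140  # example values

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.bind((HOST, PORT))
    print(f"Listening for syslog messages on {HOST}:{PORT} ...")
    while True:
        data, addr = sock.recvfrom(65535)
        # Print the raw message so it can be compared with the output
        # fields documented in the following sections.
        print(f"{addr[0]}: {data.decode('utf-8', errors='replace')}")
```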
-- TLS encryption certificate file and key file for CEF servers (optional).
+The following sections describe the syslog output syntax for each format.
+#### Syslog text message output fields
-| Syslog text message output fields | Description |
+| Name | Description |
|--|--|
| Priority | User.Alert |
| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. <br /> UUID (Optional): The UUID of the alert. |
-| Syslog object output | Description |
+#### Syslog object output fields
+
+| Name | Description |
|--|--|
-| Date and Time | Date and time that the syslog server machine received the information. |
+| Date and Time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
| Hostname | Sensor IP |
| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title: The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
-| Syslog CEF output format | Description |
+#### Syslog CEF output fields
+
+| Name | Description |
|--|--|
| Priority | User.Alert |
| Date and time | Date and time that the sensor sent the information |
| Hostname | Sensor hostname |
| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
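
If you post-process forwarded CEF messages outside of a SIEM, the header fields and `key=value` extensions listed above can be separated with generic CEF parsing. The sketch below is illustrative only: the sample line is constructed from the field names in this table rather than captured from a sensor, so treat its exact values and formatting as assumptions.

```python
# Illustrative CEF parsing, based on the header and extension fields listed
# in the table above. The sample message is constructed for this example
# and isn't real sensor output.
import re

def parse_cef(line: str) -> dict:
    _, _, cef = line.partition("CEF:")
    # Seven pipe-delimited header fields, then space-separated key=value pairs.
    parts = cef.split("|", 7)
    fields = dict(zip(
        ["version", "device_vendor", "device_product", "device_version",
         "signature_id", "name", "severity"],
        parts[:7],
    ))
    extension = parts[7] if len(parts) > 7 else ""
    for match in re.finditer(r"(\w+)=(.*?)(?=\s+\w+=|$)", extension):
        fields[match.group(1)] = match.group(2)
    return fields

sample = (
    "CEF:0|Microsoft Defender for IoT/CyberX|Sensor name|22.3|"
    "Microsoft Defender for IoT Alert|Alert title|8|"
    "msg=Example alert message protocol=modbus severity=Major "
    "type=Operational cat=Operational issues src_ip=10.1.1.10 dst_ip=10.1.1.20"
)
print(parse_cef(sample))
```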
-| Syslog LEEF output format | Description |
+#### Syslog LEEF output fields
+
+| Name | Description |
|--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
+| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
| Hostname | Sensor IP |
-| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine. (This depends on the time-zone configuration.) <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
--
+| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
### Webhook server action
-Send alert information to a webhook server. Working with webhook servers lets you set up integrations that subscribe to alert events with Defender for IoT. When an alert event is triggered, the management console sends an HTTP POST payload to the webhook's configured URL. Webhooks can be used to update an external SIEM system, SOAR systems, Incident management systems, etc.
-
-This action is available from the on-premises management console.
+**Supported from the on-premises management console only**
-**To define to a webhook action:**
+Configure a **Webhook** action to configure an integration that subscribes to Defender for IoT alert events. For example, send alert data to a webhook server to update an external SIEM system, SOAR system, or incident management system.
-1. Select the Webhook action.
+When you've configured alerts to be forwarded to a webhook server and an alert event is triggered, the on-premises management console sends an HTTP POST payload to the configured webhook URL.
-1. Enter the server address in the **URL** field.
+In the **Actions** area, enter the following details:
-1. In the **Key** and **Value fields**, customize the HTTP header with a key and value definition. Keys can only contain letters, numbers, dashes, and underscores. Values can only contain one leading and/or one trailing space.
-
-1. Select **Save**.
+|Name |Description |
+|||
+|**Server** | Select **Webhook**. |
+|**URL** | Enter the webhook server URL. |
+|**Key / Value** | Enter key/value pairs to customize the HTTP header as needed. Supported characters include: <br>- **Keys** can contain only letters, numbers, dashes, and underscores. <br>- **Values** can contain only one leading and/or trailing space. |
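
For initial testing, a webhook target can be as simple as a small HTTP server that logs whatever the on-premises management console posts. The sketch below uses only the Python standard library; the listening port is an example choice, and because the payload schema isn't described in this article, the body is printed as-is rather than parsed.

```python
# Minimal webhook receiver for inspecting forwarded alert payloads.
# Assumptions: the port is an example value, and the payload is logged
# verbatim because its exact schema isn't documented here.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Show any custom Key/Value headers configured in the rule,
        # followed by the raw POST body.
        print(dict(self.headers))
        print(body.decode("utf-8", errors="replace"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertWebhookHandler).serve_forever()
```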
### Webhook extended
-Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert, and adds the following information to the report:
+**Supported from the on-premises management console only**
+
+Configure a **Webhook extended** action to send the following extra data to your webhook server:
- sensorID
- sensorName
Webhook extended can be used to send extra data to the endpoint. The extended fe
- handled
- additionalInformation
-**To define a webhook extended action**:
-
-1. Add the endpoint data URL in the URL field.
-
-1. (Optional) Customize the HTTP header with a key and value definition. Add extra headers by selecting the :::image type="icon" source="media/how-to-forward-alert-information-to-partners/add-header.png" border="false"::: button.
-
-1. Select **Save**.
-
-Once the Webhook Extended forwarding rule has been configured, you can test the alert from the Forwarding screen on the management console.
+In the **Actions** area, enter the following details:
-**To test the Webhook Extended forwarding rule**:
+|Name |Description |
+|||
+|**Server** | Select **Webhook extended**. |
+|**URL** | Enter the endpoint data URL. |
+|**Key / Value** | Enter key/value pairs to customize the HTTP header as needed. Supported characters include: <br>- **Keys** can contain only letters, numbers, dashes, and underscores. <br>- **Values** can contain only one leading and/or trailing space. |
-1. In the management console, select **Forwarding** from the left-hand pane.
-
-1. Select the **run** button to test your alert.
+### NetWitness action
- :::image type="content" source="media/how-to-forward-alert-information-to-partners/run-button.png" alt-text="Select the run button to test your forwarding rule.":::
+Configure a **NetWitness** action to send alert information to a NetWitness server.
-You'll know the forwarding rule is working if you see the Success notification.
+In the **Actions** area, enter the following details:
+|Name |Description |
+|||
+|**Server** | Select **NetWitness**. |
+|**Hostname / Port** | Enter the NetWitness server's hostname and port. |
+|**Time zone** | Enter the time zone you want to use in the time stamp for the alert detection at the SIEM. |
-### NetWitness action
+### Other partner server integrations
-Send alert information to a NetWitness server.
+You may be integrating Defender for IoT with a partner service to send alert or device inventory information to another security or device management system, or to communicate with partner-side firewalls.
-**To define NetWitness forwarding parameters:**
+[Partner integrations](integrate-overview.md) can help to bridge previously siloed security solutions, enhance device visibility, and accelerate system-wide response to more rapidly mitigate risks.
-1. Enter NetWitness **Hostname** and **Port** information.
+In such cases, use the **Actions** area to enter credentials and other information required to communicate with integrated partner services.
-1. Enter the time zone for the time stamp for the alert detection at the SIEM.
+For more information, see:
-1. Select **Save**.
+- [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md)
+- [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md)
+- [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md)
+- [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md)
+- [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md)
+- [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md)
+- [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md)
-### Integrated vendor actions
+## Configure alert groups in partner services
-You might have integrated your system with a security, device management, or other industry vendor. These integrations let you:
+When you configure forwarding rules to send alert data to Syslog servers, QRadar, and ArcSight, *alert groups* are automatically applied and are available in those partner servers.
- - Send alert information.
+*Alert groups* help SOC teams using those partner solutions to manage alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized into a *discovery* group, and will include any alerts about new devices, VLANs, user accounts, MAC addresses, and more.
- - Send device inventory information.
+Alert groups appear in partner services with the following prefixes:
- - Communicate with vendor-side firewalls.
+|Prefix |Partner service |
+|||
+|`cat` | [QRadar](tutorial-qradar.md), [ArcSight](integrations/arcsight.md), [Syslog CEF](#syslog-cef-output-fields), [Syslog LEEF](#syslog-leef-output-fields) |
+|`Alert Group` | [Syslog text messages](#syslog-text-message-output-fields) |
+|`alert_group` | [Syslog objects](#syslog-object-output-fields) |
-Integrations help bridge previously siloed security solutions, enhance device visibility, and accelerate system-wide response to more rapidly mitigate risks.
+To use alert groups in your integration, make sure to configure your partner services to display the alert group name.
-Use the actions section to enter the credentials and other information required to communicate with integrated vendors.
+By default, alerts are grouped as follows:
-For details about setting up forwarding rules for the integrations, refer to the relevant partner integration articles.
+ :::column:::
+ - Abnormal communication behavior
+ - Custom alerts
+ - Remote access
+ - Abnormal HTTP communication behavior
+ - Discovery
+ - Restart and stop commands
+ - Authentication
+ - Firmware change
+ - Scan
+ - Unauthorized communication behavior
+ - Illegal commands
+ :::column-end:::
+ :::column:::
+ - Sensor traffic
+ - Bandwidth anomalies
+ - Internet access
+ - Suspicion of malware
+ - Buffer overflow
+ - Operation failures
+ - Suspicion of malicious activity
+ - Command failures
+ - Operational issues
+ - Configuration changes
+ - Programming
+ :::column-end:::
-## Test forwarding rules
+For more information and to create custom alert groups, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).
-Test the connection between the sensor and the partner server that's defined in your forwarding rules:
+## Troubleshoot forwarding rules
-1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
+If your forwarding alert rules aren't working as expected, check the following details:
-1. Select **Send Test Message**.
+- **Certificate validation**. Forwarding rules for [Syslog CEF](#syslog-server-actions), [Microsoft Sentinel](integrate-overview.md#microsoft-sentinel), and [QRadar](tutorial-qradar.md) support encryption and certificate validation.
-1. Go to your partner system to verify that the information sent by the sensor was received.
+ If your OT sensors or on-premises management console are configured to [validate certificates](how-to-deploy-certificates.md#about-certificate-validation) and the certificate can't be verified, the alerts aren't forwarded.
-## Edit and delete forwarding rules
+ In these cases, the sensor or on-premises management console is the session's client and initiator. Certificates are typically received from the server or use asymmetric encryption, where a specific certificate is provided to set up the integration.
-**To edit a forwarding rule**:
+- **Alert exclusion rules**. If you have exclusion rules configured on your on-premises management console, your sensors might be ignoring the alerts you're trying to forward. For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).
-1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
-1. Select **Edit** and update the rule.
-1. Select **Save**.
-**To remove a forwarding rule**:
+## Next steps
-1. In the Forwarding page, find the rule you need and select the three dots (...) at the end of the row.
-1. Select **Delete** and confirm.
-1. Select **Save**.
+> [!div class="nextstepaction"]
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
-## Forwarding rules and alert exclusion rules
+> [!div class="nextstepaction"]
+> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
-The administrator might have defined alert exclusion rules. These rules help administrators achieve more granular control over alert triggering by instructing the sensor to ignore alert events based on various parameters. These parameters might include device addresses, alert names, or specific sensors.
+> [!div class="nextstepaction"]
+> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
-This means that the forwarding rules you define might be ignored based on exclusion rules that your administrator has created. Exclusion rules are defined in the on-premises management console.
+> [!div class="nextstepaction"]
+> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-## Next steps
+> [!div class="nextstepaction"]
+> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-For more information, see [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md).
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
For more information, see [Device inventory column reference](#device-inventory-
As you manage your network devices, you may need to update their details. For example, you may want to modify the security value as assets change, personalize the inventory to better identify devices, or correct details for a device that was classified incorrectly.
+If you're working with a cloud-connected sensor, any edits you make in the sensor console are updated in the Azure portal.
+
**To edit device details**:

1. Select one or more devices in the grid, and then select **View full details** in the pane on the right.
You can delete a single device when they've been inactive for more than 10 min
This procedure is supported for the *cyberx* and admin users only.
-1. Select the **Last Seen** filter icon in the Inventory.
+1. Select the **Last Activity** filter icon in the Inventory.
1. Select a filter option.
1. Select **Apply**.
1. Select **Delete Inactive Devices**. In the prompt displayed, enter the reason you're deleting the devices, and then select **Delete**.
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Title: View and manage alerts in Microsoft Defender for IoT on the Azure portal
-description: View and manage alerts detected by cloud-connected network sensors in Microsoft Defender for IoT on the Azure portal.
Previously updated : 06/30/2022
+ Title: View and manage alerts on the Azure portal - Microsoft Defender for IoT
+description: Learn about viewing and managing alerts triggered by cloud-connected Microsoft Defender for IoT network sensors on the Azure portal.
Last updated : 12/12/2022

# View and manage alerts from the Azure portal
-> [!IMPORTANT]
-> The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article describes how to manage alerts generated from OT and Enterprise IoT network sensors on the Azure portal.
-
-If you're integrating with Microsoft Sentinel, alert details and entity information are also sent to Microsoft Sentinel, where you can also view them from the **Alerts** page.
-
-## About alerts
-
-Defender for IoT alerts enhance your network security and operations with real-time details about events logged, such as:
-- Deviations from authorized network activity and device configurations
-- Protocol and operational anomalies
-- Suspected malware traffic
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. This article describes how to manage Microsoft Defender for IoT alerts on the Azure portal, including alerts generated by OT and Enterprise IoT network sensors.
+- OT alerts are also available on each [OT network sensor console](how-to-view-alerts.md), or a connected [on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
-Use the **Alerts** page on the Azure portal to take any of the following actions:
+- [Integrate with Microsoft Sentinel](iot-solution.md) to view Defender for IoT alerts in Microsoft Sentinel and manage them together with security incidents.
-- **Understand when an alert was detected**.
+- If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only.
-- **Investigate the alert** by reviewing alert details, such as the traffic's source and destination, vendor, related firmware and operating system, and related MITRE ATT&CK tactics.
+ For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).
-- **Manage the alert** by taking remediation steps on the device or network process, or changing the device status or severity.
-
-- **Integrate alert details with other Microsoft services**, such as Microsoft Sentinel playbooks and workbooks. For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md).
-
-The alerts displayed on the Azure portal are alerts that have been detected by cloud-connected, Defender for IoT sensors. For more information, see [Alert types and descriptions](alert-engine-messages.md).
-
-> [!TIP]
-> We recommend that you review alert types and messages to help you understand and plan remediation actions and playbook integrations.
-
-## View alerts
+> [!IMPORTANT]
+> The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This section describes how to view alert details in the Azure portal.
+## Prerequisites
-**To view Defender for IoT alerts on the Azure portal**:
+- **To have alerts in Defender for IoT**, you must have an [OT](onboard-sensors.md) or [Enterprise IoT sensor](eiot-sensor.md) on-boarded, and network data streaming into Defender for IoT.
-Go to **Defender for IoT** > **Alerts (Preview)**.
+- **To view alerts on the Azure portal**, you must have access as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)
-The following alert details are displayed by default in the grid:
+- **To manage alerts on the Azure portal**, you must have access as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). Alert management activities include modifying their statuses or severities, *Learning* an alert, or accessing PCAP data.
-| Column | Description
-|--|--|
-| **Severity**| A predefined alert severity assigned by the sensor. Update the sensor severity as needed. For more information, see [Manage alert status and severity(#manage-alert-status-and-severity).
-| **Name** | The alert title. |
-| **Site** | The site associated with the sensor that detected the alert, as listed on the **Sites and sensors** page. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).|
-| **Engine** | The sensor engine that detected the Operational Technology (OT) traffic. For more information, see [Detection engines](how-to-control-what-traffic-is-monitored.md#detection-engines). For device builders, the term *micro-agent* is displayed instead. |
-| **Last detection** | The last time the alert was detected. <br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered.|
-| **Status** | The alert status: *New*, *Active*, *Closed* |
-| **Source device** | The IP address, MAC, or device name. |
-| **Tactics** | The MITRE ATT&CK stage. |
+For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
-### View more alert details
+## View alerts on the Azure portal
-1. Select **Edit columns** from the Alerts page.
-1. In the Edit Columns dialog box, select **Add Column** and choose an item to add. The following items are available:
+1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid:
| Column | Description |
|--|--|
- | **Source device address** |The IP address of the source device. |
- | **Destination device address** | The IP address of the destination device. |
- | **Destination device** | The IP address, MAC, or destination device name.|
- | **First detection** | Defines the first time the alert was detected in the network. |
- | **ID** |The unique alert ID.|
- | **Last activity** | Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication |
- | **Protocol** | The protocol detected in the network traffic for this alert.|
- | **Sensor** | The sensor that detected the alert.|
- | **Zone** | The zone assigned to the sensor that detected the alert.|
- | **Category**| The category associated with the alert, such as *operational issues*,*custom alerts*, or *illegal commands*. |
- | **Type**| The internal name of the alert. |
+ | **Severity**| A predefined alert severity assigned by the sensor that you can [modify as needed](#manage-alert-severity-and-status). |
+ | **Name** | The alert title. |
+ | **Site** | The site associated with the sensor that detected the alert, as listed on the [Sites and sensors](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal) page.|
+ | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. <br><br>**Note**: A value of **Micro-agent** indicates that the event was triggered by the Defender for IoT [Device Builder](/azure/defender-for-iot/device-builders/) platform. |
+ | **Last detection** | The last time the alert was detected. <br><br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered.|
+ | **Status** | The alert status: *New*, *Active*, *Closed* <br><br>For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).|
+ | **Source device** |The IP address, MAC address, or the name of the device where the traffic that triggered the alert originated. |
+ | **Tactics** | The [MITRE ATT&CK stage](https://attack.mitre.org/tactics/ics/). |
+
+ 1. To view more details, select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: **Edit columns** button.
+
+ In the **Edit columns** pane on the right, select **Add Column** and any of the following extra columns:
+
+ | Column | Description
+ |--|--|
+ | **Source device address** |The IP address of the source device. |
+ | **Destination device address** | The IP address of the destination device. |
+ | **Destination device** | The destination IP or MAC address, or the destination device name.|
+ | **First detection** | The first time the alert was detected in the network. |
+ | **ID** |The unique alert ID.|
+ | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication |
+ | **Protocol** | The protocol detected in the network traffic for the alert.|
+ | **Sensor** | The sensor that detected the alert.|
+ | **Zone** | The zone assigned to the sensor that detected the alert.|
+ | **Category**| The [category](alert-engine-messages.md#supported-alert-categories) associated with the alert, such as *operational issues*, *custom alerts*, or *illegal commands*. |
+ | **Type**| The internal name of the alert. |
### Filter alerts displayed
-Use the **Search** box, **Time range**, and **Add filter** options to filter the alerts displayed by specific parameters or help locate a specific alert.
+Use the **Search** box, **Time range**, and **Add filter** options to filter the alerts displayed by specific parameters or to help locate a specific alert.
For example, filter alerts by **Category**:

### Group alerts displayed
-Use the **Group by** menu at the top right to collapse the grid into subsections according to specific parameters.
+Use the **Group by** menu at the top-right to collapse the grid into subsections according to specific parameters.
For example, while the total number of alerts appears above the grid, you may want more specific information about alert count breakdown, such as the number of alerts with a specific severity, protocol, or site.
-Supported grouping options include *Severity*, *Name*, *Site*, and *Engine*.
+Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and *Site*.
-## View alert details
+## View details and remediate a specific alert
-Select an alert in the grid to display more details in the pane on the right, including the alert description, traffic source and destination, and more.
+1. On the **Alerts** page, select an alert in the grid to display more details in the pane on the right. The alert details pane includes the alert description, traffic source and destination, and more.
+ Select **View full details** to drill down further. For example:
-Select **View full details** to learn more, or **Take action** to jump directly to the suggested remediation steps.
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-detected.png" alt-text="Screenshot of an alert selected from Alerts page in the Azure portal." lightbox="media/how-to-view-manage-cloud-alerts/alert-detected.png":::
+1. The alert details page provides more details about the alert, and a set of remediation steps on the **Take action** tab. For example:
-## Remediate alerts
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-full-details.png" alt-text="Screenshot of the alert details page on the Azure portal.":::
-On each alert details page, the **Take Action** tab lists recommended remediation steps for the alert, designed specifically to help SOC teams understand OT issues and resolutions.
+## Manage alert severity and status
+We recommend that you update alert severity as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded.
-## Manage alert status and severity
+You can update both severity and status for a single alert or for a selection of alerts in bulk.
-**Prerequisite**: Subscription access as a **Security admin**, **Contributor**, or **Owner** user
+*Learn* an alert to indicate to Defender for IoT that the detected network traffic is authorized. Learned alerts aren't triggered again the next time the same traffic is detected on your network. Learning is supported only for selected alerts, and *unlearning* is supported only from the OT network sensor.
-You can update alert status or severity for a single alert or for a group of alerts.
-
-*Learn* an alert to indicate to Defender for IoT that the detected network traffic is authorized. Learned alerts won't be triggered again the next time the same traffic is detected on your network. For more information, see [Learn and unlearn alert traffic](how-to-manage-the-alert-event.md#learn-and-unlearn-alert-traffic).
+For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).
- **To manage a single alert**:
- 1. Select an alert in the grid.
+ 1. In Defender for IoT in the Azure portal, select the **Alerts** page on the left, and then select an alert in the grid.
1. Either on the details pane on the right, or in an alert details page itself, select the new status and/or severity.

- **To manage multiple alerts in bulk**:
- 1. Select the alerts in the grid that you want to modify.
+ 1. In Defender for IoT in the Azure portal, select the **Alerts** page on the left, and then select the alerts in the grid that you want to modify.
1. Use the :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/status-icon.png" border="false"::: **Change status** and/or :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/severity-icon.png" border="false"::: **Change severity** options in the toolbar to update the status and/or the severity for all the selected alerts.
-
-- **To learn one or more alerts**, do one of the following:
-
- - Select one or more alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar.
- - On an alert details page, in the **Take Action** tab, select **Learn**.
+- **To learn one or more alerts**:
+ In Defender for IoT in the Azure portal, select the **Alerts** page on the left, and then do one of the following:
- - Select one or more alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar.
- - On an alert details page, in the **Take Action** tab, select **Learn**.
+ - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar.
+ - On an alert details page for a learnable alert, in the **Take Action** tab, select **Learn**.
-### Managing alerts in a hybrid deployment
+## Access alert PCAP data
-Users working in hybrid deployments may be managing alerts in Defender for IoT on the Azure portal, the sensor, and an on-premises management console.
+You might want to access raw traffic files, also known as *packet capture files* or *PCAP* files as part of your investigation. If you're a SOC or OT security engineer, access PCAP files directly from the Azure portal to help you investigate faster.
-Alert management across all interfaces functions as follows:
+To access raw traffic files for your alert, select **Download PCAP** in the top-left corner of your alert details page.
-- **Alert statuses are fully synchronized** between the Azure portal and the sensor. This means that when you set an alert status to **Closed** on either the Azure portal or the sensor, the alert status is updated in the other location as well.
+For example:
- Setting an alert status to **Closed** or **Muted** on a sensor updates the alert status to **Closed** on the Azure portal. Alert statuses are also synchronized between the sensor and the on-premises management console to keep all management sources updated with the correct alert statuses.
- [Learning](#manage-alert-status-and-severity) an alert in Azure also updates the alert in the sensor console.
+The portal requests the file from the sensor that detected the alert and downloads it to your Azure storage.
-- **Alert Exclusion rules**: If you're working with an on-premises management console, you may have defined alert *Exclusion rules* to determine the rules detected by relevant sensors.
+Downloading the PCAP file can take several minutes, depending on the quality of your sensor connectivity.
- Alerts excluded because they meet criteria for a specific exclusion rule are not displayed on the sensor, or in the Azure portal. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
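
Once the PCAP file is downloaded, you can open it in any packet analysis tool. As one example, the short Python sketch below uses the third-party scapy package to print a quick summary of the capture; the file name is a placeholder for wherever you saved the download, not a name produced by the portal.

```python
# Quick summary of a downloaded PCAP file during alert triage.
# Requires the third-party scapy package (pip install scapy).
# The file name below is a placeholder, not a name produced by the portal.
from scapy.all import rdpcap

packets = rdpcap("alert-traffic.pcap")
print(f"{len(packets)} packets in capture")
for packet in packets[:10]:
    # summary() prints a one-line description of each packet.
    print(packet.summary())
```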
+## Export alerts to a CSV file
-## Access alert PCAP data (Public preview)
+You may want to export a selection of alerts to a CSV file for offline sharing and reporting.
-**Prerequisite**: Subscription access as a **Security admin**, **Contributor**, or **Owner** user
+1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left.
-To access raw traffic files for your alert, known as packet capture files or PCAP files, select **Download PCAP** in the top-left corner of your alert details page.
+1. Use the search box and filter options to show only the alerts you want to export.
-For example:
+1. In the toolbar above the grid, select **Export** > **Confirm**.
+The file is generated, and you're prompted to save it locally.
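
If you want a quick offline breakdown of the exported data, a short script can tally the alerts. The following sketch assumes the export includes a column named **Severity**, mirroring the grid column described earlier, and uses a placeholder file name; check the header row of your own export, because the exact CSV column names are an assumption here.

```python
# Count exported alerts per severity from the downloaded CSV file.
# Assumptions: "alerts-export.csv" is a placeholder file name, and the
# "Severity" column name mirrors the grid column described earlier.
import csv
from collections import Counter

with open("alerts-export.csv", newline="", encoding="utf-8-sig") as f:
    rows = list(csv.DictReader(f))

print(f"{len(rows)} alerts exported")
for severity, count in Counter(r.get("Severity", "Unknown") for r in rows).most_common():
    print(f"{severity}: {count}")
```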
-The portal requests the file from the sensor that detected the alert and downloads it to your Azure storage.
-Downloading the PCAP file can take several minutes, depending on the quality of your sensor connectivity.
+## Next steps
-> [!TIP]
-> Accessing PCAP files directly from the Azure portal supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately. For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md).
->
+> [!div class="nextstepaction"]
+> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-## Next steps
+> [!div class="nextstepaction"]
+> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-For more information, see [Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md#gain-insight-into-global-regional-and-local-threats).
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
The device details page displays comprehensive device information, including the
| **Attributes** | Displays full device details such as class, data source, firmware details, activity, type, protocols, Purdue level, sensor, site, zone, and more. |
| **Backplane** | Displays the backplane hardware configuration, including slot and rack information. Select a slot in the backplane view to see the details of the underlying devices. The backplane tab is usually visible for Purdue level 1 devices that have slots in use, such as PLC, RTU, and DCS devices. |
|**Vulnerabilities** | Displays current vulnerabilities specific to the device. Vulnerability data is based on the repository of standards based vulnerability data documented at the US government National Vulnerability Database (NVD). Select the CVE name to see the CVE details and description. You can also view vulnerability data across your network with the [Defender for IoT Vulnerability workbook](workbooks.md#view-workbooks). |
-|**Alerts** | Displays current open alerts related to the device. Select any alert to view more details, and then select **View full details** to open the alert page to view the full alert information and take action. For more information on the alerts page, see [View alert details](how-to-manage-cloud-alerts.md#view-alert-details). |
+|**Alerts** | Displays current open alerts related to the device. Select any alert to view more details, and then select **View full details** to open the alert page to view the full alert information and take action. For more information on the alerts page, see [View alerts on the Azure portal](how-to-manage-cloud-alerts.md#view-alerts-on-the-azure-portal). |
|**Recommendations** | Displays current recommendations for the device, such as Review PLC operating mode and Review unauthorized devices. For more information on recommendations, see [Enhance security posture with security recommendations](recommendations.md). |

For example:
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
You may need to reactivate an OT sensor because you want to:
- **Associate the sensor to a new site**: Re-register the sensor with new site definitions and use the new activation file to activate.
-
-- **Change your plan commitment**: If you make changes to your plan, for example if you change your price plan from a trial to a monthly commitment, you'll need to reactivate your sensors to reflect the new changes.
+- **Change your plan commitment**: If you make changes to your plan, such as changing your price plan from a trial to a monthly commitment, you'll need to reactivate your sensors to reflect the new changes.
In such cases, do the following steps:
This procedure describes how to view sensor health data from the Azure portal. S
1. From Defender for IoT in the Azure portal, select **Sites and sensors** and then check the overall health score in the widget above the grid. For example:
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/sensor-widgets.png" alt-text="Screenshot showing the sensor health widgets." lightbox="media/how-to-manage-sensors-on-the-cloud/sensor-widgets.png":::
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/sensor-health-widgets.png" alt-text="Screenshot showing the sensor health widgets." lightbox="media/how-to-manage-sensors-on-the-cloud/sensor-health-widgets.png":::
- - **Unhealthy** indicates one of the following scenarios:
+ **Unsupported** means that the sensor has a software version installed that is no longer supported.
- - Sensor traffic to Azure isn't stable
- - Sensor fails regular sanity tests
- - No traffic detected by the sensor
- - Sensor software version is no longer supported
- - A [remote sensor upgrade from the Azure portal](update-ot-software.md#update-your-sensors) fails
+ **Unhealthy** indicates one of the following scenarios:
- For more information, see our [Sensor health message reference](sensor-health-messages.md).
+ - Sensor traffic to Azure isn't stable
+ - Sensor fails regular sanity tests
+ - No traffic detected by the sensor
+ - Sensor software version is no longer supported
+ - A [remote sensor upgrade from the Azure portal](update-ot-software.md#update-your-sensors) fails
- - **Updatable** means that the sensor has an older version, and there are software updates available to install
- - **Unsupported** means that the sensor has a software version install that is no longer supported.
+ For more information, see our [Sensor health message reference](sensor-health-messages.md).
1. To check on specific sensor issues, filter the grid by sensor health, and select one or more issues to verify. For example:
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/sensor-health-filter.png" alt-text="Screenshot of the sensor health filter." lightbox="media/how-to-manage-sensors-on-the-cloud/sensor-health-filter.png":::
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/sensor-health-filters.png" alt-text="Screenshot of the sensor health filter." lightbox="media/how-to-manage-sensors-on-the-cloud/sensor-health-filters.png":::
1. Expand the filtered sites and sensors now displayed in the grid, and use the **Sensor health** column to learn more at a high level.
defender-for-iot How To Manage The Alert Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-alert-event.md
- Title: Manage alert events from the sensor console - Microsoft Defender for IoT
-description: Manage alerts detected in your network from a Defender for IoT sensor.
Previously updated : 02/06/2022---
-# Manage alerts from the sensor console
-
-This article describes how to manage alerts from the sensor console.
-
-## About managing alerts
-
-The following options are available for managing alerts:
-
- | Action | Description |
- |--|--|
-| **Remediate** |Remediate a device or network process that caused Defender for IoT to trigger the alert. For more information, see [View remediation steps](#view-remediation-steps).|
-| **Learn** | Authorize the detected traffic. For more information, see [Learn and unlearn alert traffic](#learn-and-unlearn-alert-traffic). |
-| **Mute** | Continuously ignore activity with identical devices and comparable traffic. For more information, see [Mute and unmute alerts](#mute-and-unmute-alerts).
-| **Change status** | Change the alert status to Closed or New. For more information, see [Close the alert](#close-the-alert). |
-| **Forward to partner solutions** | Create Forwarding rules that send alert details to integrated solutions, for example to Microsoft Sentinel, Splunk or Service Now. For more information, see [Forward alert information](how-to-forward-alert-information-to-partners.md#forward-alert-information) |
-
-Alerts are managed from the Alerts page on the sensor.
-
-**To access the Alerts page:**
-
-1. Select **Alerts** from the sensor console, side pane.
-1. Review the alerts details and decide how to manage the alert.
-
- :::image type="content" source="media/how-to-manage-the-alert-event/main-alerts-screen.png" alt-text="Screenshot of the main sensor alerts screen.":::
-
-See [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor) for information on:
-- the kind of alert information available -- customizing the alert view-- how long alerts are saved-
-## View remediation steps
-
-Defender for IoT provides remediation steps you can carry out for the alert. Steps may include remediating a device or network process that caused Defender for IoT to trigger the alert.
-Remediation steps help SOC teams better understand Operational Technology (OT) issues and resolutions. Review remediation information before managing the alert event or taking action on the device or the network.
-
-**To view alert remediation steps:**
-
-1. Select an alert from the Alerts page.
-1. In the side pane, select **Take action.**
-1. Review remediation steps.
-
- :::image type="content" source="media/how-to-manage-the-alert-event/remediation-steps.png" alt-text="Screenshot of a sample set of remediation steps for alert action.":::
--
-Your administrator may have added instructions or comments to help you complete remediation or alert handling. If created, comments appear in the Alert Details section.
--
-After taking remediation steps, you may want to change the alert status to Close the alert.
-
-## Learn and unlearn alert traffic
-
-Some alerts indicate deviations of the learned network baseline. These alerts might reflect valid network changes, such as:
-- New activity was detected on existing device. For example, an authorized device attempted to access a new resource on another device.
-- Firmware version changes following standard maintenance procedures.
-- A new device is added to the network.
-- A new device performed a read/write operation on a destination controller.
-- A new device performs a read/write operation on a destination controller and should be defined as a programming device.
-- New legitimate scanning is carried out and the device should be defined as a scanning device.
-When you want to approve these changes, you can instruct Defender for IoT to *learn* the traffic.
-
-**To learn the traffic**:
-
-1. Navigate to the **Alerts** tab.
-
-1. Select an alert from the list of alerts.
-1. Select **Take action**.
-
-1. Enable the **Alert Learn** toggle.
-
- :::image type="content" source="media/how-to-manage-the-alert-event/learn-remediation.png" alt-text="Screenshot of the Learn option for Policy alerts.":::
-
-After learning the traffic, configurations, or activity are considered valid. An alert will no longer be triggered for this activity.
-
-In addition,
-- The alert status is automatically updated to Closed.
-- The learn action appears in the **Event Timeline**.
-- For this traffic, the device won't be calculated when the sensor generates Risk Assessment, Attack Vector, and other reports.
-### Unlearn alert traffic
-
-Learned traffic can be unlearned. When the sensor unlearns traffic, alerts are retriggered for this traffic combination detected.
-
-**To unlearn an alert**
-
-1. Navigate to the alert you learned.
-
-1. Disable the **Alert learn** toggle.
-
-The alert status is automatically updated to **New**.
-
-## Mute and unmute alerts
-
-Under certain circumstances, you might want to instruct your sensor to ignore a specific scenario on your network. For example:
-
- - The Anomaly engine triggers an alert on a spike in bandwidth between two devices, but the spike is valid for these devices.
-
- - The Protocol Violation engine triggers an alert on a protocol deviation detected between two devices, but the deviation is valid between the devices.
-
- - The Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure. After investigation, it's determined that the new mode is acceptable.
-
-In these situations, learning isn't available. You can mute the alert event when learning can't be carried out and you want to suppress the alert and remove the device when calculating risks and attack vectors.
--
-A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that is being muted.
-
-> [!NOTE]
-> You can't mute traffic if an internet device is defined as the source or destination.
-
-**To mute an alert:**
-
-1. Select an alert from the Alerts page and then select **Take action**.
-1. Enable the **Alert mute** toggle.
-
-**After an event is muted:**
-- The alert status will automatically be changed to **Closed.**
-- The mute action will appear in the **Event Timeline**.
-- The sensor will recalculate devices when generating Risk Assessment, Attack Vector, and other reports. For example, if you muted an alert that detected malicious traffic on a device, that device won't be calculated in the Risk Assessment report.
-## Close the alert
-
- Close an alert when you finish remediating, investigating, or otherwise handling the alert. For example:
-- **Mitigate a network configuration or device**: You receive an alert indicating that a new device was detected on the network. When investigating, you discover that the device is unauthorized. You handle the alert by disconnecting the device from the network.
-- **Update a sensor configuration**: You receive an alert indicating that a server initiated an excessive number of remote connections. This alert was triggered because the sensor anomaly thresholds were defined to trigger alerts above a certain number of sessions within one minute. You handle the alert by updating the thresholds.
-After you carry out remediation or investigation, you can close the alert.
-
-If the traffic is detected again, the alert will be retriggered.
-
-**To close a single alert:**
-
-1. Select an alert. The Alert Details section opens.
-1. Select the dropdown arrow in the Status field and select **Closed**.
-
- :::image type="content" source="media/how-to-manage-the-alert-event/close-alert.png" alt-text="Screenshot of the option to close an alert from the Alerts page.":::
-
-**To close multiple alerts:**
-
-1. Select the alerts you want to close from the Alerts page.
-1. Select **Change Status** from the action items on the top of the page.
-1. Select **Closed** and **Apply.**
-
- :::image type="content" source="media/how-to-manage-the-alert-event/multiple-close.png" alt-text="Screenshot of selecting multiple alerts to close from the Alerts page.":::
-
-Change the alert status to **New** if further investigation is required.
-
-To view closed alerts on the Alerts page, verify that the **Status** filter is defined to show **Closed** alerts.
--
-## Export alert information
-
-Export alert information to a .csv file. The following information is exported:
-- Source address
-- Destination address
-- Alert title
-- Alert severity
-- Alert message
-- Additional information
-- Acknowledged status
-- PCAP availability
-**To export:**
-
-1. Select Export to CSV on the top of the Alerts page.
--
-## Interaction with Azure Alerts page
-
-Your deployment may have been set up to work with cloud-connected sensors on the Defender for IoT portal on Azure. In cloud-connected environments, Alert detections shown on your sensors will also be seen in the Defender for IoT Alerts page, on the Azure portal.
-
-Viewing and managing alerts in the portal provides significant advantages. For example, you can:
-- Display an aggregated view of alert activity in all enterprise sensors
-- Learn about related MITRE ATT&CK techniques, tactics and stages
-- View alerts based on the sensor site
-- Integrate alert details with Microsoft Sentinel
-- Change the severity of an alert
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample alert shown in the Azure portal.":::
-
-Users working with alerts on the Defender for IoT portal on Azure should understand how alert management between the portal and the sensor operates.
--
- Parameter | Description
-|--|--|
-| **Alert Exclusion rules**| Alert *Exclusion rules* defined in the on-premises management console impact the alerts triggered by managed sensors. As a result, the alerts excluded by these rules also won't be displayed in the Alerts page on the portal. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
-| **Managing alerts on your sensor** | If you change the status of an alert, or learn or mute an alert on a sensor, the changes are not updated in the Defender for IoT Alerts page on the portal. This means that this alert will stay open on the portal. However another alert won't be triggered from the sensor for this activity.
-| **Managing alerts in the portal Alerts page** | Changing the status of an alert on the Azure portal, Alerts page or changing the alert severity on the portal, doesn't impact the alert status or severity in on-premises sensors.
-
-## Next steps
-
-For more information, see:
-- [Detection engines and alerts](concept-key-concepts.md#detection-engines-and-alerts)
-- [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor)
-- [Alert types and descriptions](alert-engine-messages.md)
-- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
If you don't see an expected alert on the on-premises **Alerts** page, do the fo
- Verify whether the alert is already listed as a reaction to a different security instance. If it has, and that alert hasn't yet been handled, a new alert isn't shown elsewhere.

-- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
+- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).
### Tweak the Quality of Service (QoS)
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
Title: View alerts details on the sensor Alerts page - Microsoft Defender for IoT
-description: View alerts detected by your Defender for IoT sensor.
Previously updated : 06/02/2022
+ Title: View and manage alerts on your OT sensor - Microsoft Defender for IoT
+description: Learn about viewing and managing alerts on an OT network sensor.
Last updated : 12/12/2022
-# View alerts on your sensor
+# View and manage alerts on your OT sensor
-Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that need your attention.
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. OT alerts are triggered when OT network sensors detect changes or suspicious activity in network traffic that needs your attention.
-This article describes how to view alerts triggered by your Microsoft Defender for IoT OT network sensors.
+This article describes how to view Defender for IoT alerts directly on an OT network sensor. You can also view OT alerts on the [Azure portal](how-to-manage-cloud-alerts.md) or an [on-premises management console](how-to-work-with-alerts-on-premises-management-console.md).
-Once an alert is selected, you can view comprehensive details about the alert activity, for example,
+For more information, see [Microsoft Defender for IoT alerts](alerts.md).
-- Detected protocols
-- Source and destination IP and MAC addresses
-- Vendor information
-- Device type information
+## Prerequisites
-You can also gain contextual information about the alert by viewing the source and destination in the Device map and viewing related events in the Event timeline.
+- **To have alerts on your OT sensor**, you must have a SPAN port configured for your sensor and Defender for IoT monitoring software installed. For more information, see [Install OT agentless monitoring software](how-to-install-software.md).
-To help you quickly pinpoint information of interest, you can view alerts:
+- **To view alerts on the OT sensor**, sign into your sensor as an *Admin*, *Security Analyst*, or *Viewer* user.
-- Based on various categories, such as alert severity, name or status
-- By using filters
-- By using free text search to find alert information of interest to you.
+- **To manage alerts on an OT sensor**, sign into your sensor as an *Admin* or *Security Analyst* user. Alert management activities include modifying their statuses or severities, *learning* or *muting* an alert, accessing PCAP data, or adding pre-defined comments to an alert.
-After you review the information in an alert, you can carry out various forensic steps to guide you in managing the alert event. For example:
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- Analyze recent device activity (data-mining report).
+## View alerts on an OT sensor
-- Analyze other events that occurred at the same time (event timeline).
+1. Sign into your OT sensor console and select the **Alerts** page on the left.
-- Analyze comprehensive event traffic (PCAP file).
+ By default, the following details are shown in the grid:
-## View alerts and alert details
+ | Name | Description |
+ |--|--|
+ | **Severity** | A predefined alert severity assigned by the sensor that you can modify as needed, including: *Critical*, *Major*, *Minor*, *Warning*. |
+ | **Name** | The alert title |
+ | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. |
+ | **Last detection** | The last time the alert was detected. <br><br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered. |
+ | **Status** | The alert status: *New*, *Active*, or *Closed*.<br><br>For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).|
+ | **Source Device** | The source device IP address, MAC, or device name. |
-This section describes how to view and filter alerts details on your sensor.
+ 1. To view more details, select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: **Edit Columns** button.
-**To view alerts in the sensor:**
+ In the **Edit Columns** pane on the right, select **Add Column** and any of the following extra columns:
-- Select **Alerts** from the side menu. The page displays the alerts detected by your sensor.
+ | Name | Description |
+ |--|--|
+ | **Destination Device** | The destination device IP address. |
+ | **First detection** | The first time the alert activity was detected. |
+ | **ID** | The alert ID. |
+ | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication |
- :::image type="content" source="media/how-to-view-alerts/view-alerts-main-page.png" alt-text="Screenshot of the sensor Alerts page." lightbox="media/how-to-view-alerts/view-alerts-main-page.png":::
+### Filter alerts displayed
-The following information is available from the Alerts page:
+Use the **Search** box, **Time range**, and **Add filter** options to filter the alerts displayed by specific parameters or help locate a specific alert.
-| Name | Description |
-|--|--|
-| **Severity** | The alert severity: Critical, Major, Minor, Warning|
-| **Name** | The alert title |
-| **Engine** | The Defender for IoT detection engine that detected the activity and triggered the alert. If the event was detected by the Device Builder platform, the value will be Micro-agent. |
-| **Last detection** | The last time the alert activity was detected. |
-| **Status** | Indicates if the alert is new or closed. |
-| **Source Device** | The source device IP address |
-| **Destination Device** | The destination device IP address |
-| **ID** | The alert ID. |
+For example:
-**To hide or display information:**
-1. Select **Edit Columns** from the Alerts page.
-1. Add and remove columns as required from the Edit columns dialog box.
+Filtering alerts by **Groups** uses any custom groups you may have created in the [Device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md) or the [Device map](how-to-work-with-the-sensor-device-map.md) pages.
-**How long are alerts saved?**
+### Group alerts displayed
-- New alerts are automatically closed if no identical traffic detected 14 days after initial detection. After 90 days of being closed, the alert is removed from the sensor console.
+Use the **Group by** menu at the top right to collapse the grid into subsections based on *Severity*, *Name*, *Engine*, or *Status*.
-- If identical traffic is detected after the initial 14 days, the 14-day count for network traffic is reset.
+For example, while the total number of alerts appears above the grid, you may want more specific information about alert count breakdown, such as the number of alerts with a specific severity or status.
- Changing the status of an alert to *Learn*, *Mute* or *Close* does not impact how long the alert is displayed in the sensor console.
+## View details and remediate a specific alert
-### Filter the view
+1. Sign into the OT sensor and select **Alerts** on the left-hand menu.
-Use filter, grouping and text search tools to view alerts of interest to you.
+1. Select an alert in the grid to display more details in the pane on the right. The alert details pane includes the alert description, traffic source and destination, and more. Select **View full details** to drill down further. For example:
-**To filter by category:**
+ :::image type="content" source="media/alerts/alerts-on-sensor.png" alt-text="Screenshot of an alert selected from the Alerts page on an OT sensor.":::
-1. Select **Add filter**.
-1. Define a filter and select **Apply**.
+1. The alert details page provides more details about the alert, and a set of remediation steps on the **Take action** tab.
- :::image type="content" source="media/how-to-view-alerts/alerts-filter.png" alt-text="Screenshot of Alert filter options.":::
+ Use the following tabs to gain more contextual insight:
-**About the Groups type**
+ - **Map View**. View the source and destination devices in a map view with other devices connected to your sensor. For example:
-The **Groups** option refers to the Device groups you created in the Device map and inventory.
+ :::image type="content" source="media/how-to-view-alerts/map-view.png" alt-text="Screenshot of the Map View tab on an alert details page.":::
+ - **Event Timeline**. View the event together with other recent activity on the related devices. Use the filter options to customize the data displayed. For example:
+ :::image type="content" source="media/alerts/event-timeline-alert-sensor.png" alt-text="Screenshot of an event timeline on an alert details page.":::
-**To view alerts based on a pre-defined category:**
+## Manage alert status and triage alerts
-1. Select **Group by** from the Alerts page and choose a category. The page displays the alerts according to the category selected.
+Make sure to update your alert status once you've taken remediation steps so that the progress is recorded. You can update status for a single alert or for a selection of alerts in bulk.
-## View alert descriptions and details
+*Learn* an alert to indicate to Defender for IoT that the detected network traffic is authorized. Learned alerts won't be triggered again the next time the same traffic is detected on your network. *Mute* an alert when learning isn't available and you want to ignore a specific scenario on your network.
-View more information about the alert, such as the alert description, details about protocols, traffic and entities associated with the alert, alert remediation steps, and more.
+For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).
-**To view details:**
+- **To manage alert status**:
-1. Select an alert.
-1. The details pane opens with the alert description, source/destination information and other details.
+ 1. Sign into your OT sensor console and select the **Alerts** page on the left.
-1. To view more details and review remediation steps, select **View full details**. The Alert Details pane provides more information about the traffic and devices. Comments may also have been added by your administrator.
+ 1. Select one or more alerts in the grid whose status you want to update.
-## Gain contextual insight
+ 1. Use the toolbar :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/status-icon.png" border="false"::: **Change Status** button or the :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/status-icon.png" border="false"::: **Status** option in the details pane on the right to update the alert status.
-Gain contextual insight about alert activity by:
+ The :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/status-icon.png" border="false"::: **Status** option is also available on the alert details page.
-- Viewing source and destination devices in map view with other connected devices. Select **Map View** to see the map.
+- **To learn one or more alerts**:
- :::image type="content" source="media/how-to-view-alerts/view-alerts-map.png" alt-text="Screenshot of a map view of the source and detected devices from an alert." lightbox="media/how-to-view-alerts/view-alerts-map.png" :::
-
-- Viewing an Event timeline with recent activity of the device. Select **Event Timeline** and use the filter options to customize the information displayed.
-
- :::image type="content" source="media/how-to-view-alerts/alert-event-timeline.png" alt-text="Screenshot of an alert timeline for the selected alert from the Alerts page." lightbox="media/how-to-view-alerts/alert-event-timeline.png" :::
+ Sign into your OT sensor console and select the **Alerts** page on the left, and then do one of the following:
-### Remediate the alert incident
+ - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar.
+ - On an alert details page, in the **Take Action** tab, select **Learn**.
-Defender for IoT provides remediation steps you can carry out for the alert. This may include remediating a device or network process that caused Defender for IoT to trigger the alert.
-Remediation steps will help SOC teams better understand OT issues and resolutions. Review this information before managing the alert event or taking action on the device or the network.
+- **To mute an alert**:
-**To view alert remediation steps:**
+ 1. Sign into your OT sensor console and select the **Alerts** page on the left.
+ 1. Locate the alert you want to mute and open its alert details page.
+ 1. On the **Take action** tab, toggle on the **Alert mute** option.
-1. Select an alert from the Alerts page.
-1. In the side pane, select **Take action.**
+- **To unlearn or unmute an alert**:
- :::image type="content" source="media/how-to-view-alerts/alert-remediation-rename.png" alt-text="Screenshot of the alert's Take action section.":::
+ 1. Sign into your OT sensor console and select the **Alerts** page on the left.
+ 1. Locate the alert you've learned or muted and open its alert details page.
+ 1. On the **Take action** tab, toggle off the **Alert learn** or **Alert mute** option.
-Your administrator may have added guidance to help you complete the remediation or alert handling. If created comments will appear in the Alert Details section.
+ After you unlearn or unmute an alert, alerts are triggered again whenever the sensor detects the same traffic combination.
-After taking remediation steps, you may want to change the alert status to close the alert.
+## Access alert PCAP data
+You might want to access raw traffic files, also known as *packet capture* or *PCAP* files, as part of your investigation.
-## Create alert reports
+To access raw traffic files for your alert, select **Download Filtered Pcap** from the top-left corner of your alert details page:
-You can generate the following alert reports:
+For example:
-- Export information on one, all or selected alerts to a CSV file
-- Export PDF reports
-**To export to CSV file:**
+The PCAP file is downloaded and your browser prompts you to open or save it locally.
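If you want to inspect the downloaded capture offline, any standard PCAP tool works. As one illustration, here's a minimal Python sketch that uses the Scapy library to print a per-packet summary; the file name `alert-filtered.pcap` is a placeholder for whatever name your browser saves the download under.

```python
# Minimal sketch: summarize a downloaded alert PCAP offline with Scapy (pip install scapy).
# "alert-filtered.pcap" is a placeholder file name, not a name Defender for IoT guarantees.
from scapy.all import rdpcap

packets = rdpcap("alert-filtered.pcap")
print(f"{len(packets)} packets in the capture")

for packet in packets:
    # Print a one-line protocol and address summary for each packet.
    print(packet.summary())
```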
-1. Select one or several alerts from the Alerts page. To create a csv file for all alert to a csv, don't select anything.
-1. Select **Export to CSV**.
+### Export alerts to CSV or PDF
-**To export a PDF:**
+You may want to export a selection of alerts to a CSV or PDF file for offline sharing and reporting.
-1. Select one or several alerts from the Alerts page.
-1. Select **Export to PDF**.
+- Export alerts to a CSV file from the main **Alerts** page. Export alerts one at a time or in bulk.
+- Export alerts to a PDF file one at a time only, either from the main **Alerts** page or an alert details page.
-### Download PCAP files
+**To export alerts to a CSV file**:
-Download a full or filtered PCAP file for a specific alert directly from the sensor. PCAP files provide more detailed information about the network traffic that occurred at the time of the alert event.
+1. Sign into your OT sensor console and select the **Alerts** page on the left.
-**To download a PCAP file:**
+1. Use the search box and filter options to show only the alerts you want to export.
-1. Select an alert
-1. Select **View full details**.
-1. Select **Download Full PCAP** or **Download Filtered PCAP**.
+1. In the toolbar above the grid, select **Export to CSV**.
+The file is generated, and you're prompted to open or save it locally.
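Once you have the exported file, you can triage it offline with any spreadsheet or scripting tool. The sketch below assumes a column named `Alert severity`, based on the fields listed for alert exports elsewhere in this article set; check the header row of your own file, since the exact column names may differ.

```python
# Minimal sketch: count exported alerts by severity.
# The column name "Alert severity" and the file name "alerts-export.csv" are assumptions;
# adjust them to match the header row of the CSV you actually exported.
import csv
from collections import Counter

severity_counts = Counter()

with open("alerts-export.csv", newline="", encoding="utf-8") as exported:
    for row in csv.DictReader(exported):
        severity_counts[row.get("Alert severity", "Unknown")] += 1

for severity, count in severity_counts.most_common():
    print(f"{severity}: {count}")
```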
-## View alerts in the Defender for IoT portal
+**To export an alert to a PDF file**:
-If your deployment was set up to work with cloud-connected sensors, Alert detections shown on your sensors will also be seen in the Defender for IoT Alerts page, on the Azure portal.
+Sign into your OT sensor console and select the **Alerts** page on the left, and then do one of the following:
-Viewing alerts in the portal provides significant advantages. For example, it lets you:
+- On the **Alerts** page, select an alert and then select **Export to PDF** from the toolbar above the grid.
+- On an alert details page, select **Export to PDF**.
-- Display an aggregated view of alert activity in all enterprise sensors
-- Understand related MITRE ATT&CK techniques, tactics and stages
-- View alerts based on the site
-- Change the severity of an alert
+The file is generated, and you're prompted to save it locally.
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample alert shown in the Azure portal.":::
+## Add alert comments
-### Manage alert events
+Alert comments help you accelerate investigation and remediation by making it easier for team members to communicate and to record data along the way.
-You can manage an alert incident by:
+If your admin has [created custom comments](how-to-accelerate-alert-incident-response.md#create-alert-comments-on-an-ot-sensor) for your team to add to alerts, add them from the **Comments** section on an alert details page.
-- Changing the status of an alert.
+1. Sign into your OT sensor console and select the **Alerts** page on the left.
-- Instructing sensors to learn, close, or mute activity detected.
+1. Locate the alert where you want to add a comment and open the alert details page.
-- Create alert groups for display at SOC solutions.
+1. From the **Choose comment** list, select the comment you want to add, and then select **Add**. For example:
+
+ :::image type="content" source="media/alerts/add-comment-sensor.png" alt-text="Screenshot of the Comments section on an alert details page on the sensor.":::
+
+For more information, see [Accelerating OT alert workflows](alerts.md#accelerating-ot-alert-workflows).
-- Forward alerts to partner vendors: SIEM systems, MSSP systems, and more.

## Next steps
-For more information, see:
+> [!div class="nextstepaction"]
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+
+> [!div class="nextstepaction"]
> [View and manage alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+
+> [!div class="nextstepaction"]
+> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
+
+> [!div class="nextstepaction"]
+> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-- [Manage the alert event](how-to-manage-the-alert-event.md)
+> [!div class="nextstepaction"]
+> [OT monitoring alert types and descriptions](alert-engine-messages.md)
-- [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Title: Work with alerts on the on-premises management console
-description: Use the on-premises management console to get an enterprise view of recent threats in your network and better understand how sensor users are handling them.
Previously updated : 11/09/2021
+ Title: View and manage OT alerts on the on-premises management console - Microsoft Defender for IoT
+description: Learn how to view and manage OT alerts collected from all connected OT network sensors on a Microsoft Defender for IoT on-premises management console.
Last updated : 12/12/2022
-# Work with alerts on the on-premises management console
+# View and manage alerts on the on-premises management console
-You can do the following from the **Alerts** page in the management console:
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. OT alerts are triggered when OT network sensors detect changes or suspicious activity in network traffic that needs your attention.
-- Work with alert filters
+This article describes how to view Defender for IoT alerts on an on-premises management console, which aggregates alerts from all connected OT sensors. You can also view OT alerts on the [Azure portal](how-to-manage-cloud-alerts.md) or an [OT network sensor](how-to-view-alerts.md).
-- Work with alert counters
+## Prerequisites
-- View alert information
+- **To have alerts on the on-premises management console**, you must have an OT network sensor with alerts connected to your on-premises management console. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md) and [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).
-- Manage alert events
+- **To view alerts on the on-premises management console**, sign in as an *Admin*, *Security Analyst*, or *Viewer* user.
-- Create alert exclusion rules
+- **To manage alerts on the on-premises management console**, sign in as an *Admin* or *Security Analyst* user. Management activities include acknowledging or muting an alert, depending on the alert type.
-- Trigger alert exclusion rules from external systems
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- Accelerate incident workflow with alert groups
+## View alerts on the on-premises management console
-## Interaction with Cloud Alerts page
+1. Sign into the on-premises management console and select **Alerts** on the left-hand menu.
-If your deployment was set up to work with cloud-connected sensors, Alert detections shown on all enterprise sensors will also be seen in the Defender for IoT Alerts page, on the Azure portal.
+ Alerts are listed in a simple two-column table that shows the sensor that triggered the alert and the alert details.
+1. Select an alert row to expand its full details. For example:
-Viewing and managing alerts in the portal provides significant advantages. For example, it lets you:
+ :::image type="content" source="media/alerts/alerts-cm-expand.png" alt-text="Screenshot of the Alerts page on the on-premises management console with one alert expanded for details.":::
-- Display an aggregated view of alert activity in all enterprise sensors.
-- Learn about related MITRE ATT&CK techniques, tactics and stages
-- View alerts based on the sensor site
-- Integrate alerts details with Microsoft Sentinel
-- Change the severity of an alert
+1. In an expanded alert row, do any of the following to view more context about the alert:
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample of alert as shown in the Azure portal.":::
+ - Select **OPEN SENSOR** to open the sensor that generated the alert and continue your investigation. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).
-## View alerts in the on-premises management console
-
-The on-premises management console aggregates alerts from all connected sensors. This provides an enterprise view of recent threats in your network and helps you better understand how sensor users are handling them.
--
-### Work with alert filters
-
-The **Alerts** window displays the alerts generated by sensors connected to your on-premises management console. You can view all the alerts for connected sensors or present the alerts sent from a specific:
-- Site
-- Zone
-- Device
-- Sensor
-Select **Clear Filters** to view all alerts.
--
-### Work with alert counters
-
-Alert counters provide a breakdown of alerts by severity and the acknowledged state.
--
-The following severity levels appear in the alert counter:
-- **Critical**: Indicates a malicious attack that should be handled immediately.
-- **Major**: Indicates a security threat that's important to address.
-- **Minor**: Indicates some deviation from the baseline behavior that might contain a security threat.
-- **Warning**: Indicates some deviation from the baseline behavior with no security threats.
-Severity levels are predefined.
-
-You can adjust the counter to provide numbers based on acknowledged and unacknowledged alerts. Unacknowledged alerts were triggered at Defender for IoT sensors but haven't yet been reviewed by operators at the sensor.
-
-When the **Show Acknowledged Alerts** option is selected, all the acknowledged alerts appear in the **Alerts** window.
--
-### View alert information
-
-The alert presents the following information:
-- A summary of the alert event.
-- Alert severity.
-- A link to the alert in the sensor that detected it.
-- An alert UUID. The UUID consists of the alert ID that's associated with the alert event detected on the sensor, separated by a hyphen, and followed by a unique system ID number.
-**On-premises management console Alert UUID**
--
-**Sensor alert ID**
--
-Working with UUIDs ensures that each alert displayed in the on-premises management console is searchable and identifiable by a unique number. This is required because alerts generated from multiple sensors might produce the same alert ID.
+ - Select **SHOW DEVICES** to show the affected devices on a zone map. For more information, see [View information per zone](how-to-view-information-per-zone.md).
> [!NOTE]
-> By default, UUIDs are displayed in the following partner systems when forwarding rules are defined: ArcSight, syslog servers, QRadar, Sentinel, and NetWitness. No setup is required.
+> On the on-premises management console, *New* alerts are called *Unacknowledged*, and *Closed* alerts are called *Acknowledged*. For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).
-**To view alert information**:
+### Filter the alerts displayed
-- From the alert list, select an alert.
+At the top of the **Alerts** page, use the **Free Search**, **Sites**, **Zones**, **Devices**, and **Sensors** options to filter the alerts displayed by specific parameters, or to help locate a specific alert.
-**To view the alert in the sensor**:
+- [Acknowledged alerts](alerts.md#alert-statuses-and-triaging-options) aren't listed by default. Select **Show Acknowledged Alerts** to include them in the list.
-- Select **OPEN SENSOR** from the alert.
+- Select **Clear** to remove all filters.
-**To view the devices in a zone map**:
+## Manage alert status and triage alerts
-- To view the device map with a focus on the alerted device and all the devices connected to it, select **SHOW DEVICES**.
+Use the following options to manage alert status on your on-premises management console, depending on the alert type:
-## Manage alert events
+- **To acknowledge or unacknowledge an alert**: In an expanded alert row, select **ACKNOWLEDGE** or **UNACKNOWLEDGE** as needed.
-Several options are available for managing alert events from the on-premises management console.
+- **To mute or unmute an alert**: In an expanded alert row, hover over the top of the row and select the :::image type="icon" source="media/alerts/mute-on-prem.png" border="false"::: **Mute** button or :::image type="icon" source="media/alerts/unmute-on-prem.png" border="false"::: **Unmute** button as needed.
-- Learn or acknowledge alert events. Select **Learn & Acknowledge** to learn all alert events that can be authorized and to acknowledge all alert events that are currently not acknowledged.
+For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/learn-and-acknowledge.png" alt-text="Screenshot of the Learn & Acknowledge button.":::
+## Export alerts to a CSV file
-- Mute and unmute alert events.
+You may want to export a selection of alerts to a CSV file for offline sharing and reporting.
-To learn more about learning, acknowledging, and muting alert events, see the sensor [Manage alert events](how-to-manage-the-alert-event.md) article.
+1. Sign into your on-premises management console and select the **Alerts** page.
-## Export alert information
-
-Export alert information to a .csv file. You can export information of all alerts detected or export information based on the filtered view. The following information is exported:
-- Source Address
-- Destination Address
-- Alert title
-- Alert severity
-- Alert message
-- Additional information
-- Acknowledged status
-- PCAP availability
-**To export alerts**:
-
-1. Select **Alerts** from the side menu.
+1. Use the [search and filter](#filter-the-alerts-displayed) options to show only the alerts you want to export.
1. Select **Export**.
-1. Select **Export Extended Alerts** to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert with the unique items in each row. Using this option makes it easier to investigate exported alert events.
-
-## Create forwarding rules
-
-**To create a forwarding rule on the management console**:
-
-1. Sign in to the sensor.
-
-1. Select **Forwarding** on the side menu.
-
-1. Select the :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/plus-add-icon.png" border="false"::: icon.
-
-1. In the Create Forwarding Rule window, enter a name for the rule
-
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/management-console-create-forwarding-rule.png" alt-text="Screenshot of the Create Forwarding Rule window..":::
-
- Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems.
-
-1. Select the severity level from the drop-down menu.
-
- This is the minimum incident to forward, in terms of severity level. For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. Levels are predefined.
-
-1. Select any protocols to apply.
-
- Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
-
-1. Select which engines the rule should apply to.
-
- Select the required engines, or choose them all. Alerts from selected engines will be sent.
-
-1. Select which notifications you want to forward:
-
- - **Report system notifications:** disconnected sensors, remote backup failures.
+The CSV file is generated, and you're prompted to save it locally.
- - **Report alert notifications:** date and time of alert, alert title, alert severity, source and destination name and IP, suspicious traffic and engine that detected the event.
-1. Select **Add** to add an action to apply. Fill in any parameters needed for the selected action.
- Forwarding rule actions instruct the sensor to forward alert information to partner vendors or servers. You can create multiple actions for each forwarding rule.
-
-1. Add another action if desired.
-
-1. Select **Save**.
-
-You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules), or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules). You can also learn more about [Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
-
-## Create alert exclusion rules
-
-Instruct Defender for IoT to ignore alert triggers based on:
-- Time zones and time periods
-- Device address (IP, MAC, subnet)
-- Alert names
-- A specific sensor
-Create alert exclusion rules when you want Defender for IoT to ignore activity that will trigger an alert.
-
-For example, if you know that all the OT devices monitored by a specific sensor will be going through maintenance procedures for two days, you can define an exclusion rule that instructs Defender for IoT to suppress alerts detected by this sensor during the predefined period.
-
-### Alert exclusion logic
-
-Alert rule logic is `AND` based. This means an alert will be triggered only when all the rule conditions are met.
-
-If a rule condition is not defined, the condition will include all options. For example, if you don't include the name of a sensor in the rule, it will be applied to all sensors.
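To make the `AND` logic concrete, here's a small illustrative sketch; the rule and alert field names are hypothetical and aren't the product's data model. Every condition that is defined must match, and a condition that isn't defined matches everything.

```python
# Illustrative sketch of AND-based exclusion matching. The rule and alert field names
# are hypothetical and only demonstrate the logic described above.
def is_excluded(alert: dict, rule: dict) -> bool:
    checks = {
        "sensors": lambda r, a: a["sensor"] in r["sensors"],
        "titles": lambda r, a: a["title"] in r["titles"],
        "subnet": lambda r, a: a["source_ip"].startswith(r["subnet"]),
    }
    # Every *defined* condition must match; undefined conditions match all alerts.
    return all(match(rule, alert) for key, match in checks.items() if rule.get(key))

rule = {"sensors": ["sensor-1"], "titles": ["PLC Mode Change"]}  # no subnet: applies to all subnets
alert = {"sensor": "sensor-1", "title": "PLC Mode Change", "source_ip": "10.1.0.5"}
print(is_excluded(alert, rule))  # True: both defined conditions match
```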
--
-Rule summaries appear in the **Exclusion Rule** window.
--
-In addition to working with exclusion rules, you can suppress alerts by muting them.
-
-### Create exclusion rules
-
-**To create exclusion rules**:
-
-1. From the left pane of the on-premises management console, select **Alert Exclusion**. Define a new exclusion rule by selecting the **Add** icon :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/add-icon.png" border="false"::: in the upper-right corner of the window that opens. The **Create Exclusion Rule** dialog box opens.
-
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/create-alert-exclusion-view.png" alt-text="Screenshot of the Create Alert Exclusion pane.":::
-
-1. Enter a rule name in the **Name** field. The name can't contain quotes (`"`).
-
-1. In the **By Time Zone/Period** section, enter a time period within a specific time zone. Use this feature when an exclusion rule is created for a specific time period in one time zone, but should be implemented at the same time in other time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone.
-
-1. Select **ADD**. During the exclusion period, no alerts are created on the connected sensors.
-
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/by-the-time-period.png" alt-text="Screenshot of the By Time Period view.":::
-
-1. In the **By Device Address** section, define the:
-
- - Device IP address, MAC address, or subnet address that you want to exclude.
-
- - Traffic direction for the excluded devices, source, and destination.
-
-1. Select **ADD**.
-
-1. In the **By Alert Title** section, start typing the alert title. From the drop-down list, select the alert title or titles to be excluded.
-
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/alert-title.png" alt-text="Screenshot of the By Alert Title view.":::
-
-1. Select **ADD**.
-
-1. In the **By Sensor Name** section, start typing the sensor name. From the drop-down list, select the sensor or sensors that you want to exclude.
-
-1. Select **ADD**.
-
-1. Select **SAVE**. The new rule appears in the list of rules.
-
-You can suppress alerts by either muting them or creating alert exclusion rules. This section describes potential use cases for both features.
--- **Exclusion rule**. Write an exclusion rule when:-
- - You know ahead of time that you want to exclude the event from the database. For example, you know that the scenario detected at a certain sensor will trigger irrelevant alerts. For example, you'll be carrying out maintenance work on organizational PLCs on a specific site and want to suppress alerts related to PLCs for this site.
-
- - You want Defender for IoT to ignore events for a specific range of time (for system maintenance tasks).
-
- - You want to ignore events in a specific subnet.
-
- - You want to control alert events generated from several sensors with one rule.
-
- - You don't want to track the alert exclusion as an event in the event log.
--- **Mute**. Mute an alert when:-
- - Items that need to be muted are not planned. You don't know ahead of time which events will be irrelevant.
-
- - You want to suppress the alert from the **Alerts** window, but you still want to track it in the event log.
-
- - You want to ignore events on a specific channel.
-
-### Trigger alert exclusion rules from external systems
-
-Trigger alert exclusion rules from external systems. For example, manage exclusion rules from enterprise ticketing systems or systems that manage network maintenance processes.
+## Next steps
-Define the sensors, engines, start time, and end time to apply the rule. For more information, see [Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
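As an illustration only, an external maintenance or ticketing system could call the management console's REST API roughly as sketched below. The endpoint path and JSON field names here are placeholders, not the documented contract; take the real paths, parameters, and authentication details from the API reference linked above.

```python
# Illustrative sketch only: the endpoint path and field names below are hypothetical
# placeholders. Use the actual API contract from the Defender for IoT API reference.
import requests

CONSOLE_URL = "https://on-prem-management-console.example.com"  # placeholder host
API_TOKEN = "<access-token>"  # an API access token generated on the console

payload = {
    "sensorIds": ["sensor-1"],           # hypothetical field: sensors the rule applies to
    "engines": ["ANOMALY"],              # hypothetical field: detection engines to suppress
    "startTime": "2023-01-10T08:00:00Z",
    "endTime": "2023-01-10T10:00:00Z",
}

response = requests.post(
    f"{CONSOLE_URL}/external/v1/alert-exclusions",  # hypothetical path
    json=payload,
    headers={"Authorization": API_TOKEN},
    timeout=30,
)
response.raise_for_status()
print(response.status_code)
```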
+> [!div class="nextstepaction"]
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
-Rules that you create by using the API appear in the **Exclusion Rule** window as RO.
+> [!div class="nextstepaction"]
+> [View and manage alerts on your OT sensor](how-to-view-alerts.md)
+> [!div class="nextstepaction"]
+> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
-## Next steps
+> [!div class="nextstepaction"]
+> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-Review the [Defender for IoT Engine alerts](alert-engine-messages.md).
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
Your user role determines which tools are available in the Device Map window. Fo
The following basic search tools are available:

- Search by IP or MAC address
- Multicast or broadcast traffic
-- Last seen: Filter the devices on the map according to the time they last communicated with other devices.
+- Last activity: Filter the devices on the map according to the time they last communicated with other devices.
:::image type="icon" source="media/how-to-work-with-maps/search-bar-icon-v2.png" border="false":::
The following predefined groups are available:
| **VLAN** | Devices associated with a specific VLAN ID. | | **Cross subnet connections** | Devices that communicate from one subnet to another subnet. | | **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Screenshot of the Add Attack Vector Simulations":::|
-| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, or seven days. |
+| **Last activity** | Devices grouped by the time frame they were last active, for example: One hour, six hours, one day, or seven days. |
| **Not In Active Directory** | All non-PLC devices that aren't communicating with the Active Directory. | For information about creating custom groups, see [Define custom groups](#define-custom-groups).
You can display devices from saved filters in the Device map. For more informati
|--|--| | :::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Fit to screen. | | :::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Fits a group of selected devices to the center of the screen. |
-| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT presentation. Collapse view to enable a focused view on OT devices, and group IT devices. |
+| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT Presentation Options. Select **Disable Display IT Networks Groups** to prevent the ability to collapse IT networks in the map. This option is turned on by default. |
|:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices on the map to a new location. Use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> | | :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in or out of the map. |
This section describes device details.
|--|--| | Name | The device name. <br /> By default, the sensor discovers the device name as it's defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. | | Authorized status | Indicates if the device is authorized or not. During the Learning period, all the devices discovered in the network are identified as Authorized. When a device is discovered after the Learning period, it appears as Unauthorized by default. You can change this definition manually. For information on this status and manually authorizing and unauthorizing, see [Authorize and unauthorize devices](#authorize-and-unauthorize-devices). |
-| Last seen | The last time the device was detected. |
+| Last activity | The last time the device was detected. |
| Alert | The number of open alerts associated with the device. | | Type | The device type as detected by the sensor. | | Vendor | The device vendor. This is determined by the leading characters of the device MAC address. This field is read-only. |
defender-for-iot Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/arcsight.md
For more information, see the [ArcSight SmartConnectors Documentation](https://w
This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to ArcSight.
+Forwarding rules apply only to alerts triggered after the rule is created. Alerts that were already in the system before the rule was created aren't affected.
+ For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). 1. Sign in to your OT sensor console and select **Forwarding** on the left.
defender-for-iot Logrhythm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/logrhythm.md
Before you begin, make sure that you have the following prerequisites:
This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to LogRhythm.
+Forwarding rules apply only to alerts triggered after the rule is created. Alerts that were already in the system before the rule was created aren't affected.
+ For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). 1. Sign in to your OT sensor console and select **Forwarding** on the left.
defender-for-iot Netwitness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/netwitness.md
Before you begin, make sure that you have the following prerequisites:
This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to NetWitness.
+Forwarding rules apply only to alerts triggered after the rule is created. Alerts that were already in the system before the rule was created aren't affected.
+ For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). 1. Sign in to your OT sensor console and select **Forwarding** on the left.
defender-for-iot Service Now Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/service-now-legacy.md
Last updated 08/11/2022
# Tutorial: Integrate ServiceNow with Microsoft Defender for IoT (legacy)
-> [!Note]
+> [!NOTE]
> A new [Operational Technology Manager](https://store.servicenow.com/sn_appstore_store.do#!/store/application/31eed0f72337201039e2cb0a56bf65ef/1.1.2?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Doperational%2520technology%2520manager&sl=sh) integration is now available from the ServiceNow store. The new integration streamlines Microsoft Defender for IoT sensor appliances, OT assets, network connections, and vulnerabilities to ServiceNow's Operational Technology (OT) data model. > >Please read ServiceNow's supporting links and docs for the ServiceNow terms of service.
Last updated 08/11/2022
>Microsoft Defender for IoT's legacy integration with ServiceNow is not affected by the new integrations and Microsoft will continue supporting it. > > For more information, see the new [ServiceNow integrations](../tutorial-servicenow.md), and the ServiceNow documentation on the ServiceNow store:
+>
>- [Service Graph Connector (SGC)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229) >- [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e).
This tutorial will help you learn how to integrate, and use ServiceNow with Micr
The Defender for IoT integration with ServiceNow provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS & IoT devices.
-The ServiceNow Configuration Management Database (CMDB) is enriched, and supplemented with a rich set of device attributes that are pushed by the Defender for IoT platform. This ensures a comprehensive, and continuous visibility into the device landscape. This visibility lets you monitor, and respond from a single-pane-of-glass.
+The ServiceNow Configuration Management Database (CMDB) is enriched, and supplemented with a rich set of device attributes that are pushed by the Defender for IoT platform. This ensures a comprehensive, and continuous visibility into the device landscape. This visibility lets you monitor, and respond from a single-pane-of-glass.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Download the Defender for IoT application in ServiceNow
-> * Set up Defender for IoT to communicate with ServiceNow
-> * Create access tokens in ServiceNow
-> * Send Defender for IoT device attributes to ServiceNow
-> * Set up the integration using an HTTPS proxy
-> * View Defender for IoT detections in ServiceNow
-> * View connected devices
+>
+> - Download the Defender for IoT application in ServiceNow
+> - Set up Defender for IoT to communicate with ServiceNow
+> - Create access tokens in ServiceNow
+> - Send Defender for IoT device attributes to ServiceNow
+> - Set up the integration using an HTTPS proxy
+> - View Defender for IoT detections in ServiceNow
+> - View connected devices
## Prerequisites ### Software requirements
-Access to ServiceNow and Defender for IoT
+Access to ServiceNow and Defender for IoT
- ServiceNow Service Management version 3.0.2. - Defender for IoT patch 2.8.11.1 or above.
-> [!Note]
+> [!NOTE]
>If you're already working with a Defender for IoT and ServiceNow integration and upgrade using the on-premises management console, the previous data from Defender for IoT sensors should be cleared from ServiceNow. ### Architecture
Access to ServiceNow and Defender for IoT
## Download the Defender for IoT application in ServiceNow
-To access the Defender for IoT application within ServiceNow, you will need to download the application from the ServiceNow application store.
+To access the Defender for IoT application within ServiceNow, you will need to download the application from the ServiceNow application store.
**To access the Defender for IoT application in ServiceNow**:
To access the Defender for IoT application within ServiceNow, you will need to d
Configure Defender for IoT to push alert information to the ServiceNow tables. Defender for IoT alerts will appear in ServiceNow as security incidents. This can be done by defining a Defender for IoT forwarding rule to send alert information to ServiceNow.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To push alert information to the ServiceNow tables**: 1. Sign in to the on-premises management console.
Configure Defender for IoT to push alert information to the ServiceNow tables. D
| Client Secret | Enter the client secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. | | Report Type | **Incidents**: Forward a list of alerts that are presented in ServiceNow with an incident ID and short description of each alert.<br /><br />**Defender for IoT Application**: Forward full alert information, including the sensor details, the engine, and the source and destination addresses. The information is forwarded to the Defender for IoT application on ServiceNow. |
-1. Select **SAVE**.
+1. Select **SAVE**.
Defender for IoT alerts will now appear as incidents in ServiceNow.
Defender for IoT supports an HTTPS proxy in the ServiceNow integration by enabli
3. Select **Save and Exit**.
-4. Reset the on-premises management console using the following command:
+4. Reset the on-premises management console using the following command:
```bash sudo monit restart all
There are no resources to clean up.
## Next steps
-In this article, you learned how to get started with the ServiceNow integration. Continue on to learn about our [Cisco integration](../tutorial-forescout.md).
+In this article, you learned how to get started with the ServiceNow integration. Continue on to learn about our [Cisco integration](../tutorial-forescout.md).
defender-for-iot Manage Users On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-on-premises-management-console.md
For more information, see [Active Directory support on sensors and on-premises m
|Field |Description | |||
- |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.contoso.com`. <br><br> If you encounter an issue with the integration using the FQDN, check your DNS configuration. You can also enter the explicit IP of the LDAP server instead of the FQDN when setting up the integration. |
|**Domain Controller Port** | The port on which your LDAP is configured. |
- |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+ |**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
|**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br>When you enter a group name, make sure that you enter the group name as it's defined in your Active Directory configuration on the LDAP server. Then, make sure to use these groups when creating new sensor users from Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**.<br><br> Add groups as **Trusted endpoints** in a separate row from the other Active Directory groups. To add a trusted domain, add the domain name and the connection type of a trusted domain. You can configure trusted endpoints only for users who were defined under users.| Select **+ Add Server** to add another server and enter its values as needed, and **Save** when you're done.
For more information, see [Active Directory support on sensors and on-premises m
> - LDAP and LDAPS can't be configured for the same domain. However, you can configure each in different domains and then use them at the same time. >
+ For example:
+
+ :::image type="content" source="media/manage-users-on-premises-management-console/active-directory-config-example.png" alt-text="Screenshot of Active Directory integration configuration on the on-premises management console.":::
+ 1. Create access group rules for on-premises management console users. If you configure Active Directory groups for on-premises management console users, you must also create an access group rule for each Active Directory group. Active Directory credentials won't work for on-premises management console users without a corresponding access group rule. For more information, see [Define global access permission for on-premises users](#define-global-access-permission-for-on-premises-users). - ## Define global access permission for on-premises users Large organizations often have a complex user permissions model based on global organizational structures. To manage your on-premises Defender for IoT users, we recommend that you use a global business topology that's based on business units, regions, and sites, and then define user access permissions around those entities.
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
Your new user is added and is listed on the sensor **Users** page.
To edit a user, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: icon for the user you want to edit, and change any values as needed. To delete a user, select the **Delete** button for the user you want to delete.+ ## Integrate OT sensor users with Active Directory Configure an integration between your sensor and Active Directory to:
For more information, see [Active Directory support on sensors and on-premises m
|Name |Description | |||
- |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
+ |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.contoso.com`. <br><br> If you encounter an issue with the integration using the FQDN, check your DNS configuration. You can also enter the explicit IP of the LDAP server instead of the FQDN when setting up the integration. |
|**Domain Controller Port** | The port where your LDAP is configured. |
- |**Primary Domain** | The domain name, such as `subdomain.domain.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
+ |**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |
|**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
For more information, see [Active Directory support on sensors and on-premises m
1. When you've added all your Active Directory servers, select **Save**.
+ For example:
+
+ :::image type="content" source="media/manage-users-sensor/active-directory-integration-example.png" alt-text="Screenshot of the active directory integration configuration on the sensor.":::
## Change a sensor user's password
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
# Which appliances do I need?
-This article is designed to help you choose the right OT appliances for your sensors and on-premises management consoles. Use the tables below to understand which hardware profile best fits your organization's network monitoring needs. Performance values are upper thresholds and dependent on the analyzed traffic protocols, assuming that intermittent traffic profiles are typical of M2M systems.
+This article is designed to help you choose the right OT appliances for your sensors and on-premises management consoles. Use the tables below to understand which hardware profile best fits your organization's network monitoring needs.
-Physical or virtual appliances can be used; results depend on hardware and resources available to the monitoring sensor.
+[Physical](ot-pre-configured-appliances.md) or [virtual](ot-virtual-appliances.md) appliances can be used; results depend on hardware and resources available to the monitoring sensor.
+
+> [!NOTE]
+> The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocol distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store.
+>
+>*Performance values are presented as upper thresholds under the assumption of intermittent traffic profiles, such as those found in OT/IoT systems and machine-to-machine communication networks.*
## IT/OT mixed environments Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks:
-|Hardware profile |Max throughput |Max monitored Assets |Deployment |
+|Hardware profile |SPAN/TAP throughput |Max monitored Assets |Deployment |
|||||
-|C5600 | 3 Gbps | 12 K |Physical / Virtual |
+|C5600 | Up to 3 Gbps | 12 K |Physical / Virtual |
## Monitoring at the site level Use the following hardware profiles for enterprise monitoring at the site level, typically collecting multiple traffic feeds:
-|Hardware profile |Max throughput |Max monitored assets |Deployment |
+|Hardware profile |SPAN/TAP throughput |Max monitored assets |Deployment |
|||||
-|E1800 |1 Gbps |10K |Physical / Virtual |
-|E1000 |1 Gbps |10K |Physical / Virtual |
-|E500 |1 Gbps |10K |Physical / Virtual |
+|E1800 |Up to 1 Gbps |10K |Physical / Virtual |
+|E1000 |Up to 1 Gbps |10K |Physical / Virtual |
+|E500 |Up to 1 Gbps |10K |Physical / Virtual |
-## Production line monitoring
+## Production line monitoring (medium and small deployments)
Use the following hardware profiles for production line monitoring, typically in the production/mission-critical environments:
-|Hardware profile |Max throughput |Max monitored assets |Deployment |
+|Hardware profile |SPAN/TAP throughput |Max monitored assets |Deployment |
|||||
-|L500 | 200 Mbps | 1,000 |Physical / Virtual |
-|L100 | 60 Mbps | 800 | Physical / Virtual |
-|L60 | 10 Mbps | 100 |Physical / Virtual|
+|L500 | Up to 200 Mbps | 1,000 |Physical / Virtual |
+|L100 | Up to 60 Mbps | 800 | Physical / Virtual |
+|L60 | Up to 10 Mbps | 100 |Physical / Virtual|
## On-premises management console systems
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
Pre-configured physical appliances have been validated for Defender for IoT OT s
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D) any of the following pre-configured appliances for monitoring your OT networks:
-|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
+|Hardware profile |Appliance |SPAN/TAP throughput |Physical specifications |
|||||
-|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: Up to 3 Gbps <br>**Max devices**: 12K <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: Up to 1 Gbps<br>**Max devices**: 10K <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
> [!NOTE]
-> Bandwidth performance may vary depending on protocol distribution.
+> The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocol distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store. <br><br>
+> *Performance values are presented as upper thresholds under the assumption of intermittent traffic profiles, such as those found in OT/IoT systems and machine-to-machine communication networks.*
## Appliances for on-premises management consoles
You can purchase any of the following appliances for your OT on-premises managem
|Hardware profile |Appliance |Max sensors |Physical specifications | |||||
-|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
For information about previously supported legacy appliances, see the [appliance catalog](/azure/defender-for-iot/organizations/appliance-catalog/).
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
For more recent updates, see [What's new in Microsoft Defender for IoT?](whats-n
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## December 2021
+
+**Sensor software version**: 10.5.4
+
+- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)
+- [Apache Log4j vulnerability](#apache-log4j-vulnerability)
+- [Alerting](#alerting)
+
+### Enhanced integration with Microsoft Sentinel (Preview)
+
+The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
+
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+
+### Apache Log4j vulnerability
+
+Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
+
+### Alerting
+
+Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
+
+- Alerts for certain minor events or edge-cases are now disabled.
+- For certain scenarios, similar alerts are minimized in a single alert message.
+
+These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
+
+#### Alerts permanently disabled
+
+The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
+
+**Policy engine alerts**
+
+- RPC Procedure Invocations
+- Unauthorized HTTP Server
+- Abnormal usage of MAC Addresses
+
+#### Alerts disabled by default
+
+The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
+
+**Anomaly engine alert**
+- Abnormal Number of Parameters in HTTP Header
+- Abnormal HTTP Header Length
+- Illegal HTTP Header Content
+
+**Operational engine alerts**
+- HTTP Client Error
+- RPC Operation Failed
+
+**Policy engine alerts**
+
+Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
+
+- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic
+- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic
+- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic
+
+#### Updated alert functionality
+
+**Unauthorized Database Operation alert**
+Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
+- DDL traffic: alerting and monitoring are supported.
+- DML traffic: Monitoring is supported. Alerting isn't supported.
+
+**New Asset Detected alert**
+This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
+
+### Minimized alerting
+
+Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
+
+This new functionality is available on the following alerts:
+
+- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)
+- Malware alerts, based on activity of the source device (generated by the Malware engine)
+- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
## November 2021
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
This version includes the following new updates and fixes:
- [New naming convention for hardware profiles](ot-appliance-sizing.md) - [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md)-- [Bi-directional alert synch between sensors and the Azure portal](how-to-manage-cloud-alerts.md#managing-alerts-in-a-hybrid-deployment)
+- [Bi-directional alert synch between OT sensors and the Azure portal](alerts.md#managing-ot-alerts-in-a-hybrid-environment)
- [Sensor connections restored after certificate rotation](how-to-deploy-certificates.md) - [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview) - [Improved security for uploading protocol plugins](resources-manage-proprietary-protocols.md)
This version includes the following new updates and fixes:
- A new **Backup Activity with Antivirus Signatures** alert - Alert management changes during software updates -- [Enhancements for creating custom alerts on the sensor](how-to-accelerate-alert-incident-response.md#customize-alert-rules): Hit count data, advanced scheduling options, and more supported fields and protocols
+- [Enhancements for creating custom alerts on the sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor): Hit count data, advanced scheduling options, and more supported fields and protocols
- [Modified CLI commands](references-work-with-defender-for-iot-cli-commands.md): Including the following new commands:
This version includes the following new updates and fixes:
- [New integration APIs](api/management-integration-apis.md) - [Network traffic analysis enhancements for multiple OT and ICS protocols](concept-supported-protocols.md) - [Automatic deletion for older, archived alerts](how-to-view-alerts.md)-- [Export alert enhancements](how-to-work-with-alerts-on-premises-management-console.md#export-alert-information)
+- [Export alert enhancements](how-to-work-with-alerts-on-premises-management-console.md#export-alerts-to-a-csv-file)
### 10.5.2
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
For example, in an environment running MODBUS, you may want to generate an alert
- The alert always has a severity of *Critical*. -- The alert includes static text under the **Manage this Event** section, indicating that the alert was generated by your organization's security team.
+- The alert includes static text under the **Take action** section, indicating that the alert was generated by your organization's security team.
-For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
+For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor).
## Next steps
defender-for-iot Resources Training Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-training-sessions.md
- Title: Tech Community Ninja training
-description: Follow Defender for IoT training sessions
Previously updated : 11/09/2021---
-# View Tech Community training sessions
-
-This article provides links to Defender for IoTm training sessions.
-
-## About the training
-
-The Tech Community training program includes approximately 30 sessions divided into several modules. Sessions include videos, and/or presentations, as well as supporting information such as feature articles, blog posts, and additional resources.
-
-The modules are organized into groups, for example:
--- Overview-- Basic Features-- Deployment-- Sentinel Integration-- Advanced -
-### Access training
-
-Access the training at the following location:
-
-[Defender for IoT Training](https://go.microsoft.com/fwlink/?linkid=2167929)
-
-## Next steps
-
-[Quickstart: Get started with Defender for IoT](getting-started.md)
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
Permissions applied to each role differ between the sensor and the on-premises m
| **View the dashboard** | ✔ | ✔ |✔ | | **Control map zoom views** | - | - | ✔ | | **View alerts** | ✔ | ✔ | ✔ |
-| **Manage alerts**: acknowledge, learn, and pin |- | ✔ | ✔ |
+| **Manage alerts**: acknowledge, learn, and mute |- | ✔ | ✔ |
| **View events in a timeline** | - | ✔ | ✔ | | **Authorize devices**, known scanning devices, programming devices | - | ✔ | ✔ | | **Merge and delete devices** |- |- | ✔ |
Permissions applied to each role differ between the sensor and the on-premises m
| **Build a site** | - | - | ✔ | | **Manage a site** (add and edit zones) |- |- | ✔ | | **View and filter device inventory** | ✔ | ✔ | ✔ |
-| **View and manage alerts**: acknowledge, learn, and pin | ✔ | ✔ | ✔ |
+| **View and manage alerts**: acknowledge, learn, and mute | ✔ | ✔ | ✔ |
| **Generate reports** |- | ✔ | ✔ | | **View risk assessment reports** | - | ✔ | ✔ | | **Set alert exclusions** | - | ✔ | ✔ |
defender-for-iot Sensor Health Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/sensor-health-messages.md
For more information, see [Understand sensor health (Public preview)](how-to-man
## Critical messages - |Title |Message |Description |Recommendation | |||||
-|**Disconnected** | This sensor is not communicating with Azure | Sensor is disconnected | Try signing into the sensor to check for errors or networking failures. <br><br>We also recommend reviewing the sensor networking configuration and verifying the sensor's ability to communicate with Azure. |
-|**Sanity failed** | This sensor failed an internal consistency check | Sensor fails sanity | The sensor is in a degraded state. <br><br>Check the sensor for hardware failures and try restarting the sensor. If the issue isn't resolved, open a support ticket. |
+|**Disconnected** | This sensor isn't communicating with Azure | Sensor is disconnected | Try signing into the sensor to check for errors or networking failures. <br><br> We also recommend reviewing the sensor networking configuration and verifying the sensor's ability to communicate with Azure. |
+|**Sanity failed** | This sensor failed an internal consistency check | Sensor fails sanity | The sensor is in a degraded state. <br><br> Check the sensor for hardware failures and try restarting the sensor. If the issue isn't resolved, open a support ticket. |
|**No traffic detected** | No traffic detected on the monitored network interfaces | No traffic detected | Check that the monitoring ports are connected to SPAN/monitor ports on the adjacent switch and that traffic is active on the link. At least one link with network traffic should be connected to the monitor ports. | - ## Warning messages |Title |Message |Description |Recommendation | |||||
-|**Package upload failed** |There was an error uploading the file to the sensor |Upload error |Verify the sensor's ability to communicate with download.microsoft.com and retry. <br><br>If the problem persists, open a support ticket.|
+|**Package upload failed** |There was an error uploading the file to the sensor |Upload error |Verify the sensor's ability to communicate with download.microsoft.com and retry. <br><br> If the problem persists, open a support ticket.|
|**Sensor update failed** | There was an error installing the update.| Installation error |Open a support ticket. | | **Unstable traffic to Azure**|Sensor's connection to Azure is unstable |Unstable traffic to Azure | We recommend that you check the sensor WAN connection, the BW limit settings, and validate network equipment that might be on the route between the sensor and the cloud.| | **Outdated**|Outdated software may result in a non-optimal experience |Sensor version is outdated |Upgrade your sensor software to the latest version to use the most recently available Defender for IoT features.|
For more information, see [Understand sensor health (Public preview)](how-to-man
|Title |Message |Description |Recommendation | |||||
-|**Pending activation** |Waiting for sensor to connect for the first time |Pending activation | Upload the activation file to the sensor. If this does not resolve the problem, verify the sensor's ability to communicate with Azure.|
-|**Pending reactivation** |Waiting for reactivation with new license |Pending reactivation |Upload the new activation file to the sensor. If this does not resolve the problem, verify the sensor's ability to communicate with Azure. |
-|**Updatable** |A new version is available |Update available | Upgrade your sensor software to the latest version to use the most recently available Defender for IoT features.|
+|**Pending activation** |Waiting for sensor to connect for the first time |Pending activation | Upload the activation file to the sensor. If this doesn't resolve the problem, verify the sensor's ability to communicate with Azure.|
+|**Pending reactivation** |Waiting for reactivation with new license |Pending reactivation |Upload the new activation file to the sensor. If this doesn't resolve the problem, verify the sensor's ability to communicate with Azure. |
## Next steps
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
The integration allows for the following:
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Create a ClearPass API user > - Create a ClearPass operator profile > - Create a ClearPass OAuth API client
To enable viewing the device inventory in ClearPass, you need to set up Defender
To enable viewing the alerts discovered by Defender for IoT in Aruba, you need to set the forwarding rule. This rule defines which information about the ICS, and SCADA security threats identified by Defender for IoT security engines is sent to ClearPass.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To define a ClearPass forwarding rule on the Defender for IoT sensor**: 1. In the Defender for IoT sensor, select **Forwarding** and then select **Create new rule**.
To enable viewing the alerts discovered by Defender for IoT in Aruba, you need t
1. In the **Host** field, define the ClearPass server IP and port to send alert information. 1. Define which alert information you want to forward. - **Report illegal function codes:** Protocol violations - Illegal field value violating ICS protocol specification (potential exploit).
- - **Report unauthorized PLC programming and firmware updates:** Unauthorized PLC changes.
- - **Report unauthorized PLC stop:** PLC stop (downtime).
- - **Report malware related alerts:** Industrial malware attempts, such as TRITON, NotPetya.
+ - **Report unauthorized PLC programming and firmware updates:** Unauthorized PLC changes.
+ - **Report unauthorized PLC stop:** PLC stop (downtime).
+ - **Report malware related alerts:** Industrial malware attempts, such as TRITON, NotPetya.
- **Report unauthorized scanning:** Unauthorized scanning (potential reconnaissance) 1. Select **Save**. - ## Monitor ClearPass and Defender for IoT communication Once the sync has started, endpoint data is populated directly into the Policy Manager EndpointDb, you can view the last update time from the integration configuration screen.
Once the sync has started, endpoint data is populated directly into the Policy M
1. Select **System settings** > **Integrations** > **ClearPass**. - :::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view showing the time and date of your last sync."::: If Sync is not working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded; additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
There are no resources to clean up.
## Next steps
-In this article, you learned how to get started with the ClearPass integration. Continue on to learn about our [CyberArk integration](./tutorial-cyberark.md).
--
+In this article, you learned how to get started with the ClearPass integration. Continue on to learn about our [CyberArk integration](./tutorial-cyberark.md).
defender-for-iot Tutorial Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md
Using a Business Services view, the complexity of managing network and security
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Create an API key in Fortinet > - Set a forwarding rule to block malware-related alerts > - Block the source of suspicious alerts
When the API key is generated, save it as it will not be provided again.
The FortiGate firewall can be used to block suspicious traffic.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To set a forwarding rule to block malware-related alerts**: 1. Sign in to the Microsoft Defender for IoT Management Console.
Each Defender for IoT alert is then parsed without any other configuration on th
You can then use Defender for IoT's Forwarding Rules to send alert information to FortiSIEM.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To use Defender for IoT's Forwarding Rules to send alert information to FortiSIEM**: 1. From the sensor, or management console left pane, select **Forwarding**.
There are no resources to clean up.
## Next steps In this article, you learned how to get started with the Fortinet integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md)--
defender-for-iot Tutorial Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md
The following integration types are available:
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Configure immediate blocking by a specified Palo Alto firewall > - Create Panorama blocking policies in Defender for IoT
If you don't have an Azure subscription, create a [free account](https://azure.m
## Configure immediate blocking by a specified Palo Alto firewall
-In cases, such as malware-related alerts, you can enable automatic blocking. Defender for IoT forwarding rules is utilized to send a blocking command directly to a specific Palo Alto firewall.
+In cases, such as malware-related alerts, you can enable automatic blocking. Defender for IoT forwarding rules are utilized to send a blocking command directly to a specific Palo Alto firewall.
+
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alert's details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
The first step in creating Panorama blocking policies in Defender for IoT is to
1. In the console left pane, select **System settings** > **Network monitoring** > **DNS Reverse Lookup**. 1. Select **Add DNS server**.
-1. In the **Schedule Reverse Lookup** field define the scheduling options:
+1. In the **Schedule Reverse Lookup** field, define the scheduling options:
- By specific times: Specify when to perform the reverse lookup daily.
- - By fixed intervals (in hours): Set the frequency for performing the reverse lookup.
-1. In the **Number of Labels** field instruct Defender for IoT to automatically resolve network IP addresses to device FQDNs. <br />To configure DNS FQDN resolution, add the number of domain labels to display. Up to 30 characters are displayed from left to right.
+ - By fixed intervals (in hours): Set the frequency for performing the reverse lookup.
+1. In the **Number of Labels** field, instruct Defender for IoT to automatically resolve network IP addresses to device FQDNs. <br /> To configure DNS FQDN resolution, add the number of domain labels to display. Up to 30 characters are displayed from left to right.
1. Add the following server details:
- - **DNS Server Address**: Enter the IP address, or the FQDN of the network DNS Server.
- - **DNS Server Port**: Enter the port used to query the DNS server.
- - **Subnets**: Set the Dynamic IP address subnet range. The range that Defender for IoT reverses lookup their IP address in the DNS server to match their current FQDN name.
+ - **DNS Server Address**: Enter the IP address, or the FQDN of the network DNS Server.
+ - **DNS Server Port**: Enter the port used to query the DNS server.
+ - **Subnets**: Set the Dynamic IP address subnet range. This is the range of IP addresses for which Defender for IoT performs a reverse lookup in the DNS server to match each address to its current FQDN.
1. Select **Save**. 1. Turn on the **Enabled** toggle to activate the lookup.
The first step in creating Panorama blocking policies in Defender for IoT is to
Suspicious traffic will need to be blocked with the Palo Alto firewall. You can block suspicious traffic through the use forwarding rules in Defender for IoT.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To block suspicious traffic with the Palo Alto firewall using a Defender for IoT forwarding rule**: 1. In the left pane, select **Forwarding**.
There are no resources to clean up.
## Next step In this article, you learned how to get started with the [Palo Alto integration](./tutorial-splunk.md).-
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md
A **QID** is a QRadar event identifier. Since all Defender for IoT reports are t
Create a forwarding rule from your on-premises management console to forward alerts to QRadar.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To create a QRadar forwarding rule**: 1. Sign in to the on-premises management console and select **Forwarding** on the left.
defender-for-iot Tutorial Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-servicenow.md
Last updated 08/11/2022
# Integrate ServiceNow with Microsoft Defender for IoT
-The Defender for IoT integration with ServiceNow provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS & IoT devices.
+The Defender for IoT integration with ServiceNow provides an extra level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS & IoT devices.
-A new [Operational Technology Manager](https://store.servicenow.com/sn_appstore_store.do#!/store/application/31eed0f72337201039e2cb0a56bf65ef/1.1.2?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Doperational%2520technology%2520manager&sl=sh) integration is now available from the ServiceNow store. The new integration streamlines Microsoft Defender for IoT sensor appliances, OT assets, network connections, and vulnerabilities to ServiceNow's Operational Technology (OT) data model.
+The [Operational Technology Manager](https://store.servicenow.com/sn_appstore_store.do#!/store/application/31eed0f72337201039e2cb0a56bf65ef/1.1.2?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Doperational%2520technology%2520manager&sl=sh) integration is available from the ServiceNow store, which streamlines Microsoft Defender for IoT sensor appliances, OT assets, network connections, and vulnerabilities to ServiceNow's Operational Technology (OT) data model.
## ServiceNow integrations with Microsoft Defender for IoT
-Once you have the Operational Technology Manager application, two new integrations are available:
+Once you have the Operational Technology Manager application, two integrations are available:
### Service Graph Connector (SGC)
Track and resolve vulnerabilities of your OT assets with the data imported from
For more information, please see the [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e) information on the ServiceNow store.
-Please read the ServiceNow supporting links and documentation for the ServiceNow terms of service.
+For more information, read the ServiceNow supporting links and documentation for the ServiceNow terms of service.
## Next steps
-For more information, please see the ServiceNow store:
+Access the ServiceNow integrations from the ServiceNow store:
- [Service Graph Connector (SGC)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229) - [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e)
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md
The Splunk application can be installed locally ('Splunk Enterprise') or run on
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> * Download the Defender for IoT application in Splunk > * Send Defender for IoT alerts to Splunk
If you don't have an Azure subscription, create a [free account](https://azure.m
The following versions are required for the application to run. -- Defender for IoT version 2.4 and above.
+* Defender for IoT version 2.4 and above.
-- Splunkbase version 11 and above.
+* Splunkbase version 11 and above.
-- Splunk Enterprise version 7.2 and above.
+* Splunk Enterprise version 7.2 and above.
### Splunk permission requirements The following Splunk permission is required: -- Any user with an *Admin* level user role.
+* Any user with an *Admin* level user role.
## Download the Defender for IoT application in Splunk
To access the Defender for IoT application within Splunk, you will need to downl
The Defender for IoT alerts provide information about an extensive range of security events. These events include: -- Deviations from the learned baseline network activity.
+* Deviations from the learned baseline network activity.
-- Malware detections.
+* Malware detections.
-- Detections based on suspicious operational changes.
+* Detections based on suspicious operational changes.
-- Network anomalies.
+* Network anomalies.
-- Protocol deviations from protocol specifications.
+* Protocol deviations from protocol specifications.
:::image type="content" source="media/tutorial-splunk/address-scan.png" alt-text="A screen capture of an Address Scan Detected alert.":::
You can also configure Defender for IoT to send alerts to the Splunk server, whe
To send alert information to the Splunk servers from Defender for IoT, you will need to create a Forwarding Rule.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule.
+ **To create the forwarding rule**: 1. Sign in to the sensor, and select **Forwarding** from the left side pane.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see [Manage individual sensors](how-to-manage-individual-s
|Service area |Updates | |||
-|**OT networks** | - **Cloud feature**: [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
+| **OT networks** | [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
+|**Enterprise IoT networks** | [Enterprise IoT sensor alerts and recommendations (Public Preview)](#enterprise-iot-sensor-alerts-and-recommendations-public-preview) |
+
+### Enterprise IoT sensor alerts and recommendations (Public Preview)
+
+The Azure portal now provides the following additional security data for traffic detected by Enterprise IoT network sensors:
+
+|Data type |Description |
+|||
+|**Alerts** | The Enterprise IoT network sensor now triggers the following alerts: <br>- **Connection Attempt to Known Malicious IP** <br>- **Malicious Domain Name Request** |
+|**Recommendations** | The Enterprise IoT network sensor now triggers the following recommendation for detected devices, as relevant: <br>**Disable insecure administration protocol** |
+
+For more information, see:
+
+- [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts)
+- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+- [Enhance security posture with security recommendations](recommendations.md)
+- [Discover Enterprise IoT devices with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md)
### New purchase experience for OT plans
The sensor console is also synchronized with an on-premises management console,
For more information, see: - [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md)
+- [View and manage alerts on your sensor](how-to-view-alerts.md)
- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md) ### Sensor connections restored after certificate rotation
These features are now Generally Available (GA). Updates include the general loo
- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices. -- To enable the ability to collapse IT networks, ensure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.
+- Select **Disable Display IT Networks Groups** to prevent the ability to collapse IT networks in the map. This option is turned on by default.
- The **Simplified Map View** option has been removed.
The sensor console's **Custom alert rules** page now provides:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog. "lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
-For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
+For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor).
### CLI command updates
The following Defender for IoT options and configurations have been moved, remov
- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor). -
-## December 2021
-
-**Sensor software version**: 10.5.4
--- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)-- [Apache Log4j vulnerability](#apache-log4j-vulnerability)-- [Alerting](#alerting)-
-### Enhanced integration with Microsoft Sentinel (Preview)
-
-The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-
-For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
-
-### Apache Log4j vulnerability
-
-Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
-
-### Alerting
-
-Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
--- Alerts for certain minor events or edge-cases are now disabled.-- For certain scenarios, similar alerts are minimized in a single alert message.-
-These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
-
-For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
-
-#### Alerts permanently disabled
-
-The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
-
-**Policy engine alerts**
--- RPC Procedure Invocations-- Unauthorized HTTP Server-- Abnormal usage of MAC Addresses-
-#### Alerts disabled by default
-
-The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
-
-**Anomaly engine alert**
-- Abnormal Number of Parameters in HTTP Header-- Abnormal HTTP Header Length-- Illegal HTTP Header Content-
-**Operational engine alerts**
-- HTTP Client Error-- RPC Operation Failed-
-**Policy engine alerts**
-
-Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
--- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic-- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic-- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic-
-#### Updated alert functionality
-
-**Unauthorized Database Operation alert**
-Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
-- DDL traffic: alerting and monitoring are supported.-- DML traffic: Monitoring is supported. Alerting isn't supported.-
-**New Asset Detected alert**
-This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
-
-### Minimized alerting
-
-Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
-
-This new functionality is available on the following alerts:
--- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)-- Malware alerts, based on activity of the source device. (generated by the Malware engine). -- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)- ## Next steps [Getting started with Defender for IoT](getting-started.md)
digital-twins Reference Query Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-functions.md
description: Reference documentation for the Azure Digital Twins query language functions Previously updated : 02/25/2022 Last updated : 01/05/2023
This document contains reference information on *functions* for the [Azure Digital Twins query language](concepts-query-language.md).
+## ARRAY_CONTAINS
+
+A function to determine whether an array property of a twin (supported in DTDL V3) contains another specified value.
+
+### Syntax
++
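Based on the arguments described below, the call takes the general shape `ARRAY_CONTAINS(<array-to-check>, <contained-value>)` and is used inside a query's `WHERE` clause.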
+### Arguments
+
+* `<array-to-check>`: An array-type twin property that you want to check for the specified value
+* `<contained-value>`: A string, integer, double, or boolean representing the value to check for inside the array
+
+### Returns
+
+A Boolean value indicating whether the array contains the specified value.
+
+### Example
+
+The following query returns the name of all digital twins that have an array property `floor_number` where the array stored in this property contains a value of `2`.
++
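As a sketch of what running that example could look like from a shell, assuming the Azure CLI with the IoT extension and a placeholder instance name (the query text mirrors the description above):

```bash
# Run the ARRAY_CONTAINS example query against a Digital Twins instance.
# "myAdtInstance" is a placeholder; substitute your own instance name.
az dt twin query \
  --dt-name myAdtInstance \
  --query-command "SELECT T.name FROM DIGITALTWINS T WHERE ARRAY_CONTAINS(T.floor_number, 2)"
```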
+### Limitations
+
+The ARRAY_CONTAINS() function has the following limitations:
+* Array indexing is not supported.
+ - For example, `array-name[index] = 'foo_bar'`
+* Subqueries within the ARRAY_CONTAINS() property are not supported.
+ - For example, `SELECT T.name FROM DIGITALTWINS T WHERE ARRAY_CONTAINS (SELECT S.floor_number FROM DIGITALTWINS S, 4)`
+* ARRAY_CONTAINS() is not supported on properties of relationships.
+ - For example, say `Floor.Contains` is a relationship from Floor to Room and it has a `lift` property with a value of `["operating", "under maintenance", "under construction"]`. Queries like this are not supported: `SELECT Room FROM DIGITALTWINS Floor JOIN Room RELATED Floor.Contains WHERE Floor.$dtId = 'Floor-35' AND ARRAY_CONTAINS(Floor.Contains.lift, "operating")`.
+* ARRAY_CONTAINS() does not search inside nested arrays.
+ - For example, say a twin has a `tags` property with a value of `[1, [2,3], 3, 4]`. A search for `2` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 2)` will return `False`. A search for a value in the top level array, like `1` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 1)`, will return `True`.
+* ARRAY_CONTAINS() is not supported if the array contains objects.
+ - For example, say a twin has a `tags` property with a value of `[Room1, Room2]` where `Room1` and `Room2` are objects. Queries like this are not supported: `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, Room2)`.
+ ## CONTAINS A string function to determine whether a string property of a twin contains another specified string value.
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the Bicep file
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/azure-dns-private-resolver/).
This Bicep file is configured to create a:
Remove-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup
## Next steps In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains-- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/azure-dns-private-resolver/).
This template is configured to create a:
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri
## Next steps In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains-- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
energy-data-services How To Manage Legal Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md
Run the below curl command in Azure Cloud Bash to create a legal tag for a given
``` ### Sample request
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
```bash
- curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>' \
+ curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer eyxxxxxxx.........................' \
--header 'Content-Type: application/json' \ --data-raw '{
- "name": "<instance>-<data-partition-name>-legal-tag",
+ "name": "medstest-dp1-legal-tag",
"description": "Microsoft Energy Data Services Preview Legal Tag", "properties": { "contractId": "A1234",
Run the below curl command in Azure Cloud Bash to create a legal tag for a given
```JSON {
- "name": "<instance>-<data-partition-name>-legal-tag",
+ "name": "medsStest-dp1-legal-tag",
"description": "Microsoft Energy Data Services Preview Legal Tag", "properties": { "countryOfOrigin": [
The country of origin should follow [ISO Alpha2 format](https://www.nationsonlin
The Create Legal Tag API internally appends the data-partition-id to the legal tag name if it isn't already present. For instance, if the request has the name ```legal-tag```, then the created legal tag name would be ```<instancename>-<data-partition-id>-legal-tag``` ```bash
- curl --location --request POST 'https://<instance>.energy.azure.com/api/legal/v1/legaltags' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>' \
+ curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer eyxxxxxxx.........................' \
--header 'Content-Type: application/json' \ --data-raw '{ "name": "legal-tag",
The sample response will have data-partition-id appended to the legal tag name a
```JSON {
- "name": "<instance>-<data-partition-name>-legal-tag",
+ "name": "medstest-dp1-legal-tag",
"description": "Microsoft Energy Data Services Preview Legal Tag", "properties": { "countryOfOrigin": [
Run the below curl command in Azure Cloud Bash to get the legal tag associated w
``` ### Sample request
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
```bash
- curl --location --request GET 'https://<instance>.energy.azure.com/api/legal/v1/legaltags/<instance>-<data-partition-name>-legal-tag' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>'
+ curl --location --request GET 'https://medstest.energy.azure.com/api/legal/v1/legaltags/medstest-dp1-legal-tag' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer eyxxxxxxx.........................'
``` ### Sample response ```JSON {
- "name": "<instance>-<data-partition-name>-legal-tag",
+ "name": "medstest-dp1-legal-tag",
"description": "Microsoft Energy Data Services Preview Legal Tag", "properties": { "countryOfOrigin": [
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
"token_type": "Bearer", "expires_in": 86399, "ext_expires_in": 86399,
- "access_token": abcdefgh123456............."
+ "access_token": "abcdefgh123456............."
} ``` Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Microsoft Energy Data Services Preview instance.
The value to be sent for the param **"email"** is the **Object_ID (OID)** of the
**Sample request**
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
+ ```bash
- curl --location --request POST 'https://<instance>.energy.azure.com/api/entitlements/v2/groups/users@<instance>-<data-partition-name>.dataservices.energy/members' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>' \
+ curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/users@medstest-dp1.dataservices.energy/members' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............' \
--header 'Content-Type: application/json' \ --data-raw '{ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
The value to be sent for the param **"email"** is the **Object_ID (OID)** of the
**Sample request**
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
+ ```bash
- curl --location --request POST 'https://<instance>.energy.azure.com/api/entitlements/v2/groups/service.search.user@<instance>-<data-partition-name>.dataservices.energy/members' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>' \
+ curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.search.user@medstest-dp1.dataservices.energy/members' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............' \
--header 'Content-Type: application/json' \ --data-raw '{ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
Run the below curl command in Azure Cloud Bash to get all the groups associated
**Sample request**
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
+ ```bash
- curl --location --request GET 'https://<instance>.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>'
+ curl --location --request GET 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............'
``` **Sample response**
Run the below curl command in Azure Cloud Bash to get all the groups associated
{ "name": "users", "description": "Datalake users",
- "email": "users@<instance>-<data-partition-name>.dataservices.energy"
+ "email": "users@medstest-dp1.dataservices.energy"
}, { "name": "service.search.user", "description": "Datalake Search users",
- "email": "service.search.user@<instance>-<data-partition-name>.dataservices.energy"
+ "email": "service.search.user@medstest-dp1.dataservices.energy"
} ] }
As stated above, **DO NOT** delete the OWNER of a group unless you have another
**Sample request**
+Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1".
+ ```bash
- curl --location --request DELETE 'https://<instance>.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \
- --header 'data-partition-id: <instance>-<data-partition-name>' \
- --header 'Authorization: Bearer <access_token>'
+ curl --location --request DELETE 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............'
``` **Sample response**
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
event-hubs Event Hubs Auto Inflate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-auto-inflate.md
Azure Event Hubs is a highly scalable data streaming platform. As such, Event Hu
The Event Hubs service increases the throughput when load increases beyond the minimum threshold, without any requests failing with ServerBusy errors. > [!NOTE]
-> To learn more about the **premium** tier, see [Event Hubs Premium](event-hubs-premium-overview.md).
+> The auto-inflate feature is currently supported only in the standard tier.
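If you manage the namespace with the Azure CLI, auto-inflate can be switched on along these lines; the resource names and the 10-TU ceiling are illustrative placeholders, not values from this article:

```bash
# Enable auto-inflate on an existing standard-tier namespace and cap scaling at 10 TUs.
az eventhubs namespace update \
  --resource-group myResourceGroup \
  --name myEventHubsNamespace \
  --enable-auto-inflate true \
  --maximum-throughput-units 10
```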
## How Auto-inflate works in standard tier Event Hubs traffic is controlled by TUs (standard tier). For the limits such as ingress and egress rates per TU, see [Event Hubs quotas and limits](event-hubs-quotas.md). Auto-inflate enables you to start small with the minimum required TUs you choose. The feature then scales automatically to the maximum limit of TUs you need, depending on the increase in your traffic. Auto-inflate provides the following benefits:
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
An Event Hubs namespace is a management container for event hubs (or topics, in
## Event publishers
-Any entity that sends data to an event hub is an *event publisher* (synonymously used with *event producer*). Event publishers can publish events using HTTPS or AMQP 1.0 or the Kafka protocol. Event publishers use Azure Active Directory based authorization with OAuth2-issued JWT tokens or an Event Hub-specific Shared Access Signature (SAS) token gain publishing access.
+Any entity that sends data to an event hub is an *event publisher* (synonymously used with *event producer*). Event publishers can publish events using HTTPS or AMQP 1.0 or the Kafka protocol. Event publishers use Azure Active Directory based authorization with OAuth2-issued JWT tokens or an Event Hub-specific Shared Access Signature (SAS) token to gain publishing access.
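As a hedged sketch of the HTTPS path, a publisher holding a pre-generated SAS token could post a single event to the hub's `messages` endpoint as follows; the namespace, hub name, and token values are placeholders:

```bash
# Publish one event over HTTPS using a Shared Access Signature (SAS) token.
curl --request POST \
  "https://myNamespace.servicebus.windows.net/myEventHub/messages" \
  --header "Authorization: SharedAccessSignature sr=myNamespace.servicebus.windows.net&sig=<signature>&se=<expiry>&skn=<key-name>" \
  --header "Content-Type: application/atom+xml;type=entry;charset=utf-8" \
  --data '{ "message": "Hello Event Hubs" }'
```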
### Publishing an event
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md
Title: Send or receive events from Azure Event Hubs using JavaScript (latest)
-description: This article provides a walkthrough for creating a JavaScript application that sends/receives events to/from Azure Event Hubs using the latest azure/event-hubs package.
+ Title: Send or receive events from Azure Event Hubs using JavaScript
+description: This article provides a walkthrough for creating a JavaScript application that sends/receives events to/from Azure Event Hubs.
Previously updated : 02/22/2022 Last updated : 01/04/2023 ms.devlang: javascript-+
-# Send events to or receive events from event hubs by using JavaScript (azure/event-hubs)
-This quickstart shows how to send events to and receive events from an event hub using the **azure/event-hubs** JavaScript package.
+# Send events to or receive events from event hubs by using JavaScript
+This quickstart shows how to send events to and receive events from an event hub using the **@azure/event-hubs** npm package.
## Prerequisites
If you are new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.m
To complete this quickstart, you need the following prerequisites: - **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).-- Node.js version 8.x or later. Download the latest [long-term support (LTS) version](https://nodejs.org).
+- Node.js. Download the latest [long-term support (LTS) version](https://nodejs.org).
- Visual Studio Code (recommended) or any other integrated development environment (IDE). -- An active Event Hubs namespace and event hub. To create them, do the following steps:
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md).
- 1. In the [Azure portal](https://portal.azure.com), create a namespace of type *Event Hubs*, and then obtain the management credentials that your application needs to communicate with the event hub.
- 1. To create the namespace and event hub, follow the instructions at [Quickstart: Create an event hub by using the Azure portal](event-hubs-create.md).
- 1. Continue by following the instructions in this quickstart.
- 1. To get the connection string for your Event Hub namespace, follow the instructions in [Get connection string](event-hubs-get-connection-string.md#azure-portal). Record the connection string to use later in this quickstart.
-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.-
-### Install the npm package
+### Install the npm package(s) to send events
To install the [Node Package Manager (npm) package for Event Hubs](https://www.npmjs.com/package/@azure/event-hubs), open a command prompt that has *npm* in its path, change the directory
-to the folder where you want to keep your samples, and then run this command:
+to the folder where you want to keep your samples.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+Run these commands:
```shell npm install @azure/event-hubs
+npm install @azure/identity
```
-For the receiving side, you need to install two more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it has already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
+### [Connection String](#tab/connection-string)
-Run the following commands:
+Run this command:
```shell
-npm install @azure/storage-blob
+npm install @azure/event-hubs
```
-```shell
-npm install @azure/eventhubs-checkpointstore-blob
-```
++
+### Authenticate the app to Azure
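The passwordless option assumes the signed-in identity holds a data-plane role on the Event Hubs namespace. A minimal sketch with the Azure CLI, using the built-in **Azure Event Hubs Data Owner** role and placeholder names:

```bash
# Grant the signed-in user a data-plane role on the Event Hubs namespace.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Event Hubs Data Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace-name>"
```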
+ ## Send events
In this section, you create a JavaScript application that sends events to an eve
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com). 1. Create a file called *send.js*, and paste the following code into it:
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+ * `EVENT HUBS RESOURCE NAME`
+ * `EVENT HUB NAME`
+
+ ```javascript
+ const { EventHubProducerClient } = require("@azure/event-hubs");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Event hubs
+ const eventHubsResourceName = "EVENT HUBS RESOURCE NAME";
+ const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`;
+ const eventHubName = "EVENT HUB NAME";
+
+ // Azure Identity - passwordless authentication
+ const credential = new DefaultAzureCredential();
+
+ async function main() {
+
+ // Create a producer client to send messages to the event hub.
+ const producer = new EventHubProducerClient(fullyQualifiedNamespace, eventHubName, credential);
+
+ // Prepare a batch of three events.
+ const batch = await producer.createBatch();
+ batch.tryAdd({ body: "passwordless First event" });
+ batch.tryAdd({ body: "passwordless Second event" });
+ batch.tryAdd({ body: "passwordless Third event" });
+
+ // Send the batch to the event hub.
+ await producer.sendBatch(batch);
+
+ // Close the producer client.
+ await producer.close();
+
+ console.log("A batch of three events have been sent to the event hub");
+ }
+
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ });
+ ```
+
+ ## [Connection String](#tab/connection-string)
+
+ In the code, use real values to replace the following placeholders:
+ * `EVENT HUB NAME`
+ * `EVENT HUBS NAMESPACE CONNECTION STRING`
+ ```javascript const { EventHubProducerClient } = require("@azure/event-hubs");
In this section, you create a JavaScript application that sends events to an eve
console.log("Error occurred: ", err); }); ```
-1. In the code, use real values to replace the following:
- * `EVENT HUBS NAMESPACE CONNECTION STRING`
- * `EVENT HUB NAME`
+
+
+ 1. Run `node send.js` to execute this file. This command sends a batch of three events to your event hub. 1. In the Azure portal, verify that the event hub has received the messages. Refresh the page to update the chart. It might take a few seconds for it to show that the messages have been received.
To create an Azure storage account and a blob container in it, do the following
1. [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 2. [Create a blob container in the storage account](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+3. Authenticate to the blob container
+
+## [Passwordless (Recommended)](#tab/passwordless)
+
+
+## [Connection String](#tab/connection-string)
+
+[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+
+Note the connection string and the container name. You'll use them in the receive code.
+++
+### Install the npm packages to receive events
+
+For the receiving side, you need to install two more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it has already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+Run these commands:
+
+```shell
+npm install @azure/storage-blob
+npm install @azure/eventhubs-checkpointstore-blob
+npm install @azure/identity
+```
+
+### [Connection String](#tab/connection-string)
+
+Run these commands:
+
+```shell
+npm install @azure/storage-blob
+npm install @azure/eventhubs-checkpointstore-blob
+```
-Be sure to record the connection string and container name for later use in the receive code.
+ ### Write code to receive events 1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com). 1. Create a file called *receive.js*, and paste the following code into it:
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+ - `EVENT HUBS RESOURCE NAME`
+ - `EVENT HUB NAME`
+ - `STORAGE ACCOUNT NAME`
+ - `STORAGE CONTAINER NAME`
+
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { EventHubConsumerClient, earliestEventPosition } = require("@azure/event-hubs");
+ const { ContainerClient } = require("@azure/storage-blob");
+ const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");
+
+ // Event hubs
+ const eventHubsResourceName = "EVENT HUBS RESOURCE NAME";
+ const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`;
+ const eventHubName = "EVENT HUB NAME";
+ const consumerGroup = "$Default"; // name of the default consumer group
+
+ // Azure Storage
+ const storageAccountName = "STORAGE ACCOUNT NAME";
+ const storageContainerName = "STORAGE CONTAINER NAME";
+ const baseUrl = `https://${storageAccountName}.blob.core.windows.net`;
+
+ // Azure Identity - passwordless authentication
+ const credential = new DefaultAzureCredential();
+
+ async function main() {
+
+ // Create a blob container client and a blob checkpoint store using the client.
+ const containerClient = new ContainerClient(
+ `${baseUrl}/${storageContainerName}`,
+ credential
+ );
+ const checkpointStore = new BlobCheckpointStore(containerClient);
+
+ // Create a consumer client for the event hub by specifying the checkpoint store.
+ const consumerClient = new EventHubConsumerClient(consumerGroup, fullyQualifiedNamespace, eventHubName, credential, checkpointStore);
+
+ // Subscribe to the events, and specify handlers for processing the events and errors.
+ const subscription = consumerClient.subscribe({
+ processEvents: async (events, context) => {
+ if (events.length === 0) {
+ console.log(`No events received within wait time. Waiting for next interval`);
+ return;
+ }
+
+ for (const event of events) {
+ console.log(`Received event: '${event.body}' from partition: '${context.partitionId}' and consumer group: '${context.consumerGroup}'`);
+ }
+ // Update the checkpoint.
+ await context.updateCheckpoint(events[events.length - 1]);
+ },
+
+ processError: async (err, context) => {
+ console.log(`Error : ${err}`);
+ }
+ },
+ { startPosition: earliestEventPosition }
+ );
+
+ // After 30 seconds, stop processing.
+ await new Promise((resolve) => {
+ setTimeout(async () => {
+ await subscription.close();
+ await consumerClient.close();
+ resolve();
+ }, 30000);
+ });
+ }
+
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ });
+ ```
+
+ ### [Connection String](#tab/connection-string)
++
+ In the code, use real values to replace the following placeholders:
+ - `EVENT HUBS NAMESPACE CONNECTION STRING`
+ - `EVENT HUB NAME`
+ - `STORAGE CONNECTION STRING`
+ - `STORAGE CONTAINER NAME`
+ ```javascript const { EventHubConsumerClient, earliestEventPosition } = require("@azure/event-hubs"); const { ContainerClient } = require("@azure/storage-blob");
Be sure to record the connection string and container name for later use in the
const connectionString = "EVENT HUBS NAMESPACE CONNECTION STRING"; const eventHubName = "EVENT HUB NAME"; const consumerGroup = "$Default"; // name of the default consumer group
- const storageConnectionString = "AZURE STORAGE CONNECTION STRING";
- const containerName = "BLOB CONTAINER NAME";
+ const storageConnectionString = "STORAGE CONNECTION STRING";
+ const containerName = "STORAGE CONTAINER NAME";
async function main() { // Create a blob container client and a blob checkpoint store using the client.
Be sure to record the connection string and container name for later use in the
console.log("Error occurred: ", err); }); ```
-1. In the code, use real values to replace the following values:
- - `EVENT HUBS NAMESPACE CONNECTION STRING`
- - `EVENT HUB NAME`
- - `AZURE STORAGE CONNECTION STRING`
- - `BLOB CONTAINER NAME`
+
+
++ 1. Run `node receive.js` in a command prompt to execute this file. The window should display messages about received events. ```
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
Title: About Azure ExpressRoute FastPath
description: Learn about Azure ExpressRoute FastPath to send network traffic by bypassing the gateway - Previously updated : 08/10/2021 Last updated : 01/05/2023 --+ # About ExpressRoute FastPath
ExpressRoute virtual network gateway is designed to exchange network routes and
### Circuits
-FastPath is available on all ExpressRoute circuits. Public preview support for Private Link connectivity over FastPath is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits are not eligible for the preview.
+FastPath is available on all ExpressRoute circuits. Public preview support for Private Link connectivity over FastPath is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits aren't eligible for the preview.
### Gateways
While FastPath supports most configurations, it doesn't support the following fe
> [!NOTE] > * ExpressRoute Direct has a cumulative limit at the port level. > * Traffic will flow through the ExpressRoute gateway when these limits are reached.+ ## Public preview The following FastPath features are in Public preview:
-### Virtual network (Vnet) Peering
+### Virtual network (VNet) Peering
+ FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This feature is available for both IPv4 and IPv6 connectivity.
-**FastPath support for vnet peering is only available for ExpressRoute Direct connections.**
+**FastPath support for VNet peering is only available for ExpressRoute Direct connections.**
> [!NOTE]
-> * FastPath Vnet peering connectivity is not supported for Azure Dedicated Host workloads.
+> * FastPath VNet peering connectivity is not supported for Azure Dedicated Host workloads.
+
+### User Defined Routes (UDRs)
-## User Defined Routes (UDRs)
FastPath will honor UDRs configured on the GatewaySubnet and send traffic directly to an Azure Firewall or third party NVA. **FastPath support for UDRs is only available for ExpressRoute Direct connections**
FastPath will honor UDRs configured on the GatewaySubnet and send traffic direct
> * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads. > * FastPath UDR connectivity is not supported for IPv6 workloads.
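As a hedged sketch of the kind of UDR FastPath honors, the following creates a route table that forwards a prefix to an NVA and attaches it to the GatewaySubnet; all names, the prefix, and the NVA address are placeholders:

```bash
# Create a route table with a route that forwards traffic to an NVA,
# then associate the route table with the GatewaySubnet.
az network route-table create --resource-group myResourceGroup --name gateway-udr
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name gateway-udr \
  --name to-nva \
  --address-prefix 10.1.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name GatewaySubnet \
  --route-table gateway-udr
```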
-**Private Link Connectivity for 10Gbps ExpressRoute Direct Connectivity** - Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path.
-This preview is available in the following Azure Regions.
+### Private Link Connectivity for 10Gbps ExpressRoute Direct
+
+Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path.
+This preview is available in the following Azure Regions:
- Australia East - East Asia - East US
This preview supports connectivity to the following Azure
- Azure Storage - Third Party Private Link Services
-This preview is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits are not eligible for this preview. Additionally, this preview is available for both IPv4 and IPv6 connectivity.
+This preview is available for connections associated to ExpressRoute Direct circuits. Connections associated to ExpressRoute partner circuits aren't eligible for this preview. Additionally, this preview is available for both IPv4 and IPv6 connectivity.
> [!NOTE] > Private Link pricing will not apply to traffic sent over ExpressRoute FastPath during Public preview. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). >
-See [How to enroll in ExpressRoute FastPath features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview).
-
-
## Next steps
-To enable FastPath, see [Link a virtual network to ExpressRoute](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
+- To enable FastPath, see [Configure ExpressRoute FastPath](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
+- To enroll in FastPath preview features, see [Enroll in ExpressRoute FastPath features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Yes. ExpressRoute premium charges apply on top of ExpressRoute circuit charges a
ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard SKU and the Premium SKU. A key feature of Local is that a Local circuit at an ExpressRoute peering location gives you access only to one or two Azure regions in or near the same metro. In contrast, a Standard circuit gives you access to all Azure regions in a geopolitical area and a Premium circuit to all Azure regions globally. Specifically, with a Local SKU you can only advertise routes (over Microsoft and private peering) from the corresponding local region of the ExpressRoute circuit. You won't be able to receive routes for other regions different than the defined Local region.
-ExpressRoute Local may not be available for a ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners).
+ExpressRoute Local may not be available for an ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners).
### What are the benefits of ExpressRoute Local?
ExpressRoute Local also has the same limits on resources (for example, the numbe
### Where is ExpressRoute Local available and which Azure regions is each peering location mapped to?
-ExpressRoute Local is available at the peering locations where one or two Azure regions are close-by. It isn't available at a peering location where there's no Azure region in that state or province or country/region. See the exact mappings on [the Locations page](expressroute-locations-providers.md).
+ExpressRoute Local is available at the peering locations where one or two Azure regions are close-by. It isn't available at a peering location where there's no Azure region in that state or province or country/region. See the exact mappings on [ExpressRoute Locations page](expressroute-locations-providers.md#partners).
## ExpressRoute for Microsoft 365
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
You can view the count of number of flow records processed by ExpressRoute Traff
## Alerts for ExpressRoute gateway connections
-1. To set up alerts, go to **Azure Monitor**, then select **Alerts**.
+1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/eralertshowto.jpg" alt-text="alerts":::
-2. Select **+Select Target** and select the ExpressRoute gateway connection resource.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/monitor-overview.png" alt-text="Screenshot of the alerts option from the monitor overview page.":::
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto2.jpg" alt-text="target":::
-3. Define the alert details.
+1. Select **+ Create > Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal.
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto3.jpg" alt-text="action group":::
-4. Define and add the action group.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/actiongroup.png" alt-text="add action group":::
+1. On the *Select a signal* page, select the metric, resource health, or activity log signal that you want to be alerted on. Depending on the signal you select, you may need to enter additional information, such as a threshold value. You may also combine multiple signals into a single alert. Select **Next: Actions >** to define who gets notified and how.
-## Alerts based on each peering
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways.":::
+1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who will receive them.
-## Set up alerts for activity logs on circuits
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page.":::
-In the **Alert Criteria**, you can select **Activity Log** for the Signal Type and select the Signal.
+1. Select **Review + create** and then **Create** to deploy the alert into your subscription.
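The same kind of rule can also be scripted. A rough Azure CLI sketch, in which the metric name, threshold, and resource IDs are placeholders to replace with values from the signal list above:

```bash
# Create a metric alert rule on an ExpressRoute gateway connection.
az monitor metrics alert create \
  --name er-connection-alert \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/connections/<connection-name>" \
  --condition "avg <metric-name> > 1000000" \
  --action "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group-name>"
```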
+### Alerts based on each peering
+
+After you select a metric, certain metrics let you set up dimensions based on the peering or on a specific peer (virtual network).
++
+### Configure alerts for activity logs on circuits
+
+When selecting signals to be alerted on, you can select the **Activity Log** signal type.
+ ## More metrics in Log Analytics
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute.md
- Previously updated : 06/22/2021+ Last updated : 01/04/2023 # Monitoring Azure ExpressRoute
The following table lists common and recommended alert rules for ExpressRoute.
1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/eralertshowto.jpg" alt-text="alerts":::
-2. Select **+ Select Target** and select the ExpressRoute gateway connection resource.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/monitor-overview.png" alt-text="Screenshot of the alerts option from the monitor overview page.":::
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto2.jpg" alt-text="target":::
-3. Define the alert details.
+1. Select **+ Create > Alert rule** and select the ExpressRoute gateway connection resource. Select **Next: Condition >** to configure the signal.
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto3.jpg" alt-text="action group":::
-4. Define and add the action group.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
- :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/actiongroup.png" alt-text="add action group":::
+1. On the *Select a signal* page, select the metric, resource health, or activity log signal that you want to be alerted on. Depending on the signal you select, you may need to enter additional information, such as a threshold value. You may also combine multiple signals into a single alert. Select **Next: Actions >** to define who gets notified and how.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways.":::
-### Alerts based on each peering
+1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who will receive them.
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page.":::
-### Configure alerts for activity logs on circuits
-
-In the **Alert Criteria**, you can select **Activity Log** for the Signal Type and select the Signal.
-
+1. Select **Review + create** and then **Create** to deploy the alert into your subscription.
## Next steps
expressroute Site To Site Vpn Over Microsoft Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md
Title: 'Azure ExpressRoute: Configure S2S VPN over Microsoft peering'
-description: Configure IPsec/IKE connectivity to Azure over an ExpressRoute Microsoft peering circuit using a site-to-site VPN gateway.
+description: Learn how to set up IPsec/IKE connectivity to Azure over an ExpressRoute Microsoft peering circuit using a site-to-site VPN gateway.
- Previously updated : 02/25/2019 Last updated : 01/03/2023 --+ # Configure a site-to-site VPN over ExpressRoute Microsoft peering
This article helps you configure secure encrypted connectivity between your on-p
>[!NOTE] >When you set up site-to-site VPN over Microsoft peering, you are charged for the VPN gateway and VPN egress. For more information, see [VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). >
->
[!INCLUDE [updated-for-az](../../includes/hybrid-az-ps.md)] ## <a name="architecture"></a>Architecture -
- ![connectivity overview](./media/site-to-site-vpn-over-microsoft-peering/IPsecER_Overview.png)
For high availability and redundancy, you can configure multiple tunnels over the two MSEE-PE pairs of a ExpressRoute circuit and enable load balancing between the tunnels.
- ![high availability options](./media/site-to-site-vpn-over-microsoft-peering/HighAvailability.png)
VPN tunnels over Microsoft peering can be terminated either using VPN gateway, or using an appropriate Network Virtual Appliance (NVA) available through Azure Marketplace. You can exchange routes statically or dynamically over the encrypted tunnels without exposing the route exchange to the underlying Microsoft peering. In the examples in this article, BGP (different from the BGP session used to create the Microsoft peering) is used to dynamically exchange prefixes over the encrypted tunnels. >[!IMPORTANT] >For the on-premises side, typically Microsoft peering is terminated on the DMZ and private peering is terminated on the core network zone. The two zones would be segregated using firewalls. If you are configuring Microsoft peering exclusively for enabling secure tunneling over ExpressRoute, remember to filter through only the public IPs of interest that are getting advertised via Microsoft peering. >
->
## <a name="workflow"></a>Workflow
VPN tunnels over Microsoft peering can be terminated either using VPN gateway, o
## <a name="peering"></a>1. Configure Microsoft peering
-To configure a site-to-site VPN connection over ExpressRoute, you must leverage ExpressRoute Microsoft peering.
+To configure a site-to-site VPN connection over ExpressRoute, you must use ExpressRoute Microsoft peering.
* To configure a new ExpressRoute circuit, start with the [ExpressRoute prerequisites](expressroute-prerequisites.md) article, and then [Create and modify an ExpressRoute circuit](expressroute-howto-circuit-arm.md).
-* If you already have an ExpressRoute circuit, but do not have Microsoft peering configured, configure Microsoft peering using the [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md#msft) article.
+* If you already have an ExpressRoute circuit, but don't have Microsoft peering configured, configure Microsoft peering using the [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md#msft) article.
-Once you have configured your circuit and Microsoft peering, you can easily view it using the **Overview** page in the Azure portal.
+Once you've configured your circuit and Microsoft peering, you can easily view it using the **Overview** page in the Azure portal.
-![circuit](./media/site-to-site-vpn-over-microsoft-peering/ExpressRouteCkt.png)
## <a name="routefilter"></a>2. Configure route filters
-A route filter lets you identify services you want to consume through your ExpressRoute circuit's Microsoft peering. It is essentially an allow list of all the BGP community values.
+A route filter lets you identify services you want to consume through your ExpressRoute circuit's Microsoft peering. It's essentially an allowlist of all the BGP community values.
-![route filter](./media/site-to-site-vpn-over-microsoft-peering/route-filter.png)
In this example, the deployment is only in the *Azure West US 2* region. A route filter rule is added to allow only the advertisement of Azure West US 2 regional prefixes, which have the BGP community value *12076:51026*. You specify the regional prefixes that you want to allow by selecting **Manage rule**.
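A scripted equivalent of that rule could look like the following sketch; the resource group and filter names are placeholders, and the community value is the one called out above:

```bash
# Create a route filter and allow only the Azure West US 2 regional prefixes.
az network route-filter create \
  --resource-group myResourceGroup \
  --name myRouteFilter
az network route-filter rule create \
  --resource-group myResourceGroup \
  --filter-name myRouteFilter \
  --name allow-west-us-2 \
  --access Allow \
  --communities 12076:51026
```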
To see the list of prefixes received from the neighbor, use the following exampl
sh ip bgp vpnv4 vrf 10 neighbors X.243.229.34 received-routes ```
-To confirm that you are receiving the correct set of prefixes, you can cross-verify. The following Azure PowerShell command output lists the prefixes advertised via Microsoft peering for each of the services and for each of the Azure region:
+To confirm that you're receiving the correct set of prefixes, you can cross-verify. The following Azure PowerShell command output lists the prefixes advertised via Microsoft peering for each of the services and for each of the Azure region:
```azurepowershell-interactive Get-AzBgpServiceCommunity
Get-AzBgpServiceCommunity
In this section, IPsec VPN tunnels are created between the Azure VPN gateway and the on-premises VPN device. The examples use Cisco Cloud Service Router (CSR1000) VPN devices.
-The following diagram shows the IPsec VPN tunnels established between on-premises VPN device 1, and the Azure VPN gateway instance pair. The two IPsec VPN tunnels established between the on-premises VPN device 2 and the Azure VPN gateway instance pair isn't illustrated in the diagram, and the configuration details are not listed. However, having additional VPN tunnels improves high availability.
+The following diagram shows the IPsec VPN tunnels established between on-premises VPN device 1 and the Azure VPN gateway instance pair. The two IPsec VPN tunnels established between the on-premises VPN device 2 and the Azure VPN gateway instance pair aren't illustrated in the diagram, and their configuration details aren't listed. However, having more VPN tunnels improves high availability.
- ![VPN tunnels](./media/site-to-site-vpn-over-microsoft-peering/EstablishTunnels.png)
Over the IPsec tunnel pair, an eBGP session is established to exchange private network routes. The following diagram shows the eBGP session established over the IPsec tunnel pair:
- ![eBGP sessions over tunnel pair](./media/site-to-site-vpn-over-microsoft-peering/TunnelBGP.png)
The following diagram shows the abstracted overview of the example network:
- ![example network](./media/site-to-site-vpn-over-microsoft-peering/OverviewRef.png)
### About the Azure Resource Manager template examples
-In the examples, the VPN gateway and the IPsec tunnel terminations are configured using an Azure Resource Manager template. If you are new to using Resource Manager templates, or to understand the Resource Manager template basics, see [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). The template in this section creates a greenfield Azure environment (VNet). However, if you have an existing VNet, you can reference it in the template. If you are not familiar with VPN gateway IPsec/IKE site-to-site configurations, see [Create a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md).
+In the examples, the VPN gateway and the IPsec tunnel terminations are configured using an Azure Resource Manager template. If you're new to Resource Manager templates, or want to understand the template basics, see [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). The template in this section creates a greenfield Azure environment (VNet). However, if you have an existing VNet, you can reference it in the template. If you aren't familiar with VPN gateway IPsec/IKE site-to-site configurations, see [Create a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md).
>[!NOTE] >You do not need to use Azure Resource Manager templates in order to create this configuration. You can create this configuration using the Azure portal, or PowerShell. >
->
### <a name="variables3"></a>3.1 Declare the variables
In this example, the variable declarations correspond to the example network. Wh
### <a name="vnet"></a>3.2 Create virtual network (VNet)
-If you are associating an existing VNet with the VPN tunnels, you can skip this step.
+If you're associating an existing VNet with the VPN tunnels, you can skip this step.
```json {
This section of the template configures the VPN gateway with the required settin
* Create the VPN gateway with a **"RouteBased"** VpnType. This setting is mandatory if you want to enable the BGP routing between the VPN gateway, and the VPN on-premises. * To establish VPN tunnels between the two instances of the VPN gateway and a given on-premises device in active-active mode, the **"activeActive"** parameter is set to **true** in the Resource Manager template. To understand more about highly available VPN gateways, see [Highly available VPN gateway connectivity](../vpn-gateway/vpn-gateway-highlyavailable.md).
-* To configure eBGP sessions between the VPN tunnels, you must specify two different ASNs on either side. It is preferable to specify private ASN numbers. For more information, see [Overview of BGP and Azure VPN gateways](../vpn-gateway/vpn-gateway-bgp-overview.md).
+* To configure eBGP sessions between the VPN tunnels, you must specify two different ASNs on either side. It's preferable to specify private ASN numbers. For more information, see [Overview of BGP and Azure VPN gateways](../vpn-gateway/vpn-gateway-bgp-overview.md).
```json {
The Azure VPN gateway is compatible with many VPN devices from different vendors
When configuring your VPN device, you need the following items:
-* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. The examples use a basic shared key. We recommend that you generate a more complex key to use.
-* The Public IP address of your VPN gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to Virtual network gateways, then click the name of your gateway.
+* A shared key. This value is the same shared key that you specify when creating your site-to-site VPN connection. The examples use a basic shared key. We recommend that you generate a more complex key to use.
+* The Public IP address of your VPN gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to Virtual network gateways, then select the name of your gateway.
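One way to look up that address from a shell; the public IP resource name is a placeholder, and an active-active gateway has two public IP resources, one per instance:

```bash
# Show the public IP address assigned to a VPN gateway instance.
az network public-ip show \
  --resource-group myResourceGroup \
  --name myVpnGatewayPublicIp \
  --query ipAddress \
  --output tsv
```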
-Typically eBGP peers are directly connected (often over a WAN connection). However, when you are configuring eBGP over IPsec VPN tunnels via ExpressRoute Microsoft peering, there are multiple routing domains between the eBGP peers. Use the **ebgp-multihop** command to establish the eBGP neighbor relationship between the two not-directly connected peers. The integer that follows ebgp-multihop command specifies the TTL value in the BGP packets. The command **maximum-paths eibgp 2** enables load balancing of traffic between the two BGP paths.
+Typically eBGP peers are directly connected (often over a WAN connection). However, when you're configuring eBGP over IPsec VPN tunnels via ExpressRoute Microsoft peering, there are multiple routing domains between the eBGP peers. Use the **ebgp-multihop** command to establish the eBGP neighbor relationship between the two not-directly connected peers. The integer that follows ebgp-multihop command specifies the TTL value in the BGP packets. The command **maximum-paths eibgp 2** enables load balancing of traffic between the two BGP paths.
### <a name="cisco1"></a>Cisco CSR1000 example
Peer: 52.175.253.112 port 4500 fvrf: (none) ivrf: (none)
Outbound: #pkts enc'ed 477 drop 0 life (KB/Sec) 4607953/437 ```
-The line protocol on the Virtual Tunnel Interface (VTI) does not change to "up" until IKE phase 2 has completed. The following command verifies the security association:
+The line protocol on the Virtual Tunnel Interface (VTI) doesn't change to "up" until IKE phase 2 has completed. The following command verifies the security association:
``` csr1#show crypto ikev2 sa
external-attack-surface-management Domain Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/domain-asset-filters.md
The following filters require that the user manually enters the value with which
| Filter name | Description | Value format example | Applicable operators | ||-|-|--|
-| Domain Status | Any detected domain configurations. | clientDeleteProhibited, clientRenewProhibited, clientTransferProhibited, clientUpdateProhibited | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does not match` `In` `Not In` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
+| Domain Status | Any detected domain configurations. | clientDeleteProhibited, clientRenewProhibited, clientTransferProhibited, clientUpdateProhibited | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not In` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
| IANA ID | The allocated unique ID for a domain, IP or AS seen within WhoIs, IANA and ICANN records. | 1005 | | | Domain | The domain name of the desired asset(s). | Must align with the standard format of domains in inventory: "domain.tld" | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does not match` `In` `Not In` `Starts with in` `Does not start with in` `Matches in` `Does not match in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Name Server | Any name servers connected to the domain. | dns.domain.com | |
external-attack-surface-management Host Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/host-asset-filters.md
The following filters require that the user manually enters the value with which
| Attribute Type | Additional services running on the asset. This can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Attribute Type & Value | The attribute type and value within a single field. | address 192.168.92.73 | | | Attribute Value | The values for any attributes found on the asset. | 192.168.92.73 | |
-| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CVE-2015-9251 | |
+| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CWE-89 | |
| City | The city of origin detected for this asset. | Redmond | | | Country | The country/region of origin detected for this asset. | United States | | | Country Code | The country code associated with the asset. | USA | |
The following filters require that the user manually enters the value with which
| Host | The host name. | host.contoso.com | | | Name Server | Any name servers connected to the host. | dns.contoso.com | | | Registrar | The name of the registrar within the Whois record. | GODADDY.COM, INC. | |
-| Resource Hos | The hosts of any resources running on the asset. | host.resource.com | |
+| Resource Host | The hosts of any resources running on the asset. | host.resource.com | |
| Resource URL | Any URLs associated to resources on the asset. | host.resource.com/supplychain.js | | | Web Component Name | The name of a web component observed on an asset. | Netscaler Gateway | | | Web Component Name & Version | Both the name and detected version observed on the asset. | jQuery 3.4.1 | |
external-attack-surface-management Ip Address Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ip-address-asset-filters.md
The following filters require that the user manually enters the value with which
| Attribute Type | Additional services running on the asset. This can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Attribute Type & Value | The attribute type and value within a single field. | address 192.168.92.73 | | | Attribute Value | The values for any attributes found on the asset. | 192.168.92.73 | |
-| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CVE-2015-9251 | |
+| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CWE-89 | |
| City | The city of origin detected for this asset. | Redmond | | | Country | The country/region of origin detected for this asset. | United States | | | Country Code | The applicable country code for the country/region of origin. | USA | |
external-attack-surface-management Ip Block Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/ip-block-asset-filters.md
The following filters require that the user manually enters the value with which
| Filter name | Description | Value format | Applicable operators | ||-|--|| | ASN | Autonomous System Number is a network identification for transporting data on the Internet between Internet routers. An ASN will have associated public IP blocks tied to it where hosts are located. | 12345 | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` |
-| BGP Prefix | Any text values in the BGP prefix. | NET-HHS4 | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
+| BGP Prefix | Any text values in the BGP prefix. | 123 4567 89 192.168.92.73/16 | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
| IP Block | The IP block that is associated with the asset. | 192.168.92.73/16 | | | Whois Admin Email | The email address of the listed administrator of a Whois record. | name@domain.com | | | Whois Admin Name | The name of the listed administrator. | John Smith | |
external-attack-surface-management Page Asset Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/page-asset-filters.md
The following filters require that the user manually enters the value with which
| Attribute Type | Additional services running on the asset. This can include IP addresses trackers. | address, AdblockPlusAcceptableAdsSignature | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not in` `Starts with in` `Does not start with in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | | Attribute Type & Value | The attribute type and value within a single field. | address 192.168.92.73 | | | Attribute Value | The values for any attributes found on the asset. | 192.168.92.73 | |
-| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CVE-2015-9251 | |
+| CWE ID | Searches for assets by a specific CWE ID, or range of IDs. | CWE-89 | |
| City | The city of origin detected for this asset. | Redmond | | | Country | The country/region of origin detected for this asset. | United States | | | Country Code | The country code associated with the asset. | USA | |
external-attack-surface-management Understanding Billable Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-billable-assets.md
This section describes the conditions that the three asset types listed above mu
### Approved hosts
-Hosts are considered billable if the Defender EASM system has observed resolutions within the last 30 days. All host-IP combinations from Approved Inventory will be identified as potential billable assets.
+Hosts are considered billable if the Defender EASM system has observed resolutions within the last 30 days. All host-IP combinations from Approved Inventory will be identified as potential billable assets. All hosts in the Approved Inventory state are considered billable, regardless of the state of the coinciding IP address.
For example: if www.contoso.com has resolved to 1.2.3.4 and 5.6.7.8 in the past 30 days, both combinations will be added to the host count list:
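As a rough illustration of that counting, the following sketch (with made-up resolution records and dates) keeps each distinct host-IP pair seen in the past 30 days.

```python
from datetime import datetime, timedelta

# Made-up resolution records: (host, ip, last_seen). Each distinct host-IP pair
# observed in the past 30 days counts once, as in the example above.
resolutions = [
    ("www.contoso.com", "1.2.3.4", datetime(2023, 1, 3)),
    ("www.contoso.com", "5.6.7.8", datetime(2022, 12, 20)),
    ("old.contoso.com", "9.9.9.9", datetime(2022, 10, 1)),  # stale; not counted
]

cutoff = datetime(2023, 1, 6) - timedelta(days=30)
billable_pairs = {(host, ip) for host, ip, seen in resolutions if seen >= cutoff}
print(len(billable_pairs), sorted(billable_pairs))  # 2 billable host-IP pairs
```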
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
Previously updated : 10/03/2022 Last updated : 01/04/2023
The Azure Firewall engineering team updates the firewall on an as-needed basis (
### Idle timeout
-An idle timer is in place to recycle idle sessions. The default value is four minutes. Applications that maintain keepalives don't idle out. If the application needs more than 4 minutes (typical of IOT devices), you can contact support to extend the time to 30 minutes in the backend.
+An idle timer is in place to recycle idle sessions. The default value is four minutes. Applications that maintain keepalives don't idle out. If the application needs more than four minutes (typical of IoT devices), you can contact support to extend the idle timeout for inbound connections to 30 minutes in the backend. The idle timeout for outbound or east-west traffic can't be changed.
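As an illustration of the keepalive behavior mentioned above, here's a minimal sketch (assuming a Linux client and a hypothetical endpoint behind the firewall) that enables TCP keepalives so an otherwise quiet connection isn't recycled by the idle timer.

```python
import socket

# Enable TCP keepalives on a client socket so the connection doesn't sit idle
# for longer than the firewall's four-minute timer.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: first probe after 120 s of idleness, then every 30 s,
# giving up after 4 unanswered probes. Values are illustrative only.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

sock.connect(("device.example.com", 8883))  # hypothetical endpoint behind the firewall
```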
### Auto-recovery
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Previously updated : 10/25/2022 Last updated : 01/04/2023
Even though you can't delete the default rule collection groups nor modify their
Rule collection groups contain one or multiple rule collections, which can be of type DNAT, network, or application. For example, you can group rules belonging to the same workloads or a VNet in a rule collection group.
-Rule collection groups have a maximum size of 2 MB. If you need more than 2 MB, you can split the rules into multiple rule collection groups. A Firewall Policy created before July 2022 can contain 50 rule collection groups and a Firewall Policy created after July 2022 can contain 60 rule collection groups.
+For rule collection group size limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
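For orientation, here's a simplified sketch (an assumed, abbreviated shape, not the authoritative schema) of a rule collection group that gathers the network rules for a single workload; names, priorities, and addresses are placeholders.

```python
# An assumed, abbreviated shape of a rule collection group holding the
# network rules for one workload.
rule_collection_group = {
    "name": "workload-a-rcg",
    "properties": {
        "priority": 200,
        "ruleCollections": [
            {
                "ruleCollectionType": "FirewallPolicyFilterRuleCollection",
                "name": "workload-a-network-rules",
                "priority": 100,
                "action": {"type": "Allow"},
                "rules": [
                    {
                        "ruleType": "NetworkRule",
                        "name": "allow-sql",
                        "ipProtocols": ["TCP"],
                        "sourceAddresses": ["10.1.0.0/24"],
                        "destinationAddresses": ["10.2.0.0/24"],
                        "destinationPorts": ["1433"],
                    }
                ],
            }
        ],
    },
}
```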
## Rule collections
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md
This report allows you to have graphical and statistics view of WAF patterns by
| Request by top Hostnames | A table of requests by top 50 hostname, in descending order. | | Requests by top user agents | A table of requests by top 50 user agents, in descending order. |
-## CVS format
+## CSV format
You can download CSV files for different tabs in reports. This section describes the values in each CSV file.
-### General information about the CVS report
+### General information about the CSV report
Every CSV report includes some general information that's available in all CSV files, with variables based on the report you download.
governance Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/concepts/lifecycle.md
Title: Understand the lifecycle of a blueprint description: Learn about the lifecycle that a blueprint definition goes through and details about each stage, including updating and removing blueprint assignments. Previously updated : 08/17/2021 Last updated : 01/04/2023 + # Understand the lifecycle of an Azure Blueprint
To fully understand a blueprint and the stages, we'll cover a standard lifecycle
## Creating and editing a blueprint
-When creating a blueprint, add artifacts to it, save to a management group or subscription, and
-provided a unique name and a unique version. The blueprint is now in a **Draft** mode and can't yet
+To create a blueprint, add artifacts to it, save the definition to the management group or subscription scope, and
+provide a unique name and a unique version. The blueprint is now in a **Draft** mode and can't yet
be assigned. While in the **Draft** mode, it can continue to be updated and changed. A never published blueprint in **Draft** mode displays a different icon on the **Blueprint
governance Machine Configuration Dsc Extension Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-dsc-extension-migration.md
As a result, new Linux packages will require custom module development.
Linux content authored using Chef InSpec remains supported but should only be used for legacy configurations.
+#### Updated "nx" module functionality
+
+A new "nx" module will be released with the purpose of making managing Linux systems easier for PowerShell users.
+
+The module will help in managing common tasks such as:
+
+- User and group management
+- File system operations (changing mode, owner, listing, set/replace content)
+- Service management (start, stop, restart, remove, add)
+- Archive operations (compress, extract)
+- Package Management (list, search, install, uninstall packages)
+
+The module will include class-based DSC resources for Linux, as well as built-in Machine Configuration packages.
+
+To provide feedback on the functionality listed above, please open an issue on the documentation and we will respond accordingly.
+ #### Will I have to add "Reasons" property to custom resources? Implementing the
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand the machine configuration feature of Azure Policy
+ Title: Understand Azure Automanage Machine Configuration
description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Last updated 01/03/2023
governance NZ_ISM_Restricted_V3_5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/NZ_ISM_Restricted_v3_5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[All authorization rules except RootManageSharedAccessKey should be removed from Service Bus namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1817ec0-a368-432a-8057-8371e17ac6ee) |Service Bus clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. To align with the least privilege security model, you should create access policies at the entity level for queues and topics to provide access to only the specific entity |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditNamespaceAccessRules_Audit.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### 6.2.6 Resolving vulnerabilities
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
### 18.4.7 Intrusion Detection and Prevention strategy (IDS/IPS)
governance Pciv3_2_1_2018_Audit Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/PCIv3_2_1_2018_audit.md pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI v3.2.1:2018 PCI DSS 3.2.1 description: Details of the PCI v3.2.1:2018 PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) | |[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) | |[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
governance RBI_ITF_Banks_V2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/RBI_ITF_Banks_v2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) | |[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
-### Authentication Framework For Customers-9.1
+### Authentication Framework For Customers-9.3
**ID**:
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Patch/Vulnerability & Change Management-7.2
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Patch/Vulnerability & Change Management-7.6
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Patch/Vulnerability & Change Management-7.2
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Patch/Vulnerability & Change Management-7.6
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2
initiative definition.
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4
initiative definition.
|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | |[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.1
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2
initiative definition.
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4
initiative definition.
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) | |[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
-### Authentication Framework For Customers-9.1
+### Authentication Framework For Customers-9.3
**ID**:
initiative definition.
|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
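For the "Azure Key Vault should have firewall enabled" row above, the configuration it audits can be sketched with the Azure CLI; the vault name, resource group, and IP range below are placeholders, not values from the policy:

```bash
# Deny public access by default, then allow only a specific IP range (placeholder names and range).
az keyvault update --name contoso-kv --resource-group contoso-rg --default-action Deny
az keyvault network-rule add --name contoso-kv --resource-group contoso-rg --ip-address 203.0.113.0/24
```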
initiative definition.
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) | |[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
### Advanced Real-Time Threat Defence and Management-13.3
initiative definition.
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) | |[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse,](https://aka.ms/disksse,) Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | |[Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
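The "Transparent Data Encryption on SQL databases should be enabled" policy in the table above audits the database-level TDE setting. A minimal sketch of enabling it with the Azure CLI, using placeholder resource names:

```bash
# Enable TDE on a single database (server, database, and resource group names are placeholders).
az sql db tde set --resource-group contoso-rg --server contoso-sql --database contosodb --status Enabled
```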
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
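The renamed "Azure DDoS Protection Standard should be enabled" definition checks virtual networks that contain a subnet backing an Application Gateway with a public IP. A hedged Azure CLI sketch of the configuration it expects, with placeholder plan and network names:

```bash
# Create a DDoS protection plan and attach it to the virtual network (placeholder names).
az network ddos-protection create --resource-group contoso-rg --name contoso-ddos-plan
az network vnet update --resource-group contoso-rg --name contoso-vnet \
  --ddos-protection true --ddos-protection-plan contoso-ddos-plan
```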
## Incident Response & Management
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | |[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
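The Defender for servers and email notification rows above correspond to Defender for Cloud settings that can also be configured from the command line. A sketch assuming the `az security` command group available in recent CLI versions; the contact name and email are placeholders and flag spellings may vary by CLI version:

```bash
# Enable the Defender for servers plan and register a security contact for high severity alerts (placeholder email).
az security pricing create --name VirtualMachines --tier Standard
az security contact create --name 'default1' --email 'secops@contoso.com' \
  --alert-notifications 'on' --alerts-admins 'on'
```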
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.2
initiative definition.
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Vulnerability Assessment And Penetration Test And Red Team Exercises-18.4
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED
description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
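Because each control in these compliance articles maps to one or more policy definitions, assessment results surface through Azure Policy's compliance state APIs. A minimal sketch of querying them at subscription scope with the Azure CLI:

```bash
# Summarize compliance for the current subscription, then list a few non-compliant resources.
az policy state summarize
az policy state list --filter "complianceState eq 'NonCompliant'" --top 10
```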
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### When to patch security vulnerabilities - 1144
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### When to patch security vulnerabilities - 1472
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### When to patch security vulnerabilities - 1494
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### When to patch security vulnerabilities - 1495
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### When to patch security vulnerabilities - 1496
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
## Guidelines for System Management - Data backup and restoration
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Virtual machines should be connected to a specified workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff47b5582-33ec-4c5c-87c0-b010a6b2e917) |Reports virtual machines as non-compliant if they aren't logging to the Log Analytics workspace specified in the policy/initiative assignment. |AuditIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_WorkspaceMismatch_VM_Audit.json) |
### Events to be logged - 1537
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
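The updated "Audit diagnostic setting for selected resource types" definition (now version 2.0.1) takes the resource types to audit as an assignment parameter. A hedged assignment sketch, assuming the commonly documented parameter name `listOfResourceTypes` and a placeholder scope; verify the parameter name against the definition JSON linked above:

```bash
# Assign the definition and pass the resource types to audit (parameter name and scope are assumptions).
az policy assignment create \
  --name 'audit-diagnostic-settings' \
  --policy '7f89b1eb-583c-429a-8828-af049802c1d9' \
  --scope '/subscriptions/00000000-0000-0000-0000-000000000000' \
  --params '{"listOfResourceTypes": {"value": ["Microsoft.KeyVault/vaults", "Microsoft.Sql/servers"]}}'
```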
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
## Guidelines for Cryptography - Cryptographic fundamentals
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
-### Deploy firewall at the edge of enterprise network
+### Deploy DDOS protection
-**ID**: Azure Security Benchmark NS-3
+**ID**: Azure Security Benchmark NS-5
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
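To bring a virtual network into compliance with this definition, a DDoS protection plan has to exist and be linked to the network. A minimal Azure CLI sketch with hypothetical resource names:

```bash
# Create a DDoS protection plan and attach it to an existing virtual network.
# Resource group, plan, and VNet names are hypothetical.
az network ddos-protection create \
  --resource-group my-rg \
  --name my-ddos-plan

az network vnet update \
  --resource-group my-rg \
  --name my-vnet \
  --ddos-protection-plan my-ddos-plan \
  --ddos-protection true
```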
### Deploy web application firewall
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[\[Preview\]: Linux machines should encrypt temp disks, caches, and data flows between Compute and Storage resources.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca88aadc-6e2b-416c-9de2-5a0f01d1693f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Use Azure Disk Encryption or Encryption At Host to protect your virtual machine's OS and data disks, temp disks, data caches and any data flowing between compute and storage. To learn more about different disk encryption offerings, see [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison). |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxVMEncryption_AINE.json) |
+|[\[Preview\]: Windows machines should encrypt temp disks, caches, and data flows between Compute and Storage resources.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3dc5edcd-002d-444c-b216-e123bbfa37c0) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Use Azure Disk Encryption or Encryption At Host to protect your virtual machine's OS and data disks, temp disks, data caches and any data flowing between compute and storage. To learn more about different disk encryption offerings, see [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison). |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsVMEncryption_AINE.json) |
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | |[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) | |[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
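For the two preview definitions above, one way to satisfy the audit is to enable encryption at host on the virtual machine; the subscription must have the `EncryptionAtHost` feature registered and the VM must be deallocated before the change. A minimal Azure CLI sketch with hypothetical names:

```bash
# Register the host-based encryption feature once per subscription.
az feature register --namespace Microsoft.Compute --name EncryptionAtHost

# Deallocate the VM, turn on encryption at host, then start it again.
# Resource group and VM names are hypothetical.
az vm deallocate --resource-group my-rg --name my-vm
az vm update --resource-group my-rg --name my-vm \
  --set securityProfile.encryptionAtHost=true
az vm start --resource-group my-rg --name my-vm
```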
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | |[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks) |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable threat detection for identity and access management
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | |[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks) |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable logging for security investigation
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
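The Defender plans referenced above are enabled per subscription. A minimal Azure CLI sketch; the plan names `Containers` and `CloudPosture` are assumed to match the pricing-tier names used by Defender for Cloud:

```bash
# Turn on Defender for Containers and Defender CSPM for the current subscription.
# Plan names are assumptions; list valid names with `az security pricing list`.
az security pricing create --name Containers --tier Standard
az security pricing create --name CloudPosture --tier Standard
```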
### Detection and analysis - investigate an incident
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
## Posture and Vulnerability Management
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Rapidly and automatically remediate vulnerabilities
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
+|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode,](https://aka.ms/computevm-windowspatchassessmentmode,) for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) |
|[\[Preview\]: System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) | |[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) | |[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
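For the periodic-assessment preview definition above, the check passes when the virtual machine's patch settings use platform-driven assessment. A minimal Azure CLI sketch for a Linux VM with hypothetical names; for Windows the assumed property path is `osProfile.windowsConfiguration.patchSettings.assessmentMode`:

```bash
# Switch a Linux VM to platform-managed periodic assessment of missing updates.
# Resource group and VM names are hypothetical.
az vm update --resource-group my-rg --name my-vm \
  --set osProfile.linuxConfiguration.patchSettings.assessmentMode=AutomaticByPlatform
```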
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
initiative definition.
||||| |[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |

### Record network packets and flow logs
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) | |[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
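A resource becomes compliant with the diagnostic-setting and resource-log definitions above once a diagnostic setting routes its logs somewhere, for example to a Log Analytics workspace. A minimal Azure CLI sketch; the resource ID, workspace ID, and log category are illustrative assumptions:

```bash
# Send a resource's logs to a Log Analytics workspace.
# IDs and the log category are illustrative placeholders.
az monitor diagnostic-settings create \
  --name "to-log-analytics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-sql-server" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace" \
  --logs '[{"category": "SQLSecurityAuditEvents", "enabled": true}]'
```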
initiative definition.
||||| |[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Deploy automated operating system patch management solution
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 11/28/2022 Last updated : 01/05/2023
The name on each built-in links to the initiative definition source on the
**category** property in **metadata**. To jump to a specific **category**, use the menu on the right side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
+## ChangeTrackingAndInventory
++ ## Cosmos DB [!INCLUDE [azure-policy-reference-policysets-cosmos-db](../../../../includes/policy/reference/bycat/policysets-cosmos-db.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 11/28/2022 Last updated : 01/05/2023
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-cdn](../../../../includes/policy/reference/bycat/policies-cdn.md)]
+## ChangeTrackingAndInventory
++ ## Cognitive Services [!INCLUDE [azure-policy-reference-policies-cognitive-services](../../../../includes/policy/reference/bycat/policies-cognitive-services.md)]
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
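The "Audit diagnostic setting for selected resource types" definition (7f89b1eb-583c-429a-8828-af049802c1d9) listed above flags resources without diagnostic settings. As an illustration only, and not part of the updated article, a resource can be made compliant by sending its logs to a Log Analytics workspace; the resource type, categories, and IDs below are placeholder assumptions.

```bash
# Minimal sketch (not from the article): create a diagnostic setting so a
# resource passes the "Audit diagnostic setting for selected resource types"
# check (7f89b1eb-583c-429a-8828-af049802c1d9).
# Resource and workspace IDs are placeholders; log categories vary by resource type.
resourceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
workspaceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

az monitor diagnostic-settings create \
  --name "send-to-log-analytics" \
  --resource "$resourceId" \
  --workspace "$workspaceId" \
  --logs '[{"category": "AuditEvent", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```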
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
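As a hedged aside (not from the updated article), the "Azure DDoS Protection Standard should be enabled" audit above is typically satisfied by creating a DDoS protection plan and attaching it to the virtual network; resource names in the sketch are placeholders.

```bash
# Minimal sketch (not from the article): create a DDoS protection plan and
# attach it to a virtual network so the VNet passes the
# "Azure DDoS Protection Standard should be enabled" audit.
# Resource group, plan, and VNet names are placeholders.
az network ddos-protection create \
  --resource-group "my-rg" \
  --name "my-ddos-plan"

az network vnet update \
  --resource-group "my-rg" \
  --name "my-vnet" \
  --ddos-protection true \
  --ddos-protection-plan "my-ddos-plan"
```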
### Boundary Protection
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0
description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
-### Ensure that Activity Log Alert exists for Create or Update Security Solution
+### Ensure that Activity Log Alert exists for Delete Security Solution
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.6
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.7
**Ownership**: Shared |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0
description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | |[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Ensure that VA setting Periodic Recurring Scans is enabled on a SQL server
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3
description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) | |[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
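The updated description for "Cognitive Services accounts should disable public network access" above points at the account's public network access property. Purely as an illustration, and not taken from the article, that property can be set with the generic `az resource update` command; the property path and resource names below are assumptions.

```bash
# Minimal sketch (not from the article): disable public network access on a
# Cognitive Services account so it complies with the
# "Cognitive Services accounts should disable public network access" policy.
# The property path (properties.publicNetworkAccess) and names are assumptions.
az resource update \
  --resource-group "my-rg" \
  --name "my-cognitive-account" \
  --resource-type "Microsoft.CognitiveServices/accounts" \
  --set properties.publicNetworkAccess="Disabled"
```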
This built-in initiative is deployed as part of the
|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | |[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) | |[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
This built-in initiative is deployed as part of the
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
This built-in initiative is deployed as part of the
|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) | |[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) | |[An activity log alert should exist for specific Security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) | |[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) | |[An activity log alert should exist for specific Security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
||||| |[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) | |[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) | |[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) | |[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
## Security Assessment
This built-in initiative is deployed as part of the
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | |[Security Center standard pricing tier should be selected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls.
This built-in initiative is deployed as part of the
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | |[Security Center standard pricing tier should be selected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
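After one of the audit definitions above is assigned, compliance results can be queried from Azure Policy. The sketch below is illustrative only and not part of the updated article; it assumes the vulnerability assessment definition GUID shown earlier and the default OData filter fields exposed by `az policy state list`.

```bash
# Minimal sketch (not from the article): list resources that are non-compliant
# with the "Vulnerability assessment should be enabled on your SQL servers"
# definition (ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9).
az policy state list \
  --filter "policyDefinitionName eq 'ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9' and complianceState eq 'NonCompliant'" \
  --query "[].resourceId" \
  -o table
```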
## Configuration Management
This built-in initiative is deployed as part of the
|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | |[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
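The Cognitive Services rows above audit whether public network access is disabled. A minimal sketch of flipping that property on an existing account, assuming the generic `az resource update` path is acceptable and using placeholder account and resource group names:

```bash
# Illustrative only: disable public network access on a Cognitive Services account
# by setting the property the audit policy inspects.
az resource update \
  --resource-group <resource-group> \
  --name <account-name> \
  --resource-type "Microsoft.CognitiveServices/accounts" \
  --set properties.publicNetworkAccess=Disabled
```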
This built-in initiative is deployed as part of the
|[Security Center standard pricing tier should be selected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.
This built-in initiative is deployed as part of the
|[Security Center standard pricing tier should be selected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Remediate vulnerabilities in accordance with risk assessments.
This built-in initiative is deployed as part of the
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
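Once audit-mode definitions like the ones above are assigned, results surface as Azure Policy compliance states. A minimal sketch of querying them with the Azure CLI, assuming the `az policy state` commands are available in the installed CLI version:

```bash
# Illustrative only: list non-compliant resources in the current subscription
# and show which policy definition flagged each one.
az policy state list \
  --filter "complianceState eq 'NonCompliant'" \
  --query "[].{resource:resourceId, policy:policyDefinitionName}" \
  --output table
```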
## Risk Management
This built-in initiative is deployed as part of the
|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | |[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
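The App Service rows above audit HTTPS-only access and the minimum TLS version. A hedged sketch of configuring both on an existing app; the app and resource group names are placeholders, not values from the article:

```bash
# Illustrative only: require HTTPS and set the minimum TLS version on an App Service app.
az webapp update --resource-group <resource-group> --name <app-name> --https-only true
az webapp config set --resource-group <resource-group> --name <app-name> --min-tls-version "1.2"
```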
This built-in initiative is deployed as part of the
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
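The Key Vault firewall row above expects public access to be denied by default, with specific ranges allowed as needed. A minimal sketch, using `203.0.113.0/24` purely as an illustrative client range:

```bash
# Illustrative only: deny public access by default, then allow a specific address range.
az keyvault update --name <vault-name> --resource-group <resource-group> --default-action Deny
az keyvault network-rule add --name <vault-name> --resource-group <resource-group> --ip-address "203.0.113.0/24"
```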
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High
description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
Policy And Procedures
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |
### Update Tool Capability
Policy And Procedures
|||||
|[Ensure external providers consistently meet interests of the customers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3eabed6d-1912-2d3c-858b-f438d08d0412) |CMA_C1592 - Ensure external providers consistently meet interests of the customers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1592.json) |
-### Processing, Storage, And Service Location
+### Consistent Interests Of Consumers And Providers
-**ID**: FedRAMP High SA-9 (5)
+**ID**: FedRAMP High SA-9 (4)
**Ownership**: Shared
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
Policy And Procedures
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Develop and document a DDoS response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7306e73-0494-83a2-31f5-280e934a8f70) |CMA_0147 - Develop and document a DDoS response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0147.json) | |[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
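The DDoS row above checks that protected virtual networks are associated with a protection plan. A rough, non-authoritative sketch of creating a plan and attaching it to a virtual network, with placeholder resource names:

```bash
# Illustrative only: create a DDoS protection plan and enable it on a virtual network.
az network ddos-protection create --resource-group <resource-group> --name <plan-name>
az network vnet update \
  --resource-group <resource-group> \
  --name <vnet-name> \
  --ddos-protection true \
  --ddos-protection-plan <plan-name>
```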
Policy And Procedures
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
Policy And Procedures
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
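The container registry row above flags registries without any network rules configured. A hedged sketch of restricting access (network rules require the Premium SKU; the registry name and IP range below are placeholders):

```bash
# Illustrative only: block public access by default, then allow a single address range.
az acr update --name <registry-name> --default-action Deny
az acr network-rule add --name <registry-name> --ip-address "203.0.113.0/24"
```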
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate
description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
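For the private-link rows above, the following is a hedged sketch of mapping one service to a private endpoint. Every resource name is a placeholder, the `account` group ID is assumed for Cognitive Services, and older CLI versions may expect `--group-ids` instead of `--group-id`.

```bash
# All names below are placeholders for illustration.
RG="my-rg"
VNET="my-vnet"
SUBNET="pe-subnet"

# Look up the resource ID of the account to be reached over the private endpoint.
ACCOUNT_ID=$(az cognitiveservices account show \
  --name my-cogsvc-account --resource-group "$RG" --query id -o tsv)

# Map a private endpoint to the account so traffic stays on the Azure backbone
# instead of traversing the public internet.
az network private-endpoint create \
  --name my-cogsvc-pe \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --subnet "$SUBNET" \
  --private-connection-resource-id "$ACCOUNT_ID" \
  --group-id account \
  --connection-name my-cogsvc-pe-conn
```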
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |

### Update Tool Capability
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Develop and document a DDoS response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7306e73-0494-83a2-31f5-280e934a8f70) |CMA_0147 - Develop and document a DDoS response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0147.json) | |[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
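To remediate the IP forwarding row above on a specific machine, a minimal sketch (resource group and NIC name are placeholders) that turns the setting off at the network interface, unless the VM is intentionally acting as a network virtual appliance, which is the exception the policy description calls out:

```bash
# Placeholder names; adjust for your environment.
RG="my-rg"
NIC="my-vm-nic"

# Disable IP forwarding on the NIC so the machine no longer accepts traffic
# addressed to other destinations.
az network nic update \
  --name "$NIC" \
  --resource-group "$RG" \
  --ip-forwarding false
```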
initiative definition.
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government)
description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure Event Grid topics should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
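For the container-registry row above, a minimal sketch of restricting network access from the CLI; the registry name and IP range are placeholders, and the registry network rule set requires the Premium SKU.

```bash
# Placeholder registry and resource group names.
ACR="myregistry"
RG="my-rg"

# Deny traffic by default, then allow only a specific public IP range, mirroring
# the "restrict network access" guidance in the row above.
az acr update --name "$ACR" --resource-group "$RG" --default-action Deny
az acr network-rule add --name "$ACR" --resource-group "$RG" --ip-address 203.0.113.0/24
```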
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
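A hedged sketch of remediating the DDoS row above: the plan and virtual network names are placeholders, it assumes the `--ddos-protection`/`--ddos-protection-plan` options available in recent CLI versions, and a DDoS protection plan is billed separately.

```bash
# Placeholder names for illustration.
RG="my-rg"
PLAN="my-ddos-plan"
VNET="my-vnet"

# Create a DDoS protection plan and enable it on the virtual network that hosts
# the application gateway subnet flagged by the policy above.
az network ddos-protection create --name "$PLAN" --resource-group "$RG"
az network vnet update \
  --name "$VNET" \
  --resource-group "$RG" \
  --ddos-protection true \
  --ddos-protection-plan "$PLAN"
```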
### Deploy web application firewall
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | |[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | |[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
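A minimal sketch of bringing a storage account and a Function app in line with the transport rows above (secure transfer, FTPS only, latest TLS); all resource names are placeholders.

```bash
# Placeholder names for illustration.
RG="my-rg"
STORAGE="mystorageacct"
FUNCAPP="my-function-app"

# Require HTTPS for the storage account (secure transfer) ...
az storage account update --name "$STORAGE" --resource-group "$RG" --https-only true

# ... and pin the Function app to FTPS-only deployments and TLS 1.2.
az functionapp config set \
  --name "$FUNCAPP" \
  --resource-group "$RG" \
  --ftps-state FtpsOnly \
  --min-tls-version 1.2
```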
initiative definition.
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | |[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
-|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
-|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
-|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
-|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
-|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
-|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
-|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
-|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
-|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
-|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
-|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockAutomountToken.json) |
-|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
-|[Kubernetes clusters should not use the default namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockDefaultNamespace.json) |
+|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
+|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
+|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
+|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
+|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
+|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
+|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
+|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
+|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
+|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockAutomountToken.json) |
+|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
+|[Kubernetes clusters should not use the default namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockDefaultNamespace.json) |
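The Kubernetes definitions above all evaluate fields of the pod specification. As a rough, illustrative sketch only (the namespace, image, and numeric values below are placeholders, not part of any built-in initiative), a pod that passes most of these checks looks like this:

```bash
# Illustrative only: a pod hardened along the lines these policies evaluate.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  namespace: workloads                  # avoid the default namespace
spec:
  hostPID: false                        # don't share the host process ID namespace
  hostIPC: false                        # don't share the host IPC namespace
  hostNetwork: false                    # don't use the host network
  automountServiceAccountToken: false   # don't automount API credentials
  securityContext:
    runAsUser: 1000                     # approved non-root user ID
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: myregistry.azurecr.io/app:1.0   # image from an allowed registry
    resources:
      limits:                           # CPU and memory limits within the allowed maximums
        cpu: "500m"
        memory: "256Mi"
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                   # no CAP_SYS_ADMIN or other extra capabilities
EOF
```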
### Audit and enforce secure configurations for compute resources
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government)
description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government)
description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
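To try the updated 3.0.0 definition outside of the built-in initiative, a minimal Azure CLI sketch follows; the assignment name, scope, and subscription ID are placeholders, and the CLI is assumed to be signed in to the Azure Government cloud that the portal links above point to.

```bash
# Sketch only: assign the definition at subscription scope and query compliance.
az cloud set --name AzureUSGovernment     # match the Azure Government portal links above

az policy assignment create \
  --name "sql-va-configured" \
  --display-name "Vulnerability assessment should be enabled on your SQL servers" \
  --policy "ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9" \
  --scope "/subscriptions/<subscription-id>"

# List resources the assignment currently reports as non-compliant.
az policy state list \
  --filter "policyAssignmentName eq 'sql-va-configured' and complianceState eq 'NonCompliant'" \
  --query "[].resourceId"
```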
### Ensure that VA setting Send scan reports to is configured for a SQL server
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government)
description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|---|---|---|---|
|[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Deprecated accounts should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
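For the two App Service rows in this block, remediation is a single CLI call each; a minimal sketch, with placeholder app and resource group names:

```bash
# Sketch only: turn off remote debugging and remove a wildcard CORS rule.
az webapp config set --name <app-name> --resource-group <rg> \
  --remote-debugging-enabled false

az webapp cors remove --name <app-name> --resource-group <rg> \
  --allowed-origins "*"
```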
This built-in initiative is deployed as part of the
|---|---|---|---|
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
|[App Service apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
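The transport-security rows in this block (HTTPS only for App Service, SSL enforcement for Azure Database for MySQL) also map to one CLI call each; a hedged sketch with placeholder names, assuming the MySQL single-server command set:

```bash
# Sketch only: serve the app exclusively over HTTPS and require SSL for MySQL clients.
az webapp update --name <app-name> --resource-group <rg> --https-only true

az mysql server update --name <mysql-server> --resource-group <rg> \
  --ssl-enforcement Enabled
```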
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
This built-in initiative is deployed as part of the
|[An activity log alert should exist for specific Administrative operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
|[An activity log alert should exist for specific Policy operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
|[An activity log alert should exist for specific Security operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
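The renamed diagnostic-setting definition above audits whether selected resource types emit their logs and metrics somewhere. As a rough sketch (the resource ID, workspace, and log categories below are placeholders and depend on the resource type), a compliant setting can be created like this:

```bash
# Sketch only: route a resource's logs and metrics to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "send-to-workspace" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category":"AuditEvent","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```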
This built-in initiative is deployed as part of the
|[An activity log alert should exist for specific Administrative operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
|[An activity log alert should exist for specific Policy operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
|[An activity log alert should exist for specific Security operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|---|---|---|---|
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
|[App Service apps should have resource logs enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
|[Virtual machines should be connected to a specified workspace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff47b5582-33ec-4c5c-87c0-b010a6b2e917) |Reports virtual machines as non-compliant if they aren't logging to the Log Analytics workspace specified in the policy/initiative assignment. |AuditIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_WorkspaceMismatch_VM_Audit.json) |
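The Log Analytics rows above check for the legacy agent extension on scale sets and its workspace wiring. A sketch only, assuming the Linux agent; the extension name, publisher, and workspace values are assumptions to verify against your environment:

```bash
# Sketch only: install the legacy Log Analytics (OMS) agent on a Linux scale set.
az vmss extension set \
  --vmss-name <scale-set> --resource-group <rg> \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId": "<workspace-id>"}' \
  --protected-settings '{"workspaceKey": "<workspace-key>"}'
```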
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[An activity log alert should exist for specific Policy operations](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
## Security Assessment
This built-in initiative is deployed as part of the
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
|[Security Center standard pricing tier should be selected](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_Standard_pricing_tier.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls.
This built-in initiative is deployed as part of the
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
|[Security Center standard pricing tier should be selected](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_Standard_pricing_tier.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
## Configuration Management
This built-in initiative is deployed as part of the
|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
|[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
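For the Cognitive Services rows above, one common remediation is to turn off the account's public network access so only private endpoints or explicitly allowed networks can reach it. The following Azure CLI sketch shows that property change; the resource group and account name are placeholders, and `az resource update --set` is used here as a generic way to flip the property rather than the only supported method.

```bash
# Hypothetical resource names; replace with your own before running.
RESOURCE_GROUP="my-rg"
ACCOUNT_NAME="my-cognitive-account"

# Look up the full resource ID of the Cognitive Services account.
ACCOUNT_ID=$(az cognitiveservices account show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$ACCOUNT_NAME" \
  --query id --output tsv)

# Disable public network access so the account is reachable only through
# private endpoints or explicitly allowed networks.
az resource update \
  --ids "$ACCOUNT_ID" \
  --set properties.publicNetworkAccess=Disabled
```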
This built-in initiative is deployed as part of the
|[Security Center standard pricing tier should be selected](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
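The definition ID `ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9` shown above can also be assigned on its own, outside the initiative. A minimal Azure CLI sketch, assuming a subscription-level scope and placeholder names:

```bash
# Placeholder subscription ID; substitute your own scope.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# Assign the built-in definition referenced above
# ("Vulnerability assessment should be enabled on your SQL servers").
az policy assignment create \
  --name "sql-va-enabled" \
  --display-name "Vulnerability assessment should be enabled on your SQL servers" \
  --policy "ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9" \
  --scope "/subscriptions/$SUBSCRIPTION_ID"
```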
### Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.
This built-in initiative is deployed as part of the
|[Security Center standard pricing tier should be selected](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1181c5f-672a-477a-979a-7d58aa086233) |The standard pricing tier enables threat detection for networks and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_Standard_pricing_tier.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
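Once such a policy or initiative is assigned, its evaluation results can be queried from the command line. The sketch below assumes the `az policy state` commands are available in your CLI version and reuses the placeholder assignment name from the earlier example:

```bash
# Summarize compliance for the assignment created earlier (name is a placeholder).
az policy state summarize --policy-assignment "sql-va-enabled"

# List the individual non-compliant resources for closer inspection.
az policy state list \
  --policy-assignment "sql-va-enabled" \
  --filter "complianceState eq 'NonCompliant'" \
  --query "[].resourceId" --output tsv
```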
### Remediate vulnerabilities in accordance with risk assessments.
This built-in initiative is deployed as part of the
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
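For the remediation-focused control above, the same assignment pattern works at a narrower scope. The sketch below assigns the SQL Managed Instance definition listed in this table to a single resource group; the group name is a placeholder:

```bash
# Placeholder resource group that contains the SQL managed instances to audit.
RESOURCE_GROUP="my-data-rg"

# Assign the built-in definition referenced above
# ("Vulnerability assessment should be enabled on SQL Managed Instance")
# at resource-group scope instead of subscription scope.
az policy assignment create \
  --name "sqlmi-va-enabled" \
  --policy "1b7aa243-30e4-4c9e-bca8-d0d3022b634a" \
  --resource-group "$RESOURCE_GROUP"
```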
## Risk Management
This built-in initiative is deployed as part of the
|[App Service apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | |[App Service apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Flow logs should be configured for every network security group](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
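The App Service rows in this table (HTTPS only, latest TLS version) map to two straightforward configuration changes. A minimal Azure CLI sketch with placeholder app and resource group names:

```bash
# Placeholder web app; replace with your own names.
RESOURCE_GROUP="my-web-rg"
APP_NAME="my-web-app"

# Redirect all HTTP traffic to HTTPS, as required by
# "App Service apps should only be accessible over HTTPS".
az webapp update \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --https-only true

# Require the latest TLS version for inbound connections.
az webapp config set \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --min-tls-version 1.2
```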
This built-in initiative is deployed as part of the
|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Flow logs should be configured for every network security group](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
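Similarly, the remote debugging and CORS findings listed above are usually resolved per app. The sketch below assumes placeholder names and an example origin; adjust the allowed origins to the domains your app actually needs:

```bash
# Placeholder web app; replace with your own names.
RESOURCE_GROUP="my-web-rg"
APP_NAME="my-web-app"

# Turn off remote debugging, which otherwise requires extra inbound ports.
az webapp config set \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --remote-debugging-enabled false

# Replace the wildcard CORS rule with an explicit origin.
az webapp cors remove \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --allowed-origins "*"
az webapp cors add \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --allowed-origins "https://contoso.example"
```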
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) | |[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
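For the Cognitive Services network-access rows above, access can be narrowed to specific networks before (or instead of) moving fully to private endpoints. The following sketch assumes the `network-rule` subcommands are available for your account type and uses placeholder names and an example IP range; the default-action change is done through a generic property update rather than a dedicated flag:

```bash
# Hypothetical resource names; replace with your own before running.
RESOURCE_GROUP="my-rg"
ACCOUNT_NAME="my-cognitive-account"

# Allow a specific client IP range through the account firewall.
az cognitiveservices account network-rule add \
  --resource-group "$RESOURCE_GROUP" \
  --name "$ACCOUNT_NAME" \
  --ip-address "203.0.113.0/24"

# Deny any traffic that doesn't match an explicit network rule.
ACCOUNT_ID=$(az cognitiveservices account show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$ACCOUNT_NAME" \
  --query id --output tsv)
az resource update \
  --ids "$ACCOUNT_ID" \
  --set properties.networkAcls.defaultAction=Deny
```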
initiative definition.
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | |[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
-|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
-|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
-|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
-|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
-|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
-|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
-|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
-|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
-|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
-|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
-|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
+|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
+|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
+|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
+|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
+|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
+|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
+|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
+|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
+|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[1.3.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | |[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
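The Kubernetes definitions listed above are evaluated inside the cluster by the Azure Policy add-on for AKS (built on OPA Gatekeeper), so the add-on has to be enabled before these rows can report compliance. A minimal sketch with placeholder cluster names:

```bash
# Placeholder AKS cluster; replace with your own names.
RESOURCE_GROUP="my-aks-rg"
CLUSTER_NAME="my-aks-cluster"

# Enable the Azure Policy add-on so the Kubernetes definitions listed above
# (privileged containers, allowed capabilities, read-only root file system, ...)
# can be evaluated in-cluster.
az aks enable-addons \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --addons azure-policy
```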
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |

## System And Communications Protection
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
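These rows only describe the built-in definitions; to enforce one, you assign it at a scope and pick one of the listed effects. Below is a minimal sketch of such an assignment body, assuming the definition exposes its effect through a parameter named `effect` (confirm against the linked JSON before relying on it), using the Cognitive Services public-network-access definition shown in the diff above.

```python
import json

# Definition ID copied from the table row above (0725b4dd-...).
POLICY_DEFINITION_ID = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "0725b4dd-7e76-479c-a735-68e7ee23d5ca"
)

# Hypothetical assignment body; it would be sent with
# PUT {scope}/providers/Microsoft.Authorization/policyAssignments/{name}.
assignment_body = {
    "properties": {
        "displayName": "Deny public network access on Cognitive Services",
        "policyDefinitionId": POLICY_DEFINITION_ID,
        # Assumption: the effect parameter is named "effect"; the allowed
        # values (Audit, Deny, Disabled) come from the table row.
        "parameters": {"effect": {"value": "Deny"}},
    }
}

print(json.dumps(assignment_body, indent=2))
```

The same shape applies to the other rows; only the definition ID and the allowed effect values change.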
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
|[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
|[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
|[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
|[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
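Every version number in these tables links to the definition's JSON in the Azure/azure-policy repository. The following is a minimal sketch, assuming those files keep the usual built-in layout with `displayName` and `metadata.version` under a top-level `properties` object, that cross-checks a listed version against the file a row links to (here the Kubernetes HTTPS-only definition from the row above):

```python
import json
import urllib.request

# Raw equivalent of the GitHub "blob" link in the table row above.
RAW_URL = (
    "https://raw.githubusercontent.com/Azure/azure-policy/master/"
    "built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/"
    "IngressHttpsOnly.json"
)

with urllib.request.urlopen(RAW_URL) as response:
    definition = json.load(response)

# Assumption: built-in definition files expose these fields under "properties".
props = definition.get("properties", {})
print(props.get("displayName"))
print(props.get("metadata", {}).get("version"))  # compare with the version shown in the table
```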
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government)
description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
-|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
-|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
-|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
-|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
-|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
-|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
-|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
-|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
-|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
-|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
-|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
+|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
+|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
+|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
+|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
+|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
+|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
+|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
+|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
+|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[1.3.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |
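The Kubernetes admission-control and compute-baseline definitions above are assigned like any other Azure Policy definition. As a rough, illustrative sketch (the subscription ID, resource group, assignment name, and the choice of the Deny effect are assumptions, not part of the initiative), one of the Kubernetes definitions can be assigned at resource-group scope with the Azure CLI:

```bash
# Hypothetical scope values for illustration only.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RG="contoso-aks-rg"

# Assign "Kubernetes cluster should not allow privileged containers"
# (definition 95edb821-ddaf-4404-9732-666045e056b4 from the table above).
# Allowed effects per the table: audit, Audit, deny, Deny, disabled, Disabled.
az policy assignment create \
  --name "deny-privileged-containers" \
  --display-name "Kubernetes cluster should not allow privileged containers" \
  --policy "95edb821-ddaf-4404-9732-666045e056b4" \
  --scope "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG}" \
  --params '{ "effect": { "value": "deny" } }'
```

Compliance results for the assignment can later be queried with `az policy state list --policy-assignment deny-privileged-containers`.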
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |
## System And Communications Protection
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
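For an Application Gateway flagged by the "Web Application Firewall (WAF) should be enabled for Application Gateway" definition above, enabling the WAF configuration on a WAF-SKU gateway is one possible remediation. A minimal sketch, assuming a hypothetical gateway name and resource group and a gateway already running a WAF_v1 or WAF_v2 SKU:

```bash
# Illustrative names; substitute your own gateway and resource group.
az network application-gateway waf-config set \
  --gateway-name "contoso-appgw" \
  --resource-group "contoso-network-rg" \
  --enabled true \
  --firewall-mode Prevention \
  --rule-set-type OWASP \
  --rule-set-version 3.1
```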
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
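To bring a Cognitive Services account in line with the network-access definitions above, one approach is to restrict inbound traffic to a specific virtual-network subnet and turn off public network access. A minimal sketch with hypothetical account, resource-group, and subnet names; the generic `az resource update` call sets the account's `publicNetworkAccess` property:

```bash
# Hypothetical names for illustration only.
ACCOUNT="contoso-cogsvc"
RG="contoso-ai-rg"
SUBNET_ID="/subscriptions/<sub-id>/resourceGroups/${RG}/providers/Microsoft.Network/virtualNetworks/contoso-vnet/subnets/ai-subnet"

# Allow traffic only from the chosen subnet (restrict network access).
az cognitiveservices account network-rule add \
  --name "$ACCOUNT" --resource-group "$RG" \
  --subnet "$SUBNET_ID"

# Disable public network access on the account resource.
ACCOUNT_ID=$(az cognitiveservices account show --name "$ACCOUNT" --resource-group "$RG" --query id -o tsv)
az resource update --ids "$ACCOUNT_ID" --set properties.publicNetworkAccess=Disabled
```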
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
|[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
|[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
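The Function Apps definitions above (HTTPS only, FTPS only, latest TLS) can all be satisfied from the CLI. A minimal sketch with hypothetical app and resource-group names:

```bash
# Illustrative names; adjust to your environment.
APP="contoso-func"
RG="contoso-app-rg"

# Enforce HTTPS-only traffic on the function app.
az functionapp update --name "$APP" --resource-group "$RG" --set httpsOnly=true

# Require FTPS for deployments and pin the minimum TLS version.
az functionapp config set --name "$APP" --resource-group "$RG" \
  --ftps-state FtpsOnly \
  --min-tls-version 1.2
```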
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
|[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
|[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS either due to security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) |
|[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
|[Windows web servers should be configured to use secure communication protocols](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) |
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government)
description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
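The "Azure DDoS Protection Standard should be enabled" definition above audits virtual networks that back an Application Gateway with a public IP. As a hedged sketch (the plan, VNet, resource-group, and region names are placeholders, not values from the initiative), a DDoS protection plan can be created and attached to a virtual network like this:

```bash
# Hypothetical names; usgovvirginia is used only because this initiative targets Azure Government.
RG="contoso-network-rg"

az network ddos-protection create \
  --name "contoso-ddos-plan" \
  --resource-group "$RG" \
  --location "usgovvirginia"

az network vnet update \
  --name "contoso-vnet" \
  --resource-group "$RG" \
  --ddos-protection true \
  --ddos-protection-plan "contoso-ddos-plan"
```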
### Boundary Protection (SC-7)
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
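The diagnostic-setting and auditing definitions above are satisfied by routing resource logs somewhere durable, such as a Log Analytics workspace. A minimal sketch (the Key Vault resource, workspace, and log category are illustrative assumptions; use categories that exist for your resource type, since the diagnostic-setting definition only evaluates the resource types selected at assignment time):

```bash
# Hypothetical resource and workspace IDs.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.KeyVault/vaults/contoso-kv"
WORKSPACE_ID="/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.OperationalInsights/workspaces/contoso-law"

# Send the resource's audit logs and metrics to Log Analytics.
az monitor diagnostic-settings create \
  --name "send-to-log-analytics" \
  --resource "$RESOURCE_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category": "AuditEvent", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```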
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government)
description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Dependency agent should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11ac78e3-31bc-4f0c-8434-37ab963cea07) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_Audit.json) |
|[Dependency agent should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe2dd799a-a932-4e9d-ac17-d473bc3c6c10) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_VMSS_Audit.json) |
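The Log Analytics extension and Dependency agent definitions above report a VM as non-compliant when the corresponding extension is missing. As a hedged sketch for a Linux VM (the VM, resource-group, and workspace values are placeholders, and the extension and publisher names below are the commonly used Linux ones rather than anything taken from the initiative; Windows uses different extension names):

```bash
# Hypothetical VM and workspace values.
RG="contoso-vm-rg"
VM="contoso-linux-vm"

# Log Analytics agent extension.
az vm extension set \
  --resource-group "$RG" --vm-name "$VM" \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId": "<workspace-id>"}' \
  --protected-settings '{"workspaceKey": "<workspace-key>"}'

# Dependency agent extension.
az vm extension set \
  --resource-group "$RG" --vm-name "$VM" \
  --name DependencyAgentLinux \
  --publisher Microsoft.Azure.Monitoring.DependencyAgent
```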
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Dependency agent should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11ac78e3-31bc-4f0c-8434-37ab963cea07) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_Audit.json) |
|[Dependency agent should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe2dd799a-a932-4e9d-ac17-d473bc3c6c10) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_VMSS_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Dependency agent should be enabled for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11ac78e3-31bc-4f0c-8434-37ab963cea07) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_Audit.json) |
|[Dependency agent should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe2dd799a-a932-4e9d-ac17-d473bc3c6c10) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_VMSS_Audit.json) |
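The revised description for the diagnostic-settings audit policy above asks you to select only resource types that actually support diagnostic settings. As a minimal sketch (the resource ID below is a placeholder), you can check which log and metric categories a given resource exposes with the Azure CLI before assigning the policy:

```bash
# List the diagnostic log/metric categories supported by a resource (placeholder resource ID).
az monitor diagnostic-settings categories list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```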
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government)
description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
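The updated description for "Cognitive Services accounts should disable public network access" centers on turning off the public network access property. A rough sketch of that change using the generic `az resource update` command, with a placeholder resource ID (the service-specific CLI may expose its own flag for this):

```bash
# Disable public network access on a Cognitive Services account (placeholder resource ID).
az resource update \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<account-name>" \
  --set properties.publicNetworkAccess=Disabled
```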
initiative definition.
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) |
-|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
-|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[5.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
-|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
-|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
-|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
-|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
-|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
-|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
-|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
-|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
-|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
+|[Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe345eecc-fa47-480f-9e88-67dcc122b164) |Enforce container CPU and memory resource limits to prevent resource exhaustion attacks in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerResourceLimits.json) |
+|[Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8) |Block pod containers from sharing the host process ID namespace and host IPC namespace in a Kubernetes cluster. This recommendation is part of CIS 5.2.2 and CIS 5.2.3 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[6.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockHostNamespace.json) |
+|[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) |
+|[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) |
+|[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) |
+|[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
+|[Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82985f06-dc18-4a48-bc1c-b9f4f0098cfe) |Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. This recommendation is part of CIS 5.2.4 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/HostNetworkPorts.json) |
+|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ServiceAllowedPorts.json) |
+|[Kubernetes cluster should not allow privileged containers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[10.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilege.json) |
+|[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) |
|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[1.3.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Microsoft Managed Control 1208 - Configuration Settings](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5ea87673-d06b-456f-a324-8abcee5c159f) |Microsoft implements this Configuration Management control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1208.json) |
|[Microsoft Managed Control 1209 - Configuration Settings](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fce669c31-9103-4552-ae9c-cdef4e03580d) |Microsoft implements this Configuration Management control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1209.json) |
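The Kubernetes definitions listed above were re-versioned, but the definition IDs shown in the diff are unchanged. As a hedged example (the assignment name and scope are placeholders), one of them can be assigned to a resource group with the Azure CLI by using the definition ID from the table:

```bash
# Assign "Kubernetes cluster should not allow privileged containers" (definition ID taken from the table above).
az policy assignment create \
  --name "deny-privileged-containers" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>" \
  --policy "95edb821-ddaf-4404-9732-666045e056b4"
```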
initiative definition.
|||||
|[Microsoft Managed Control 1279 - Telecommunications Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d00bcd6-963d-4c02-ad8e-b45fa50bf3b0) |Microsoft implements this Contingency Planning control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1279.json) |
-### Telecommunications Services
+### Priority of Service Provisions
-**ID**: NIST SP 800-53 Rev. 5 CP-8
-**Ownership**: Microsoft
+**ID**: NIST SP 800-53 Rev. 5 CP-8 (1)
+**Ownership**: Shared
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
initiative definition.
|||||
|[Microsoft Managed Control 1475 - Emergency Lighting](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34a63848-30cf-4081-937e-ce1a1c885501) |Microsoft implements this Physical and Environmental Protection control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1475.json) |
-### Emergency Lighting
+### Fire Protection
-**ID**: NIST SP 800-53 Rev. 5 PE-12
+**ID**: NIST SP 800-53 Rev. 5 PE-13
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |

### Update Vulnerabilities to Be Scanned
initiative definition.
|||||
|[Microsoft Managed Control 1593 - External Information System Services \| Processing, Storage, And Service Location](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cd0a426-b5f5-4fe0-9539-a6043cdbc6fa) |Microsoft implements this System and Services Acquisition control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1593.json) |
-### Developer Configuration Management
+### Processing, Storage, and Service Location
-**ID**: NIST SP 800-53 Rev. 5 SA-10
+**ID**: NIST SP 800-53 Rev. 5 SA-9 (5)
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Microsoft Managed Control 1620 - Denial Of Service Protection](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd17c826b-1dec-43e1-a984-7b71c446649c) |Microsoft implements this System and Communications Protection control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1620.json) |
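To see which resources an audit definition such as the DDoS Protection policy above currently flags, the compliance records can be queried by definition ID. A minimal sketch, assuming the Azure Policy insights commands are available in your Azure CLI environment (the definition ID comes from the table above):

```bash
# List non-compliant resources for the DDoS Protection audit definition.
az policy state list \
  --filter "policyDefinitionName eq 'a7aca53f-2ed4-4466-a25e-0b45ade68efd' and complianceState eq 'NonCompliant'"
```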
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
|[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) |
|[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | |[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | |[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Microsoft Managed Control 1640 - Transmission Confidentiality And Integrity](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05a289ce-6a20-4b75-a0f3-dc8601b6acd0) |Microsoft implements this System and Communications Protection control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1640.json) | |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
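The Function app and storage rows above audit transport-level settings: HTTPS-only traffic, FTPS-only deployments, minimum TLS version, and secure transfer. A minimal remediation sketch with the Azure CLI follows, assuming placeholder resource names and that the `--ftps-state` and `--min-tls-version` flags are available in your CLI version:

```bash
# Sketch only: placeholder names; verify flag availability in your Azure CLI version.
# Enforce HTTPS-only traffic, FTPS-only deployments, and TLS 1.2 on a function app.
az functionapp update --name <function-app> --resource-group <resource-group> --set httpsOnly=true
az functionapp config set --name <function-app> --resource-group <resource-group> \
  --ftps-state FtpsOnly --min-tls-version 1.2

# Require secure transfer (HTTPS only) on a storage account.
az storage account update --name <storage-account> --resource-group <resource-group> --https-only true
```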
initiative definition.
|[Function apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) | |[Function apps should require FTPS only](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) | |[Function apps should use the latest TLS version](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions are released for TLS to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
-|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[8.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
+|[Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d) |Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for Azure Arc enabled Kubernetes. For more info, visit [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc) |audit, Audit, deny, Deny, disabled, Disabled |[9.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/IngressHttpsOnly.json) |
|[Microsoft Managed Control 1641 - Transmission Confidentiality And Integrity \| Cryptographic Or Alternate Physical Protection](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd39d4f68-7346-4133-8841-15318a714a24) |Microsoft implements this System and Communications Protection control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Regulatory%20Compliance/MicrosoftManagedControl1641.json) | |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2
description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
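Because each HITRUST control in this initiative maps to one or more policy definitions, overall compliance for an assignment of the initiative can be inspected with the policy state commands. A sketch follows, assuming a hypothetical assignment name and that the `az policy state` command group is available in your environment:

```bash
# Sketch only: <hitrust-assignment-name> is a placeholder for your initiative assignment.
# Summarize compliance results for the assignment.
az policy state summarize --policy-assignment <hitrust-assignment-name>

# List resources that are currently non-compliant for the same assignment.
az policy state list --policy-assignment <hitrust-assignment-name> \
  --filter "complianceState eq 'NonCompliant'"
```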
## 01 Information Protection Program
-### 0.01 Information Security Management Program
+### 0101.00a1Organizational.123-00.a 0.01 Information Security Management Program
**ID**: 0101.00a1Organizational.123-00.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update the information security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fced291b8-1d3d-7e27-40cf-829e9dd523c8) |CMA_C1504 - Review and update the information security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1504.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 0.01 Information Security Management Program
+### 0102.00a2Organizational.123-00.a 0.01 Information Security Management Program
**ID**: 0102.00a2Organizational.123-00.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update the information security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fced291b8-1d3d-7e27-40cf-829e9dd523c8) |CMA_C1504 - Review and update the information security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1504.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 0.01 Information Security Management Program
+### 0103.00a3Organizational.1234567-00.a 0.01 Information Security Management Program
**ID**: 0103.00a3Organizational.1234567-00.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish security requirements for the manufacturing of connected devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafbecd30-37ee-a27b-8e09-6ac49951a0ee) |CMA_0279 - Establish security requirements for the manufacturing of connected devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0279.json) | |[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) |
-### 02.01 Prior to Employment
+### 0104.02a1Organizational.12-02.a 02.01 Prior to Employment
**ID**: 0104.02a1Organizational.12-02.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.01 Prior to Employment
+### 0105.02a2Organizational.1-02.a 02.01 Prior to Employment
**ID**: 0105.02a2Organizational.1-02.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) | |[Rescreen individuals at a defined frequency](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6aeb800-0b19-944d-92dc-59b893722329) |CMA_C1512 - Rescreen individuals at a defined frequency |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1512.json) |
-### 02.01 Prior to Employment
+### 0106.02a2Organizational.23-02.a 02.01 Prior to Employment
**ID**: 0106.02a2Organizational.23-02.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) | |[Rescreen individuals at a defined frequency](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6aeb800-0b19-944d-92dc-59b893722329) |CMA_C1512 - Rescreen individuals at a defined frequency |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1512.json) |
-### 02.03 During Employment
+### 0107.02d1Organizational.1-02.d 02.03 During Employment
**ID**: 0107.02d1Organizational.1-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Establish information security workforce development and improvement program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb544f797-a73b-1be3-6d01-6b1a085376bc) |CMA_C1752 - Establish information security workforce development and improvement program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1752.json) |
-### 02.03 During Employment
+### 0108.02d1Organizational.23-02.d 02.03 During Employment
**ID**: 0108.02d1Organizational.23-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain training records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3153d9c0-2584-14d3-362d-578b01358aeb) |CMA_0456 - Retain training records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0456.json) | |[Review security testing, training, and monitoring plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3b3cc61-9c70-5d78-7f12-1aefcc477db7) |CMA_C1754 - Review security testing, training, and monitoring plans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1754.json) |
-### 02.03 During Employment
+### 0109.02d1Organizational.4-02.d 02.03 During Employment
**ID**: 0109.02d1Organizational.4-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.03 During Employment
+### 0110.02d2Organizational.1-02.d 02.03 During Employment
**ID**: 0110.02d2Organizational.1-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Appoint a senior information security officer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6cf9f2c-5fd8-3f16-a1f1-f0b69c904928) |CMA_C1733 - Appoint a senior information security officer |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1733.json) | |[Establish information security workforce development and improvement program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb544f797-a73b-1be3-6d01-6b1a085376bc) |CMA_C1752 - Establish information security workforce development and improvement program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1752.json) |
-### 02.03 During Employment
+### 0111.02d2Organizational.2-02.d 02.03 During Employment
**ID**: 0111.02d2Organizational.2-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require notification of third-party personnel transfer or termination](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafd5d60a-48d2-8073-1ec2-6687e22f2ddd) |CMA_C1532 - Require notification of third-party personnel transfer or termination |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1532.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.01 Internal Organization
+### 01110.05a1Organizational.5-05.a 05.01 Internal Organization
**ID**: 01110.05a1Organizational.5-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish third-party personnel security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3881168c-5d38-6f04-61cc-b5d87b2c4c58) |CMA_C1529 - Establish third-party personnel security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1529.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.01 Internal Organization
+### 01111.05a2Organizational.5-05.a 05.01 Internal Organization
**ID**: 01111.05a2Organizational.5-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Appoint a senior information security officer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6cf9f2c-5fd8-3f16-a1f1-f0b69c904928) |CMA_C1733 - Appoint a senior information security officer |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1733.json) |
-### 02.03 During Employment
+### 0112.02d2Organizational.3-02.d 02.03 During Employment
**ID**: 0112.02d2Organizational.3-02.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require compliance with intellectual property rights](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F725164e5-3b21-1ec2-7e42-14f077862841) |CMA_0432 - Require compliance with intellectual property rights |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0432.json) | |[Track software license usage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77cc89bb-774f-48d7-8a84-fb8c322c3000) |CMA_C1235 - Track software license usage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1235.json) |
-### 04.01 Information Security Policy
+### 0113.04a1Organizational.123-04.a 04.01 Information Security Policy
**ID**: 0113.04a1Organizational.123-04.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect the information security program plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e7a98c9-219f-0d58-38dc-d69038224442) |CMA_C1732 - Protect the information security program plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1732.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 04.01 Information Security Policy
+### 0114.04b1Organizational.1-04.b 04.01 Information Security Policy
**ID**: 0114.04b1Organizational.1-04.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update system maintenance policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2067b904-9552-3259-0cdd-84468e284b7c) |CMA_C1395 - Review and update system maintenance policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1395.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 04.01 Information Security Policy
+### 0115.04b2Organizational.123-04.b 04.01 Information Security Policy
**ID**: 0115.04b2Organizational.123-04.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review security assessment and authorization policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4493012-908c-5f48-a468-1e243be884ce) |CMA_C1143 - Review security assessment and authorization policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1143.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 04.01 Information Security Policy
+### 0116.04b3Organizational.1-04.b 04.01 Information Security Policy
**ID**: 0116.04b3Organizational.1-04.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F28aa060e-25c7-6121-05d8-a846f11433df) |CMA_C1491 - Review and update planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1491.json) | |[Review and update system maintenance policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2067b904-9552-3259-0cdd-84468e284b7c) |CMA_C1395 - Review and update system maintenance policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1395.json) |
-### 05.01 Internal Organization
+### 0117.05a1Organizational.1-05.a 05.01 Internal Organization
**ID**: 0117.05a1Organizational.1-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Appoint a senior information security officer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6cf9f2c-5fd8-3f16-a1f1-f0b69c904928) |CMA_C1733 - Appoint a senior information security officer |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1733.json) |
-### 05.01 Internal Organization
+### 0118.05a1Organizational.2-05.a 05.01 Internal Organization
**ID**: 0118.05a1Organizational.2-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 05.01 Internal Organization
+### 0119.05a1Organizational.3-05.a 05.01 Internal Organization
**ID**: 0119.05a1Organizational.3-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish security requirements for the manufacturing of connected devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafbecd30-37ee-a27b-8e09-6ac49951a0ee) |CMA_0279 - Establish security requirements for the manufacturing of connected devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0279.json) | |[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) |
-### 05.01 Internal Organization
+### 0120.05a1Organizational.4-05.a 05.01 Internal Organization
**ID**: 0120.05a1Organizational.4-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Govern the allocation of resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33d34fac-56a8-1c0f-0636-3ed94892a709) |CMA_0293 - Govern the allocation of resources |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0293.json) | |[Secure commitment from leadership](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70057208-70cc-7b31-3c3a-121af6bc1966) |CMA_0489 - Secure commitment from leadership |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0489.json) |
-### 05.01 Internal Organization
+### 0121.05a2Organizational.12-05.a 05.01 Internal Organization
**ID**: 0121.05a2Organizational.12-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement the risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6fe3856-4635-36b6-983c-070da12a953b) |CMA_C1744 - Implement the risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1744.json) | |[Review and update risk assessment policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F20012034-96f0-85c2-4a86-1ae1eb457802) |CMA_C1537 - Review and update risk assessment policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1537.json) |
-### 05.01 Internal Organization
+### 0122.05a2Organizational.3-05.a 05.01 Internal Organization
**ID**: 0122.05a2Organizational.3-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training before providing access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b05dca2-25ec-9335-495c-29155f785082) |CMA_0418 - Provide security training before providing access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0418.json) | |[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) |
-### 05.01 Internal Organization
+### 0123.05a2Organizational.4-05.a 05.01 Internal Organization
**ID**: 0123.05a2Organizational.4-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish a privacy program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F39eb03c1-97cc-11ab-0960-6209ed2869f7) |CMA_0257 - Establish a privacy program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0257.json) | |[Manage contacts for authorities and special interest groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5269d7e4-3768-501d-7e46-66c56c15622c) |CMA_0359 - Manage contacts for authorities and special interest groups |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0359.json) |
-### 05.01 Internal Organization
+### 0124.05a3Organizational.1-05.a 05.01 Internal Organization
**ID**: 0124.05a3Organizational.1-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Appoint a senior information security officer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6cf9f2c-5fd8-3f16-a1f1-f0b69c904928) |CMA_C1733 - Appoint a senior information security officer |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1733.json) | |[Document security and privacy training activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F524e7136-9f6a-75ba-9089-501018151346) |CMA_0198 - Document security and privacy training activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0198.json) |
-### 05.01 Internal Organization
+### 0125.05a3Organizational.2-05.a 05.01 Internal Organization
**ID**: 0125.05a3Organizational.2-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent assessors to conduct security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb65c5d8e-9043-9612-2c17-65f231d763bb) |CMA_C1148 - Employ independent assessors to conduct security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1148.json) | |[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) |
-### 02.03 During Employment
+### 0135.02f1Organizational.56-02.f 02.03 During Employment
**ID**: 0135.02f1Organizational.56-02.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Notify personnel upon sanctions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6228396e-2ace-7ca5-3247-45767dbf52f4) |CMA_0380 - Notify personnel upon sanctions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0380.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 02.01 Prior to Employment
+### 0137.02a1Organizational.3-02.a 02.01 Prior to Employment
**ID**: 0137.02a1Organizational.3-02.a **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Review and update personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe5c5fc78-4aa5-3d6b-81bc-5fcc88b318e9) |CMA_C1507 - Review and update personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1507.json) |
-### 04.01 Information Security Policy
+### 0162.04b1Organizational.2-04.b 04.01 Information Security Policy
**ID**: 0162.04b1Organizational.2-04.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) | |[Review and update information integrity policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6bededc0-2985-54d5-4158-eb8bad8070a0) |CMA_C1667 - Review and update information integrity policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1667.json) |
-### 05.01 Internal Organization
+### 0165.05a3Organizational.3-05.a 05.01 Internal Organization
**ID**: 0165.05a3Organizational.3-05.a **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Review and update planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F28aa060e-25c7-6121-05d8-a846f11433df) |CMA_C1491 - Review and update planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1491.json) |
-### 05.01 Internal Organization
+### 0177.05h1Organizational.12-05.h 05.01 Internal Organization
**ID**: 0177.05h1Organizational.12-05.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent assessors to conduct security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb65c5d8e-9043-9612-2c17-65f231d763bb) |CMA_C1148 - Employ independent assessors to conduct security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1148.json) | |[Select additional testing for security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff78fc35e-1268-0bca-a798-afcba9d2330a) |CMA_C1149 - Select additional testing for security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1149.json) |
-### 05.01 Internal Organization
+### 0178.05h1Organizational.3-05.h 05.01 Internal Organization
**ID**: 0178.05h1Organizational.3-05.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Deliver security assessment results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e49107c-3338-40d1-02aa-d524178a2afe) |CMA_C1147 - Deliver security assessment results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1147.json) | |[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) |
-### 05.01 Internal Organization
+### 0179.05h1Organizational.4-05.h 05.01 Internal Organization
**ID**: 0179.05h1Organizational.4-05.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) | |[Implement plans of action and milestones for security program process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd93fe1be-13e4-421d-9c21-3158e2fa2667) |CMA_C1737 - Implement plans of action and milestones for security program process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1737.json) |
-### 05.01 Internal Organization
+### 0180.05h2Organizational.1-05.h 05.01 Internal Organization
**ID**: 0180.05h2Organizational.1-05.h **Ownership**: Shared
This built-in initiative is deployed as part of the
## 02 Endpoint Protection
-### 09.04 Protection Against Malicious and Mobile Code
+### 0201.09j1Organizational.124-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0201.09j1Organizational.124-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
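The rows above rely on Azure Security Center recommendations to surface machines with missing system updates. As a hedged sketch, patch status for a single VM could also be assessed directly with the Azure CLI, using a placeholder VM name and subject to command availability in your CLI version:

```bash
# Sketch only: placeholder names; requires a supported Azure CLI version.
# Trigger an on-demand patch assessment and report missing updates for one VM.
az vm assess-patches --resource-group <resource-group> --name <vm-name>
```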
-### 09.04 Protection Against Malicious and Mobile Code
+### 0202.09j1Organizational.3-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0202.09j1Organizational.3-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) | |[Specify permitted actions associated with customer audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3eecf628-a1c8-1b48-1b5c-7ca781e97970) |CMA_C1122 - Specify permitted actions associated with customer audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1122.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0204.09j2Organizational.1-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0204.09j2Organizational.1-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) | |[Verify security functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fece8bb17-4080-5127-915f-dc7267ee8549) |CMA_C1708 - Verify security functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1708.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0205.09j2Organizational.2-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0205.09j2Organizational.2-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0206.09j2Organizational.34-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0206.09j2Organizational.34-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0207.09j2Organizational.56-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0207.09j2Organizational.56-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0208.09j2Organizational.7-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0208.09j2Organizational.7-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separate user and information system management functionality](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8a703eb5-4e53-701b-67e4-05ba2f7930c8) |CMA_0493 - Separate user and information system management functionality |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0493.json) | |[Use dedicated machines for administrative tasks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8972f60-8d77-1cb8-686f-9c9f4cdd8a59) |CMA_0527 - Use dedicated machines for administrative tasks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0527.json) |
-### 09.06 Network Security Management
+### 0209.09m3Organizational.7-09.m 09.06 Network Security Management
**ID**: 0209.09m3Organizational.7-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Train staff on PII sharing and its consequences](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8019d788-713d-90a1-5570-dac5052f517d) |CMA_C1871 - Train staff on PII sharing and its consequences |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1871.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0214.09j1Organizational.6-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0214.09j1Organizational.6-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0215.09j2Organizational.8-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0215.09j2Organizational.8-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0216.09j2Organizational.9-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0216.09j2Organizational.9-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) | |[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0217.09j2Organizational.10-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0217.09j2Organizational.10-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0219.09j2Organizational.12-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 0219.09j2Organizational.12-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0225.09k1Organizational.1-09.k 09.04 Protection Against Malicious and Mobile Code
**ID**: 0225.09k1Organizational.1-09.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0226.09k1Organizational.2-09.k 09.04 Protection Against Malicious and Mobile Code
**ID**: 0226.09k1Organizational.2-09.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0227.09k2Organizational.12-09.k 09.04 Protection Against Malicious and Mobile Code
**ID**: 0227.09k2Organizational.12-09.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 0228.09k2Organizational.3-09.k 09.04 Protection Against Malicious and Mobile Code
**ID**: 0228.09k2Organizational.3-09.k **Ownership**: Shared
This built-in initiative is deployed as part of the
## 03 Portable Media Security
-### 09.07 Media Handling
+### 0301.09o1Organizational.123-09.o 09.07 Media Handling
**ID**: 0301.09o1Organizational.123-09.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update media protection policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4e19d22-8c0e-7cad-3219-c84c62dc250f) |CMA_C1427 - Review and update media protection policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1427.json) | |[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
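For the transparent data encryption audit above, remediation is usually per database when the default service-managed key is acceptable. A minimal sketch, assuming the Azure CLI and placeholder names (`rg-compliance`, `sql-compliance`, `db-app01`):

```bash
# Turn on transparent data encryption for a single database using the service-managed key.
az sql db tde set \
  --resource-group rg-compliance \
  --server sql-compliance \
  --database db-app01 \
  --status Enabled
```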
-### 09.07 Media Handling
+### 0302.09o2Organizational.1-09.o 09.07 Media Handling
**ID**: 0302.09o2Organizational.1-09.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Restrict media use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6122970b-8d4a-7811-0278-4c6c68f61e4f) |CMA_0450 - Restrict media use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0450.json) | |[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
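The description above names encryption at host as one way to satisfy the temp-disk recommendation. A minimal sketch of enabling it on an existing VM (a hypothetical example only, assuming the Azure CLI, a subscription where the `EncryptionAtHost` feature can be registered, and placeholder names `rg-compliance`/`vm-app01`):

```bash
# One-time feature registration for the subscription (propagation can take several minutes).
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
az provider register --namespace Microsoft.Compute

# Enable host-based encryption on an existing VM; the VM must be deallocated first.
az vm deallocate --resource-group rg-compliance --name vm-app01
az vm update --resource-group rg-compliance --name vm-app01 \
  --set securityProfile.encryptionAtHost=true
az vm start --resource-group rg-compliance --name vm-app01
```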
-### 09.07 Media Handling
+### 0303.09o2Organizational.2-09.o 09.07 Media Handling
**ID**: 0303.09o2Organizational.2-09.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage the transportation of assets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ac81669-00e2-9790-8648-71bc11bc91eb) |CMA_0370 - Manage the transportation of assets |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0370.json) | |[Restrict media use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6122970b-8d4a-7811-0278-4c6c68f61e4f) |CMA_0450 - Restrict media use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0450.json) |
-### 09.07 Media Handling
+### 0304.09o3Organizational.1-09.o 09.07 Media Handling
**ID**: 0304.09o3Organizational.1-09.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) | |[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
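For the two customer-managed key recommendations above, a minimal sketch of pointing a logical SQL server's TDE protector at a Key Vault key (assuming the Azure CLI, an existing server `sql-compliance` in `rg-compliance`, and a Key Vault key URI; all names and the key identifier are placeholders):

```bash
# Register the Key Vault key with the logical server, then make it the TDE protector.
keyId="https://kv-compliance.vault.azure.net/keys/tde-key/0123456789abcdef0123456789abcdef"

az sql server key create \
  --resource-group rg-compliance --server sql-compliance --kid "$keyId"

az sql server tde-key set \
  --resource-group rg-compliance --server sql-compliance \
  --server-key-type AzureKeyVault --kid "$keyId"
```

In practice the server's identity also needs get, wrapKey, and unwrapKey permissions on the vault, and SQL managed instances have an analogous `az sql mi tde-key set` command.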
-### 09.07 Media Handling
+### 0305.09q1Organizational.12-09.q 09.07 Media Handling
**ID**: 0305.09q1Organizational.12-09.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage the transportation of assets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ac81669-00e2-9790-8648-71bc11bc91eb) |CMA_0370 - Manage the transportation of assets |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0370.json) | |[Restrict media use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6122970b-8d4a-7811-0278-4c6c68f61e4f) |CMA_0450 - Restrict media use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0450.json) |
-### 09.07 Media Handling
+### 0306.09q1Organizational.3-09.q 09.07 Media Handling
**ID**: 0306.09q1Organizational.3-09.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) | |[Implement training for protecting authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4b00788-7e1c-33ec-0418-d048508e095b) |CMA_0329 - Implement training for protecting authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0329.json) |
-### 09.07 Media Handling
+### 0307.09q2Organizational.12-09.q 09.07 Media Handling
**ID**: 0307.09q2Organizational.12-09.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) | |[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
-### 09.07 Media Handling
+### 0308.09q3Organizational.1-09.q 09.07 Media Handling
**ID**: 0308.09q3Organizational.1-09.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) | |[Manage the transportation of assets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ac81669-00e2-9790-8648-71bc11bc91eb) |CMA_0370 - Manage the transportation of assets |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0370.json) |
-### 09.07 Media Handling
+### 0314.09q3Organizational.2-09.q 09.07 Media Handling
**ID**: 0314.09q3Organizational.2-09.q **Ownership**: Shared
This built-in initiative is deployed as part of the
## 04 Mobile Device Security
-### 01.07 Mobile Computing and Teleworking
+### 0401.01x1System.124579-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0401.01x1System.124579-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Prohibit remote activation of collaborative computing devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F678ca228-042d-6d8e-a598-c58d5670437d) |CMA_C1648 - Prohibit remote activation of collaborative computing devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1648.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0403.01x1System.8-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0403.01x1System.8-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Not allow for information systems to accompany with individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41172402-8d73-64c7-0921-909083c086b0) |CMA_C1182 - Not allow for information systems to accompany with individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1182.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0405.01y1Organizational.12345678-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0405.01y1Organizational.12345678-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0407.01y2Organizational.1-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0407.01y2Organizational.1-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) | |[Implement controls to secure alternate work sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcd36eeec-67e7-205a-4b64-dbfe3b4e3e4e) |CMA_0315 - Implement controls to secure alternate work sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0315.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0408.01y3Organizational.12-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0408.01y3Organizational.12-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) | |[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0409.01y3Organizational.3-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0409.01y3Organizational.3-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0410.01x1System.12-01.xMobileComputingandCommunications 01.07 Mobile Computing and Teleworking
**ID**: 0410.01x1System.12-01.xMobileComputingandCommunications **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0415.01y1Organizational.10-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0415.01y1Organizational.10-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) | |[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0416.01y3Organizational.4-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0416.01y3Organizational.4-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0417.01y3Organizational.5-01.y 01.07 Mobile Computing and Teleworking
**ID**: 0417.01y3Organizational.5-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0425.01x1System.13-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0425.01x1System.13-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define mobile device requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ca3a3ea-3a1f-8ba0-31a8-6aed0fe1a7a4) |CMA_0122 - Define mobile device requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0122.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0426.01x2System.1-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0426.01x2System.1-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Not allow for information systems to accompany with individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41172402-8d73-64c7-0921-909083c086b0) |CMA_C1182 - Not allow for information systems to accompany with individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1182.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0427.01x2System.2-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0427.01x2System.2-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Not allow for information systems to accompany with individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41172402-8d73-64c7-0921-909083c086b0) |CMA_C1182 - Not allow for information systems to accompany with individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1182.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0428.01x2System.3-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0428.01x2System.3-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Not allow for information systems to accompany with individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41172402-8d73-64c7-0921-909083c086b0) |CMA_C1182 - Not allow for information systems to accompany with individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1182.json) | |[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
-### 01.07 Mobile Computing and Teleworking
+### 0429.01x1System.14-01.x 01.07 Mobile Computing and Teleworking
**ID**: 0429.01x1System.14-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
## 05 Wireless Security
-### 09.06 Network Security Management
+### 0504.09m2Organizational.5-09.m 09.06 Network Security Management
**ID**: 0504.09m2Organizational.5-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) | |[Protect wireless access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd42a8f69-a193-6cbc-48b9-04a9e29961f1) |CMA_0411 - Protect wireless access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0411.json) |
-### 09.06 Network Security Management
+### 0505.09m2Organizational.3-09.m 09.06 Network Security Management
**ID**: 0505.09m2Organizational.3-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
## 06 Configuration Management
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0601.06g1Organizational.124-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0601.06g1Organizational.124-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) | |[Update POA&M items](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc057769-01d9-95ad-a36f-1e62a7f9540b) |CMA_C1157 - Update POA&M items |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1157.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0602.06g1Organizational.3-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0602.06g1Organizational.3-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to document approved changes and potential impact](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a868d0c-538f-968b-0191-bddb44da5b75) |CMA_C1597 - Require developers to document approved changes and potential impact |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1597.json) | |[Update POA&M items](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc057769-01d9-95ad-a36f-1e62a7f9540b) |CMA_C1157 - Update POA&M items |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1157.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0603.06g2Organizational.1-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0603.06g2Organizational.1-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0604.06g2Organizational.2-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0604.06g2Organizational.2-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 10.04 Security of System Files
+### 0605.10h1System.12-10.h 10.04 Security of System Files
**ID**: 0605.10h1System.12-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Windows machines should meet requirements for 'Security Options - Audit'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33936777-f2ac-45aa-82ec-07958ec9ade4) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Audit' for forcing audit policy subcategory and shutting down if unable to log security audits. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsAudit_AINE.json) | |[Windows machines should meet requirements for 'System Audit Policies - Account Management'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94d9aca8-3757-46df-aa51-f218c5f11954) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Account Management' for auditing application, security, and user group management, and other management events. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesAccountManagement_AINE.json) |
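Both Guest Configuration definitions above only audit; once assigned, per-machine results can be read back from Azure Policy's compliance states. A hypothetical query (assuming the Azure CLI and that the definition has already been assigned in scope) might look like:

```bash
# List non-compliant resources for the 'Security Options - Audit' guest configuration
# definition; the GUID is taken from the definition link above.
az policy state list \
  --filter "policyDefinitionName eq '33936777-f2ac-45aa-82ec-07958ec9ade4' and complianceState eq 'NonCompliant'" \
  --query "[].{resource:resourceId, state:complianceState}" \
  --output table
```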
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0613.06h1Organizational.12-06.h 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0613.06h1Organizational.12-06.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | |[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0614.06h2Organizational.12-06.h 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0614.06h2Organizational.12-06.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Select additional testing for security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff78fc35e-1268-0bca-a798-afcba9d2330a) |CMA_C1149 - Select additional testing for security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1149.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 0615.06h2Organizational.3-06.h 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 0615.06h2Organizational.3-06.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
-### 09.01 Documented Operating Procedures
+### 0618.09b1System.1-09.b 09.01 Documented Operating Procedures
**ID**: 0618.09b1System.1-09.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain previous versions of baseline configs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e4e9685-3818-5934-0071-2620c4fa2ca5) |CMA_C1181 - Retain previous versions of baseline configs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1181.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.04 Security of System Files
+### 0626.10h1System.3-10.h 10.04 Security of System Files
**ID**: 0626.10h1System.3-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.04 Security of System Files
+### 0627.10h1System.45-10.h 10.04 Security of System Files
**ID**: 0627.10h1System.45-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.04 Security of System Files
+### 0628.10h1System.6-10.h 10.04 Security of System Files
**ID**: 0628.10h1System.6-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
-### 10.05 Security In Development and Support Processes
+### 0635.10k1Organizational.12-10.k 10.05 Security In Development and Support Processes
**ID**: 0635.10k1Organizational.12-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0636.10k2Organizational.1-10.k 10.05 Security In Development and Support Processes
**ID**: 0636.10k2Organizational.1-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update configuration management policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb8a8df9-521f-3ccd-7e2c-3d1fcc812340) |CMA_C1175 - Review and update configuration management policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1175.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0637.10k2Organizational.2-10.k 10.05 Security In Development and Support Processes
**ID**: 0637.10k2Organizational.2-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement an automated configuration management tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0638.10k2Organizational.34569-10.k 10.05 Security In Development and Support Processes
**ID**: 0638.10k2Organizational.34569-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0639.10k2Organizational.78-10.k 10.05 Security In Development and Support Processes
**ID**: 0639.10k2Organizational.78-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0640.10k2Organizational.1012-10.k 10.05 Security In Development and Support Processes
**ID**: 0640.10k2Organizational.1012-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to produce evidence of security assessment plan execution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8a63511-66f1-503f-196d-d6217ee0823a) |CMA_C1602 - Require developers to produce evidence of security assessment plan execution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1602.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0641.10k2Organizational.11-10.k 10.05 Security In Development and Support Processes
**ID**: 0641.10k2Organizational.11-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review development process, standards and tools](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e876c5c-0f2a-8eb6-69f7-5f91e7918ed6) |CMA_C1610 - Review development process, standards and tools |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1610.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0642.10k3Organizational.12-10.k 10.05 Security In Development and Support Processes
**ID**: 0642.10k3Organizational.12-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement an automated configuration management tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0643.10k3Organizational.3-10.k 10.05 Security In Development and Support Processes
**ID**: 0643.10k3Organizational.3-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain previous versions of baseline configs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e4e9685-3818-5934-0071-2620c4fa2ca5) |CMA_C1181 - Retain previous versions of baseline configs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1181.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 10.05 Security In Development and Support Processes
+### 0644.10k3Organizational.4-10.k 10.05 Security In Development and Support Processes
**ID**: 0644.10k3Organizational.4-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) | |[Windows machines should meet requirements for 'System Audit Policies - Detailed Tracking'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58383b73-94a9-4414-b382-4146eb02611b) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Detailed Tracking' for auditing DPAPI, process creation/termination, RPC events, and PNP activity. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesDetailedTracking_AINE.json) |
-### 09.08 Exchange of Information
+### 0662.09sCSPOrganizational.2-09.s 09.08 Exchange of Information
**ID**: 0662.09sCSPOrganizational.2-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent assessors to conduct security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb65c5d8e-9043-9612-2c17-65f231d763bb) |CMA_C1148 - Employ independent assessors to conduct security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1148.json) | |[Select additional testing for security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff78fc35e-1268-0bca-a798-afcba9d2330a) |CMA_C1149 - Select additional testing for security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1149.json) |
-### 10.04 Security of System Files
+### 0663.10h1System.7-10.h 10.04 Security of System Files
**ID**: 0663.10h1System.7-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.04 Security of System Files
+### 0669.10hCSPSystem.1-10.h 10.04 Security of System Files
**ID**: 0669.10hCSPSystem.1-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | |[Require developers to manage change integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb33d61c1-7463-7025-0ec0-a47585b59147) |CMA_C1595 - Require developers to manage change integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1595.json) |
-### 10.04 Security of System Files
+### 0670.10hCSPSystem.2-10.h 10.04 Security of System Files
**ID**: 0670.10hCSPSystem.2-10.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform disposition review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5a4be05-3997-1731-3260-98be653610f6) |CMA_0391 - Perform disposition review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0391.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 10.05 Security In Development and Support Processes
+### 0671.10k1System.1-10.k 10.05 Security In Development and Support Processes
**ID**: 0671.10k1System.1-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to implement only approved changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F085467a6-9679-5c65-584a-f55acefd0d43) |CMA_C1596 - Require developers to implement only approved changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1596.json) | |[Require developers to manage change integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb33d61c1-7463-7025-0ec0-a47585b59147) |CMA_C1595 - Require developers to manage change integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1595.json) |
-### 10.05 Security In Development and Support Processes
+### 0672.10k3System.5-10.k 10.05 Security In Development and Support Processes
**ID**: 0672.10k3System.5-10.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 068.06g2Organizational.34-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 068.06g2Organizational.34-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent assessors to conduct security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb65c5d8e-9043-9612-2c17-65f231d763bb) |CMA_C1148 - Employ independent assessors to conduct security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1148.json) | |[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) |
-### 06.02 Compliance with Security Policies and Standards, and Technical Compliance
+### 069.06g2Organizational.56-06.g 06.02 Compliance with Security Policies and Standards, and Technical Compliance
**ID**: 069.06g2Organizational.56-06.g **Ownership**: Shared
This built-in initiative is deployed as part of the
## 07 Vulnerability Management
-### 07.01 Responsibility for Assets
+### 0701.07a1Organizational.12-07.a 07.01 Responsibility for Assets
**ID**: 0701.07a1Organizational.12-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect against and prevent data theft from departing employees](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F80a97208-264e-79da-0cc7-4fca179a0c9c) |CMA_0398 - Protect against and prevent data theft from departing employees |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0398.json) | |[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
-### 07.01 Responsibility for Assets
+### 0702.07a1Organizational.3-07.a 07.01 Responsibility for Assets
**ID**: 0702.07a1Organizational.3-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define information security roles and responsibilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef5a7059-6651-73b1-18b3-75b1b79c1565) |CMA_C1565 - Define information security roles and responsibilities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1565.json) | |[Establish terms and conditions for processing resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5715bf33-a5bd-1084-4e19-bc3c83ec1c35) |CMA_C1077 - Establish terms and conditions for processing resources |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1077.json) |
-### 07.01 Responsibility for Assets
+### 0703.07a2Organizational.1-07.a 07.01 Responsibility for Assets
**ID**: 0703.07a2Organizational.1-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) | |[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### 07.01 Responsibility for Assets
+### 0704.07a3Organizational.12-07.a 07.01 Responsibility for Assets
**ID**: 0704.07a3Organizational.12-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) | |[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### 07.01 Responsibility for Assets
+### 0705.07a3Organizational.3-07.a 07.01 Responsibility for Assets
**ID**: 0705.07a3Organizational.3-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify individuals with security roles and responsibilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0dcbaf2f-075e-947b-8f4c-74ecc5cd302c) |CMA_C1566 - Identify individuals with security roles and responsibilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1566.json) | |[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) |
-### 10.02 Correct Processing in Applications
+### 0706.10b1System.12-10.b 10.02 Correct Processing in Applications
**ID**: 0706.10b1System.12-10.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) | |[Perform information input validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8b1f29eb-1b22-4217-5337-9207cb55231e) |CMA_C1723 - Perform information input validation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1723.json) |
-### 10.02 Correct Processing in Applications
+### 0708.10b2System.2-10.b 10.02 Correct Processing in Applications
**ID**: 0708.10b2System.2-10.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.06 Technical Vulnerability Management
+### 0709.10m1Organizational.1-10.m 10.06 Technical Vulnerability Management
**ID**: 0709.10m1Organizational.1-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which don't have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Windows machines should meet requirements for 'Security Options - Microsoft Network Server'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcaf2d518-f029-4f6b-833b-d7081702f253) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Microsoft Network Server' for disabling SMB v1 server. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsMicrosoftNetworkServer_AINE.json) |
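Where the mapped definitions audit Azure resources directly (for example, "Vulnerability assessment should be enabled on your SQL servers" above), compliance results surface through Azure Policy. The following is a minimal sketch, assuming the `az policy state` commands are available in your Azure CLI version, for listing non-compliant resources by the definition ID shown in the table:

```bash
# Sketch: list resources that are non-compliant with the SQL server vulnerability
# assessment definition (ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) in the current subscription.
az policy state list \
  --filter "policyDefinitionName eq 'ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9' and complianceState eq 'NonCompliant'" \
  --query "[].resourceId" \
  --output tsv
```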
-### 10.06 Technical Vulnerability Management
+### 0710.10m2Organizational.1-10.m 10.06 Technical Vulnerability Management
**ID**: 0710.10m2Organizational.1-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0711.10m2Organizational.23-10.m 10.06 Technical Vulnerability Management
**ID**: 0711.10m2Organizational.23-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | |[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) |
-### 10.06 Technical Vulnerability Management
+### 0712.10m2Organizational.4-10.m 10.06 Technical Vulnerability Management
**ID**: 0712.10m2Organizational.4-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent team for penetration testing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F611ebc63-8600-50b6-a0e3-fef272457132) |CMA_C1171 - Employ independent team for penetration testing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1171.json) | |[Select additional testing for security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff78fc35e-1268-0bca-a798-afcba9d2330a) |CMA_C1149 - Select additional testing for security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1149.json) |
-### 10.06 Technical Vulnerability Management
+### 0713.10m2Organizational.5-10.m 10.06 Technical Vulnerability Management
**ID**: 0713.10m2Organizational.5-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Measure the time between flaw identification and flaw remediation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad1887d-161b-7b61-2e4d-5124a7b5724e) |CMA_C1674 - Measure the time between flaw identification and flaw remediation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1674.json) | |[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0714.10m2Organizational.7-10.m 10.06 Technical Vulnerability Management
**ID**: 0714.10m2Organizational.7-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0715.10m2Organizational.8-10.m 10.06 Technical Vulnerability Management
**ID**: 0715.10m2Organizational.8-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0716.10m3Organizational.1-10.m 10.06 Technical Vulnerability Management
**ID**: 0716.10m3Organizational.1-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) | |[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0717.10m3Organizational.2-10.m 10.06 Technical Vulnerability Management
**ID**: 0717.10m3Organizational.2-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0718.10m3Organizational.34-10.m 10.06 Technical Vulnerability Management
**ID**: 0718.10m3Organizational.34-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) | |[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
-### 10.06 Technical Vulnerability Management
+### 0719.10m3Organizational.5-10.m 10.06 Technical Vulnerability Management
**ID**: 0719.10m3Organizational.5-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-### 07.01 Responsibility for Assets
+### 0720.07a1Organizational.4-07.a 07.01 Responsibility for Assets
**ID**: 0720.07a1Organizational.4-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Create a data inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F043c1e56-5a16-52f8-6af8-583098ff3e60) |CMA_0096 - Create a data inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0096.json) | |[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### 07.01 Responsibility for Assets
+### 0722.07a1Organizational.67-07.a 07.01 Responsibility for Assets
**ID**: 0722.07a1Organizational.67-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Restrict use of open source software](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08c11b48-8745-034d-1c1b-a144feec73b9) |CMA_C1237 - Restrict use of open source software |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1237.json) | |[Track software license usage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77cc89bb-774f-48d7-8a84-fb8c322c3000) |CMA_C1235 - Track software license usage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1235.json) |
-### 07.01 Responsibility for Assets
+### 0723.07a1Organizational.8-07.a 07.01 Responsibility for Assets
**ID**: 0723.07a1Organizational.8-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update media protection policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4e19d22-8c0e-7cad-3219-c84c62dc250f) |CMA_C1427 - Review and update media protection policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1427.json) |
-### 07.01 Responsibility for Assets
+### 0724.07a3Organizational.4-07.a 07.01 Responsibility for Assets
**ID**: 0724.07a3Organizational.4-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) | |[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
-### 07.01 Responsibility for Assets
+### 0725.07a3Organizational.5-07.a 07.01 Responsibility for Assets
**ID**: 0725.07a3Organizational.5-07.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) | |[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### 10.02 Correct Processing in Applications
+### 0733.10b2System.4-10.b 10.02 Correct Processing in Applications
**ID**: 0733.10b2System.4-10.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform information input validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8b1f29eb-1b22-4217-5337-9207cb55231e) |CMA_C1723 - Perform information input validation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1723.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
-### 10.06 Technical Vulnerability Management
+### 0786.10m2Organizational.13-10.m 10.06 Technical Vulnerability Management
**ID**: 0786.10m2Organizational.13-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Incorporate flaw remediation into configuration management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34aac8b2-488a-2b96-7280-5b9b481a317a) |CMA_C1671 - Incorporate flaw remediation into configuration management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1671.json) |
-### 10.06 Technical Vulnerability Management
+### 0787.10m2Organizational.14-10.m 10.06 Technical Vulnerability Management
**ID**: 0787.10m2Organizational.14-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Incorporate flaw remediation into configuration management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34aac8b2-488a-2b96-7280-5b9b481a317a) |CMA_C1671 - Incorporate flaw remediation into configuration management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1671.json) | |[Measure the time between flaw identification and flaw remediation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad1887d-161b-7b61-2e4d-5124a7b5724e) |CMA_C1674 - Measure the time between flaw identification and flaw remediation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1674.json) |
-### 10.06 Technical Vulnerability Management
+### 0788.10m3Organizational.20-10.m 10.06 Technical Vulnerability Management
**ID**: 0788.10m3Organizational.20-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent team for penetration testing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F611ebc63-8600-50b6-a0e3-fef272457132) |CMA_C1171 - Employ independent team for penetration testing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1171.json) |
-### 10.06 Technical Vulnerability Management
+### 0790.10m3Organizational.22-10.m 10.06 Technical Vulnerability Management
**ID**: 0790.10m3Organizational.22-10.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) | |[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) |
-### 10.02 Correct Processing in Applications
+### 0791.10b2Organizational.4-10.b 10.02 Correct Processing in Applications
**ID**: 0791.10b2Organizational.4-10.b **Ownership**: Shared
This built-in initiative is deployed as part of the
## 08 Network Protection
-### 01.04 Network Access Control
+### 0805.01m1Organizational.12-01.m 01.04 Network Access Control
**ID**: 0805.01m1Organizational.12-01.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
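The service endpoint definitions in this table audit whether supported resources are reachable only from selected subnets. As an informal example with placeholder names (not taken from the initiative itself), enabling the `Microsoft.Web` service endpoint on a subnet with the Azure CLI is the kind of configuration the "App Service apps should use a virtual network service endpoint" definition checks for:

```bash
# Illustrative only: enable the Microsoft.Web service endpoint on a placeholder subnet
# so App Service access restrictions can reference that subnet.
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --service-endpoints Microsoft.Web
```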
-### 01.04 Network Access Control
+### 0806.01m2Organizational.12356-01.m 01.04 Network Access Control
**ID**: 0806.01m2Organizational.12356-01.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
|[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
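The service endpoint and NSG rows above describe subnet-level configuration. As a minimal Azure CLI sketch of that configuration (illustrative only, not part of the initiative's content), the commands below attach a network security group to a subnet and enable a virtual network service endpoint on it; the resource group, virtual network, subnet, and NSG names are placeholders.

```bash
# Placeholder names; substitute your own resource group, VNet, subnet, and NSG.
RG=my-rg
VNET=my-vnet
SUBNET=app-subnet
NSG=app-subnet-nsg

# Create an NSG and associate it with the subnet
# (the configuration "Subnets should be associated with a Network Security Group" audits).
az network nsg create --resource-group "$RG" --name "$NSG"
az network vnet subnet update \
  --resource-group "$RG" --vnet-name "$VNET" --name "$SUBNET" \
  --network-security-group "$NSG"

# Enable a virtual network service endpoint for App Service traffic
# (the kind of setting the service endpoint policies look for).
az network vnet subnet update \
  --resource-group "$RG" --vnet-name "$VNET" --name "$SUBNET" \
  --service-endpoints Microsoft.Web
```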
-### 10.02 Correct Processing in Applications
+### 0808.10b2System.3-10.b 10.02 Correct Processing in Applications
**ID**: 0808.10b2System.3-10.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | |[Route traffic through authenticated proxy network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd91558ce-5a5c-551b-8fbb-83f793255e09) |CMA_C1633 - Route traffic through authenticated proxy network |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1633.json) |
-### 01.04 Network Access Control
+### 0809.01n2Organizational.1234-01.n 01.04 Network Access Control
**ID**: 0809.01n2Organizational.1234-01.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
-### 01.04 Network Access Control
+### 0810.01n2Organizational.5-01.n 01.04 Network Access Control
**ID**: 0810.01n2Organizational.5-01.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
-### 09.06 Network Security Management
+### 08101.09m2Organizational.14-09.m 09.06 Network Security Management
**ID**: 08101.09m2Organizational.14-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.06 Network Security Management
+### 08102.09nCSPOrganizational.1-09.n 09.06 Network Security Management
**ID**: 08102.09nCSPOrganizational.1-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 01.04 Network Access Control
+### 0811.01n2Organizational.6-01.n 01.04 Network Access Control
**ID**: 0811.01n2Organizational.6-01.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
-### 01.04 Network Access Control
+### 0812.01n2Organizational.8-01.n 01.04 Network Access Control
**ID**: 0812.01n2Organizational.8-01.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
-### 01.04 Network Access Control
+### 0814.01n1Organizational.12-01.n 01.04 Network Access Control
**ID**: 0814.01n1Organizational.12-01.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
-### 01.04 Network Access Control
+### 0815.01o2Organizational.123-01.o 01.04 Network Access Control
**ID**: 0815.01o2Organizational.123-01.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Route traffic through authenticated proxy network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd91558ce-5a5c-551b-8fbb-83f793255e09) |CMA_C1633 - Route traffic through authenticated proxy network |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1633.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 01.06 Application and Information Access Control
+### 0816.01w1System.1-01.w 01.06 Application and Information Access Control
**ID**: 0816.01w1System.1-01.w **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obtain user security function documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe1c34ab-295a-07a6-785c-36f63c1d223e) |CMA_C1581 - Obtain user security function documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1581.json) | |[Protect administrator and user documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09960521-759e-5d12-086f-4192a72a5e92) |CMA_C1583 - Protect administrator and user documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1583.json) |
-### 01.06 Application and Information Access Control
+### 0817.01w2System.123-01.w 01.06 Application and Information Access Control
**ID**: 0817.01w2System.123-01.w **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separate user and information system management functionality](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8a703eb5-4e53-701b-67e4-05ba2f7930c8) |CMA_0493 - Separate user and information system management functionality |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0493.json) | |[Use dedicated machines for administrative tasks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8972f60-8d77-1cb8-686f-9c9f4cdd8a59) |CMA_0527 - Use dedicated machines for administrative tasks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0527.json) |
-### 01.06 Application and Information Access Control
+### 0818.01w3System.12-01.w 01.06 Application and Information Access Control
**ID**: 0818.01w3System.12-01.w **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage availability and capacity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fedcc36f1-511b-81e0-7125-abee29752fe7) |CMA_0356 - Manage availability and capacity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0356.json) | |[Secure commitment from leadership](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70057208-70cc-7b31-3c3a-121af6bc1966) |CMA_0489 - Secure commitment from leadership |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0489.json) |
-### 09.06 Network Security Management
+### 0819.09m1Organizational.23-09.m 09.06 Network Security Management
**ID**: 0819.09m1Organizational.23-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Check for privacy and security compliance before establishing internal connections](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fee4bbbbb-2e52-9adb-4e3a-e641f7ac68ab) |CMA_0053 - Check for privacy and security compliance before establishing internal connections |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0053.json) | |[Require interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F096a7055-30cb-2db4-3fda-41b20ac72667) |CMA_C1151 - Require interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1151.json) |
-### 09.06 Network Security Management
+### 0821.09m2Organizational.2-09.m 09.06 Network Security Management
**ID**: 0821.09m2Organizational.2-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | |[Review changes for any unauthorized changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc246d146-82b0-301f-32e7-1065dcd248b7) |CMA_C1204 - Review changes for any unauthorized changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1204.json) |
-### 09.06 Network Security Management
+### 0822.09m2Organizational.4-09.m 09.06 Network Security Management
**ID**: 0822.09m2Organizational.4-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Route traffic through authenticated proxy network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd91558ce-5a5c-551b-8fbb-83f793255e09) |CMA_C1633 - Route traffic through authenticated proxy network |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1633.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 09.06 Network Security Management
+### 0824.09m3Organizational.1-09.m 09.06 Network Security Management
**ID**: 0824.09m3Organizational.1-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 09.06 Network Security Management
+### 0825.09m3Organizational.23-09.m 09.06 Network Security Management
**ID**: 0825.09m3Organizational.23-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide monitoring information as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fc1f0da-0050-19bb-3d75-81ae15940df6) |CMA_C1689 - Provide monitoring information as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1689.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 09.06 Network Security Management
+### 0826.09m3Organizational.45-09.m 09.06 Network Security Management
**ID**: 0826.09m3Organizational.45-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.06 Network Security Management
+### 0828.09m3Organizational.8-09.m 09.06 Network Security Management
**ID**: 0828.09m3Organizational.8-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Review changes for any unauthorized changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc246d146-82b0-301f-32e7-1065dcd248b7) |CMA_C1204 - Review changes for any unauthorized changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1204.json) |
-### 09.06 Network Security Management
+### 0829.09m3Organizational.911-09.m 09.06 Network Security Management
**ID**: 0829.09m3Organizational.911-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement managed interface for each external service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb262e1dd-08e9-41d4-963a-258909ad794b) |CMA_C1626 - Implement managed interface for each external service |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1626.json) | |[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) |
-### 09.06 Network Security Management
+### 0830.09m3Organizational.1012-09.m 09.06 Network Security Management
**ID**: 0830.09m3Organizational.1012-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 09.06 Network Security Management
+### 0832.09m3Organizational.14-09.m 09.06 Network Security Management
**ID**: 0832.09m3Organizational.14-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F096a7055-30cb-2db4-3fda-41b20ac72667) |CMA_C1151 - Require interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1151.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 09.06 Network Security Management
+### 0835.09n1Organizational.1-09.n 09.06 Network Security Management
**ID**: 0835.09n1Organizational.1-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | |[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |
-### 09.06 Network Security Management
+### 0836.09.n2Organizational.1-09.n 09.06 Network Security Management
**ID**: 0836.09.n2Organizational.1-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F096a7055-30cb-2db4-3fda-41b20ac72667) |CMA_C1151 - Require interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1151.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 09.06 Network Security Management
+### 0837.09.n2Organizational.2-09.n 09.06 Network Security Management
**ID**: 0837.09.n2Organizational.2-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 01.04 Network Access Control
+### 0850.01o1Organizational.12-01.o 01.04 Network Access Control
**ID**: 0850.01o1Organizational.12-01.o **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Route traffic through authenticated proxy network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd91558ce-5a5c-551b-8fbb-83f793255e09) |CMA_C1633 - Route traffic through authenticated proxy network |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1633.json) |
-### 09.06 Network Security Management
+### 0858.09m1Organizational.4-09.m 09.06 Network Security Management
**ID**: 0858.09m1Organizational.4-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect wireless access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd42a8f69-a193-6cbc-48b9-04a9e29961f1) |CMA_0411 - Protect wireless access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0411.json) |
|[Windows machines should meet requirements for 'Windows Firewall Properties'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35d9882c-993d-44e6-87d2-db66ce21b636) |Windows machines should have the specified Group Policy settings in the category 'Windows Firewall Properties' for firewall state, connections, rule management, and notifications. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsFirewallProperties_AINE.json) |
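The 'Windows Firewall Properties' row above is a Guest Configuration audit, so it only reports results once the Guest Configuration prerequisites are deployed and the definition is assigned to a scope. A minimal Azure CLI sketch of such an assignment, assuming a placeholder subscription scope:

```bash
# Placeholder subscription scope.
SCOPE="/subscriptions/00000000-0000-0000-0000-000000000000"

# Assign the built-in Guest Configuration definition by its ID
# (35d9882c-993d-44e6-87d2-db66ce21b636). The Guest Configuration
# prerequisites (extension and managed identity on the machines) must
# already be in place for compliance results to appear.
az policy assignment create \
  --name audit-windows-firewall-properties \
  --policy 35d9882c-993d-44e6-87d2-db66ce21b636 \
  --scope "$SCOPE"
```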
-### 09.06 Network Security Management
+### 0859.09m1Organizational.78-09.m 09.06 Network Security Management
**ID**: 0859.09m1Organizational.78-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.06 Network Security Management
+### 0860.09m1Organizational.9-09.m 09.06 Network Security Management
**ID**: 0860.09m1Organizational.9-09.m **Ownership**: Shared
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Deploy Diagnostic Settings for Network Security Groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9c29499-c1d1-4195-99bd-2ec9e3a9dc89) |This policy automatically deploys diagnostic settings to network security groups. A storage account with name '{storagePrefixParameter}{NSGLocation}' will be automatically created. |deployIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForNSG_Deploy.json) |
+|[Deploy Diagnostic Settings for Network Security Groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9c29499-c1d1-4195-99bd-2ec9e3a9dc89) |This policy automatically deploys diagnostic settings to network security groups. A storage account with name '{storagePrefixParameter}{NSGLocation}' will be automatically created. |deployIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForNSG_Deploy.json) |
|[Establish an alternate processing site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf5ff768-a34b-720e-1224-e6b3214f3ba6) |CMA_0262 - Establish an alternate processing site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0262.json) |
|[Implement managed interface for each external service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb262e1dd-08e9-41d4-963a-258909ad794b) |CMA_C1626 - Implement managed interface for each external service |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1626.json) |
|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
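Because 'Deploy Diagnostic Settings for Network Security Groups' (shown in the change pair above) uses the deployIfNotExists effect, an assignment needs a managed identity with rights to create the diagnostic settings and storage accounts the definition describes. A hedged Azure CLI sketch, assuming a recent CLI version that supports `--mi-system-assigned` and using a placeholder subscription scope:

```bash
# Placeholder subscription scope.
SCOPE="/subscriptions/00000000-0000-0000-0000-000000000000"

# Assign the deployIfNotExists definition (c9c29499-c1d1-4195-99bd-2ec9e3a9dc89)
# with a system-assigned identity; a location is required when an identity is
# assigned, and the identity still needs role assignments that allow it to
# create diagnostic settings and storage accounts before remediation can run.
az policy assignment create \
  --name deploy-nsg-diagnostics \
  --policy c9c29499-c1d1-4195-99bd-2ec9e3a9dc89 \
  --scope "$SCOPE" \
  --location eastus \
  --mi-system-assigned
```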
-### 09.06 Network Security Management
+### 0861.09m2Organizational.67-09.m 09.06 Network Security Management
**ID**: 0861.09m2Organizational.67-09.m **Ownership**: Shared
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) |
|[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) |
|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
This built-in initiative is deployed as part of the
|[Protect wireless access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd42a8f69-a193-6cbc-48b9-04a9e29961f1) |CMA_0411 - Protect wireless access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0411.json) | |[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) |
-### 09.06 Network Security Management
+### 0862.09m2Organizational.8-09.m 09.06 Network Security Management
**ID**: 0862.09m2Organizational.8-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
|[SQL Server should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5d2f14-d830-42b6-9899-df6cfe9c71a3) |This policy audits any SQL Server not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_SQLServer_AuditIfNotExists.json) |
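For 'SQL Server should use a virtual network service endpoint', the audited configuration is a virtual network rule on the logical server that points at a subnet with the Microsoft.Sql endpoint enabled. A small illustrative Azure CLI sketch with placeholder names (not values drawn from the policy):

```bash
# Placeholder names.
RG=my-rg
VNET=my-vnet
SUBNET=db-subnet
SQL_SERVER=my-sql-server

# Enable the Microsoft.Sql service endpoint on the subnet...
az network vnet subnet update \
  --resource-group "$RG" --vnet-name "$VNET" --name "$SUBNET" \
  --service-endpoints Microsoft.Sql

# ...then add a virtual network rule on the logical SQL server for that subnet.
az sql server vnet-rule create \
  --resource-group "$RG" --server "$SQL_SERVER" \
  --name allow-db-subnet \
  --vnet-name "$VNET" --subnet "$SUBNET"
```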
-### 09.06 Network Security Management
+### 0863.09m2Organizational.910-09.m 09.06 Network Security Management
**ID**: 0863.09m2Organizational.910-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | |[Review and update the information security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fced291b8-1d3d-7e27-40cf-829e9dd523c8) |CMA_C1504 - Review and update the information security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1504.json) |
-### 09.06 Network Security Management
+### 0864.09m2Organizational.12-09.m 09.06 Network Security Management
**ID**: 0864.09m2Organizational.12-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish voip usage restrictions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F68a39c2b-0f17-69ee-37a3-aa10f9853a08) |CMA_0280 - Establish voip usage restrictions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0280.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.06 Network Security Management
+### 0865.09m2Organizational.13-09.m 09.06 Network Security Management
**ID**: 0865.09m2Organizational.13-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F096a7055-30cb-2db4-3fda-41b20ac72667) |CMA_C1151 - Require interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1151.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 09.06 Network Security Management
+### 0866.09m3Organizational.1516-09.m 09.06 Network Security Management
**ID**: 0866.09m3Organizational.1516-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
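The storage account row above audits the account's network ACLs. As an illustration of the configuration it expects (all names and addresses are placeholders), the Azure CLI sketch below denies traffic by default and then allows a specific subnet and a public IP range:

```bash
# Placeholder names and addresses.
RG=my-rg
STORAGE=mystorageacct
VNET=my-vnet
SUBNET=app-subnet

# Deny traffic by default so only explicitly allowed networks can reach the account.
az storage account update --resource-group "$RG" --name "$STORAGE" --default-action Deny

# Allow a specific virtual network subnet (the subnet needs the
# Microsoft.Storage service endpoint enabled) and a public IP range.
az storage account network-rule add --resource-group "$RG" --account-name "$STORAGE" \
  --vnet-name "$VNET" --subnet "$SUBNET"
az storage account network-rule add --resource-group "$RG" --account-name "$STORAGE" \
  --ip-address 203.0.113.0/24
```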
-### 09.06 Network Security Management
+### 0868.09m3Organizational.18-09.m 09.06 Network Security Management
**ID**: 0868.09m3Organizational.18-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.06 Network Security Management
+### 0869.09m3Organizational.19-09.m 09.06 Network Security Management
**ID**: 0869.09m3Organizational.19-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and document a configuration management plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F526ed90e-890f-69e7-0386-ba5c0f1f784f) |CMA_0264 - Establish and document a configuration management plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0264.json) | |[Implement an automated configuration management tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) |
-### 09.06 Network Security Management
+### 0870.09m3Organizational.20-09.m 09.06 Network Security Management
**ID**: 0870.09m3Organizational.20-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Route traffic through authenticated proxy network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd91558ce-5a5c-551b-8fbb-83f793255e09) |CMA_C1633 - Route traffic through authenticated proxy network |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1633.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 09.06 Network Security Management
+### 0871.09m3Organizational.22-09.m 09.06 Network Security Management
**ID**: 0871.09m3Organizational.22-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide secure name and address resolution services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbbb2e6d6-085f-5a35-a55d-e45daad38933) |CMA_0416 - Provide secure name and address resolution services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0416.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
-### 09.06 Network Security Management
+### 0885.09n2Organizational.3-09.n 09.06 Network Security Management
**ID**: 0885.09n2Organizational.3-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F096a7055-30cb-2db4-3fda-41b20ac72667) |CMA_C1151 - Require interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1151.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 09.06 Network Security Management
+### 0886.09n2Organizational.4-09.n 09.06 Network Security Management
**ID**: 0886.09n2Organizational.4-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ restrictions on external system interconnections](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F80029bc5-834f-3a9c-a2d8-acbc1aab4e9f) |CMA_C1155 - Employ restrictions on external system interconnections |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1155.json) |
|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
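'Network Watcher should be enabled' checks that each region containing a virtual network has Network Watcher turned on. A minimal Azure CLI sketch for one region; the resource group name and region are placeholders:

```bash
# Enable Network Watcher in a region that hosts virtual networks.
# NetworkWatcherRG is the conventional resource group name, but any group works.
az network watcher configure \
  --resource-group NetworkWatcherRG \
  --locations eastus \
  --enabled true
```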
-### 09.06 Network Security Management
+### 0887.09n2Organizational.5-09.n 09.06 Network Security Management
**ID**: 0887.09n2Organizational.5-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developer to identify SDLC ports, protocols, and services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6da5cca-5795-60ff-49e1-4972567815fe) |CMA_C1578 - Require developer to identify SDLC ports, protocols, and services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1578.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.06 Network Security Management
+### 0888.09n2Organizational.6-09.n 09.06 Network Security Management
**ID**: 0888.09n2Organizational.6-09.n **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 01.04 Network Access Control
+### 0894.01m2Organizational.7-01.m 01.04 Network Access Control
**ID**: 0894.01m2Organizational.7-01.m **Ownership**: Shared
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) |
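For the App Service virtual network service endpoint policy listed above, a minimal sketch of the configuration it audits, assuming the Azure CLI and placeholder resource group, app, virtual network, and subnet names:

```bash
# Expose the Microsoft.Web service endpoint on the subnet that should reach the app.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name app-subnet \
  --service-endpoints Microsoft.Web

# Restrict the App Service app so only traffic from that subnet is allowed.
az webapp config access-restriction add \
  --resource-group my-rg \
  --name my-webapp \
  --rule-name allow-app-subnet \
  --action Allow \
  --vnet-name my-vnet \
  --subnet app-subnet \
  --priority 100
```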
This built-in initiative is deployed as part of the
## 09 Transmission Protection
-### 09.08 Exchange of Information
+### 0901.09s1Organizational.1-09.s 09.08 Exchange of Information
**ID**: 0901.09s1Organizational.1-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 09.08 Exchange of Information
+### 0902.09s2Organizational.13-09.s 09.08 Exchange of Information
**ID**: 0902.09s2Organizational.13-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 10.03 Cryptographic Controls
+### 0903.10f1Organizational.1-10.f 10.03 Cryptographic Controls
**ID**: 0903.10f1Organizational.1-10.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) | |[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
-### 10.03 Cryptographic Controls
+### 0904.10f2Organizational.1-10.f 10.03 Cryptographic Controls
**ID**: 0904.10f2Organizational.1-10.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
-### 09.08 Exchange of Information
+### 0912.09s1Organizational.4-09.s 09.08 Exchange of Information
**ID**: 0912.09s1Organizational.4-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 09.08 Exchange of Information
+### 0913.09s1Organizational.5-09.s 09.08 Exchange of Information
**ID**: 0913.09s1Organizational.5-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Produce, control and distribute asymmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde077e7e-0cc8-65a6-6e08-9ab46c827b05) |CMA_C1646 - Produce, control and distribute asymmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1646.json) | |[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
-### 09.08 Exchange of Information
+### 0914.09s1Organizational.6-09.s 09.08 Exchange of Information
**ID**: 0914.09s1Organizational.6-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) | |[Review and update system and communications protection policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fadf517f3-6dcd-3546-9928-34777d0c277e) |CMA_C1616 - Review and update system and communications protection policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1616.json) |
-### 09.08 Exchange of Information
+### 0915.09s2Organizational.2-09.s 09.08 Exchange of Information
**ID**: 0915.09s2Organizational.2-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish terms and conditions for accessing resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c93dba1-84fd-57de-33c7-ef0400a08134) |CMA_C1076 - Establish terms and conditions for accessing resources |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1076.json) | |[Establish terms and conditions for processing resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5715bf33-a5bd-1084-4e19-bc3c83ec1c35) |CMA_C1077 - Establish terms and conditions for processing resources |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1077.json) |
-### 09.08 Exchange of Information
+### 0916.09s2Organizational.4-09.s 09.08 Exchange of Information
**ID**: 0916.09s2Organizational.4-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Prohibit remote activation of collaborative computing devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F678ca228-042d-6d8e-a598-c58d5670437d) |CMA_C1648 - Prohibit remote activation of collaborative computing devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1648.json) | |[Restrict media use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6122970b-8d4a-7811-0278-4c6c68f61e4f) |CMA_0450 - Restrict media use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0450.json) |
-### 09.08 Exchange of Information
+### 0926.09v1Organizational.2-09.v 09.08 Exchange of Information
**ID**: 0926.09v1Organizational.2-09.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Provide secure name and address resolution services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbbb2e6d6-085f-5a35-a55d-e45daad38933) |CMA_0416 - Provide secure name and address resolution services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0416.json) |
-### 09.08 Exchange of Information
+### 0927.09v1Organizational.3-09.v 09.08 Exchange of Information
**ID**: 0927.09v1Organizational.3-09.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 09.08 Exchange of Information
+### 0928.09v1Organizational.45-09.v 09.08 Exchange of Information
**ID**: 0928.09v1Organizational.45-09.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) |
-### 09.08 Exchange of Information
+### 0929.09v1Organizational.6-09.v 09.08 Exchange of Information
**ID**: 0929.09v1Organizational.6-09.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Provide secure name and address resolution services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbbb2e6d6-085f-5a35-a55d-e45daad38933) |CMA_0416 - Provide secure name and address resolution services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0416.json) |
-### 09.09 Electronic Commerce Services
+### 0943.09y1Organizational.1-09.y 09.09 Electronic Commerce Services
**ID**: 0943.09y1Organizational.1-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
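The secure transfer policy in the row above can be satisfied by the storage account's HTTPS-only flag. A minimal sketch with the Azure CLI, using placeholder resource group and account names:

```bash
# Force the storage account to accept HTTPS connections only.
az storage account update \
  --resource-group my-rg \
  --name mystorageaccount \
  --https-only true

# Confirm the setting.
az storage account show \
  --resource-group my-rg \
  --name mystorageaccount \
  --query enableHttpsTrafficOnly
```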
-### 09.09 Electronic Commerce Services
+### 0944.09y1Organizational.2-09.y 09.09 Electronic Commerce Services
**ID**: 0944.09y1Organizational.2-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and manage downstream information exchanges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7fddb0e-3f44-8635-2b35-dc6b8e740b7c) |CMA_0298 - Identify and manage downstream information exchanges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0298.json) | |[Information flow control using security policy filters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13ef3484-3a51-785a-9c96-500f21f84edd) |CMA_C1029 - Information flow control using security policy filters |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1029.json) |
-### 09.09 Electronic Commerce Services
+### 0945.09y1Organizational.3-09.y 09.09 Electronic Commerce Services
**ID**: 0945.09y1Organizational.3-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Produce, control and distribute asymmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde077e7e-0cc8-65a6-6e08-9ab46c827b05) |CMA_C1646 - Produce, control and distribute asymmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1646.json) | |[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
-### 09.09 Electronic Commerce Services
+### 0947.09y2Organizational.2-09.y 09.09 Electronic Commerce Services
**ID**: 0947.09y2Organizational.2-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Restrict location of information processing, storage and services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0040d2e5-2779-170d-6a2c-1f5fca353335) |CMA_C1593 - Restrict location of information processing, storage and services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1593.json) | |[Transfer backup information to an alternate storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7bdb79ea-16b8-453e-4ca4-ad5b16012414) |CMA_C1294 - Transfer backup information to an alternate storage site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1294.json) |
-### 09.09 Electronic Commerce Services
+### 0948.09y2Organizational.3-09.y 09.09 Electronic Commerce Services
**ID**: 0948.09y2Organizational.3-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) | |[Satisfy token quality requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F056a723b-4946-9d2a-5243-3aa27c4d31a1) |CMA_0487 - Satisfy token quality requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0487.json) |
-### 09.09 Electronic Commerce Services
+### 0949.09y2Organizational.5-09.y 09.09 Electronic Commerce Services
**ID**: 0949.09y2Organizational.5-09.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify external service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F46ab2c5e-6654-1f58-8c83-e97a44f39308) |CMA_C1591 - Identify external service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1591.json) | |[Require developer to identify SDLC ports, protocols, and services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6da5cca-5795-60ff-49e1-4972567815fe) |CMA_C1578 - Require developer to identify SDLC ports, protocols, and services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1578.json) |
-### 09.08 Exchange of Information
+### 0960.09sCSPOrganizational.1-09.s 09.08 Exchange of Information
**ID**: 0960.09sCSPOrganizational.1-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | |[Identify external service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F46ab2c5e-6654-1f58-8c83-e97a44f39308) |CMA_C1591 - Identify external service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1591.json) |
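The Function app CORS policy above flags apps that allow every origin (`*`). A minimal sketch of tightening that setting with the Azure CLI, using placeholder app and origin names:

```bash
# Drop the wildcard origin, then allow only the origins that actually need access.
az functionapp cors remove \
  --resource-group my-rg \
  --name my-function-app \
  --allowed-origins "*"

az functionapp cors add \
  --resource-group my-rg \
  --name my-function-app \
  --allowed-origins "https://www.contoso.com"
```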
-### 09.06 Network Security Management
+### 099.09m2Organizational.11-09.m 09.06 Network Security Management
**ID**: 099.09m2Organizational.11-09.m **Ownership**: Shared
This built-in initiative is deployed as part of the
## 10 Password Management
-### 01.02 Authorized Access to Information Systems
+### 1002.01d1System.1-01.d 01.02 Authorized Access to Information Systems
**ID**: 1002.01d1System.1-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obscure feedback information during authentication process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ff03f2a-974b-3272-34f2-f6cd51420b30) |CMA_C1344 - Obscure feedback information during authentication process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1344.json) | |[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
-### 01.02 Authorized Access to Information Systems
+### 1003.01d1System.3-01.d 01.02 Authorized Access to Information Systems
**ID**: 1003.01d1System.3-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Refresh authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ae68d9a-5696-8c32-62d3-c6f9c52e437c) |CMA_0425 - Refresh authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0425.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1004.01d1System.8913-01.d 01.02 Authorized Access to Information Systems
**ID**: 1004.01d1System.8913-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Refresh authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ae68d9a-5696-8c32-62d3-c6f9c52e437c) |CMA_0425 - Refresh authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0425.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1005.01d1System.1011-01.d 01.02 Authorized Access to Information Systems
**ID**: 1005.01d1System.1011-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement parameters for memorized secret verifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b30aa25-0f19-6c04-5ca4-bd3f880a763d) |CMA_0321 - Implement parameters for memorized secret verifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0321.json) | |[Produce, control and distribute symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F16c54e01-9e65-7524-7c33-beda48a75779) |CMA_C1645 - Produce, control and distribute symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1645.json) |
-### 01.02 Authorized Access to Information Systems
+### 1006.01d2System.1-01.d 01.02 Authorized Access to Information Systems
**ID**: 1006.01d2System.1-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement training for protecting authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4b00788-7e1c-33ec-0418-d048508e095b) |CMA_0329 - Implement training for protecting authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0329.json) | |[Obscure feedback information during authentication process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ff03f2a-974b-3272-34f2-f6cd51420b30) |CMA_C1344 - Obscure feedback information during authentication process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1344.json) |
-### 01.02 Authorized Access to Information Systems
+### 1007.01d2System.2-01.d 01.02 Authorized Access to Information Systems
**ID**: 1007.01d2System.2-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
-### 01.02 Authorized Access to Information Systems
+### 1008.01d2System.3-01.d 01.02 Authorized Access to Information Systems
**ID**: 1008.01d2System.3-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 01.02 Authorized Access to Information Systems
+### 1009.01d2System.4-01.d 01.02 Authorized Access to Information Systems
**ID**: 1009.01d2System.4-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement parameters for memorized secret verifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b30aa25-0f19-6c04-5ca4-bd3f880a763d) |CMA_0321 - Implement parameters for memorized secret verifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0321.json) | |[Refresh authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ae68d9a-5696-8c32-62d3-c6f9c52e437c) |CMA_0425 - Refresh authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0425.json) |
-### 01.02 Authorized Access to Information Systems
+### 1014.01d1System.12-01.d 01.02 Authorized Access to Information Systems
**ID**: 1014.01d1System.12-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Reissue authenticators for changed groups and accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f204e72-1896-3bf8-75c9-9128b8683a36) |CMA_0426 - Reissue authenticators for changed groups and accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0426.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1015.01d1System.14-01.d 01.02 Authorized Access to Information Systems
**ID**: 1015.01d1System.14-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Reissue authenticators for changed groups and accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f204e72-1896-3bf8-75c9-9128b8683a36) |CMA_0426 - Reissue authenticators for changed groups and accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0426.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1022.01d1System.15-01.d 01.02 Authorized Access to Information Systems
**ID**: 1022.01d1System.15-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Refresh authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ae68d9a-5696-8c32-62d3-c6f9c52e437c) |CMA_0425 - Refresh authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0425.json) | |[Restrict media use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6122970b-8d4a-7811-0278-4c6c68f61e4f) |CMA_0450 - Restrict media use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0450.json) |
-### 01.02 Authorized Access to Information Systems
+### 1031.01d1System.34510-01.d 01.02 Authorized Access to Information Systems
**ID**: 1031.01d1System.34510-01.d **Ownership**: Shared
This built-in initiative is deployed as part of the
## 11 Access Control
-### 01.02 Authorized Access to Information Systems
+### 1106.01b1System.1-01.b 01.02 Authorized Access to Information Systems
**ID**: 1106.01b1System.1-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1107.01b1System.2-01.b 01.02 Authorized Access to Information Systems
**ID**: 1107.01b1System.2-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage Authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4aacaec9-0628-272c-3e83-0d68446694e0) |CMA_C1321 - Manage Authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1321.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1108.01b1System.3-01.b 01.02 Authorized Access to Information Systems
**ID**: 1108.01b1System.3-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor account activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7b28ba4f-0a87-46ac-62e1-46b7c09202a8) |CMA_0377 - Monitor account activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0377.json) | |[Notify Account Managers of customer controlled accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b8fd5da-609b-33bf-9724-1c946285a14c) |CMA_C1009 - Notify Account Managers of customer controlled accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1009.json) |
-### 01.02 Authorized Access to Information Systems
+### 1109.01b1System.479-01.b 01.02 Authorized Access to Information Systems
**ID**: 1109.01b1System.479-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.02 Authorized Access to Information Systems
+### 1110.01b1System.5-01.b 01.02 Authorized Access to Information Systems
**ID**: 1110.01b1System.5-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 01.05 Operating System Access Control
+### 11109.01q1Organizational.57-01.q 01.05 Operating System Access Control
**ID**: 11109.01q1Organizational.57-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Prevent identifier reuse for the defined time period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4781e5fd-76b8-7d34-6df3-a0a7fca47665) |CMA_C1314 - Prevent identifier reuse for the defined time period |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1314.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.02 Authorized Access to Information Systems
+### 1111.01b2System.1-01.b 01.02 Authorized Access to Information Systems
**ID**: 1111.01b2System.1-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define and enforce conditions for shared and group accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7eb1d0b-6d4f-2d59-1591-7563e11a9313) |CMA_0117 - Define and enforce conditions for shared and group accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0117.json) | |[Reissue authenticators for changed groups and accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f204e72-1896-3bf8-75c9-9128b8683a36) |CMA_0426 - Reissue authenticators for changed groups and accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0426.json) |
-### 01.05 Operating System Access Control
+### 11111.01q2System.4-01.q 01.05 Operating System Access Control
**ID**: 11111.01q2System.4-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
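The MFA definition in the row above (like most entries in these tables) is audit-only, so it reports compliance only once it is assigned to a scope. A minimal sketch with the Azure CLI, using the definition ID from the table and a placeholder subscription ID:

```bash
# Assign the built-in "MFA should be enabled on accounts with read permissions" definition
# to a subscription (the subscription ID below is a placeholder).
az policy assignment create \
  --name audit-mfa-readers \
  --display-name "Audit MFA for accounts with read permissions" \
  --policy e3576e28-8b17-4677-84c3-db2990658d64 \
  --scope /subscriptions/00000000-0000-0000-0000-000000000000

# Later, list non-compliant resources for that assignment.
az policy state list \
  --filter "policyAssignmentName eq 'audit-mfa-readers' and complianceState eq 'NonCompliant'"
```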
-### 01.05 Operating System Access Control
+### 11112.01q2Organizational.67-01.q 01.05 Operating System Access Control
**ID**: 11112.01q2Organizational.67-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) | |[Satisfy token quality requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F056a723b-4946-9d2a-5243-3aa27c4d31a1) |CMA_0487 - Satisfy token quality requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0487.json) |
-### 01.02 Authorized Access to Information Systems
+### 1112.01b2System.2-01.b 01.02 Authorized Access to Information Systems
**ID**: 1112.01b2System.2-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update the security authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F449ebb52-945b-36e5-3446-af6f33770f8f) |CMA_C1160 - Update the security authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1160.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.05 Operating System Access Control
+### 11126.01t1Organizational.12-01.t 01.05 Operating System Access Control
**ID**: 11126.01t1Organizational.12-01.t **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Reauthenticate or terminate a user session](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd6653f89-7cb5-24a4-9d71-51581038231b) |CMA_0421 - Reauthenticate or terminate a user session |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0421.json) |
-### 01.03 User Responsibilities
+### 1114.01h1Organizational.123-01.h 01.03 User Responsibilities
**ID**: 1114.01h1Organizational.123-01.h **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define and enforce the limit of concurrent sessions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8350d4c-9314-400b-288f-20ddfce04fbd) |CMA_C1050 - Define and enforce the limit of concurrent sessions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1050.json) | |[Terminate user session automatically](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4502e506-5f35-0df4-684f-b326e3cc7093) |CMA_C1054 - Terminate user session automatically |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1054.json) |
-### 02.04 Termination or Change of Employment
+### 11154.02i1Organizational.5-02.i 02.04 Termination or Change of Employment
**ID**: 11154.02i1Organizational.5-02.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Reevaluate access upon personnel transfer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe89436d8-6a93-3b62-4444-1d2a42ad56b2) |CMA_0424 - Reevaluate access upon personnel transfer |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0424.json) | |[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
-### 02.04 Termination or Change of Employment
+### 11155.02i2Organizational.2-02.i 02.04 Termination or Change of Employment
**ID**: 11155.02i2Organizational.2-02.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect against and prevent data theft from departing employees](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F80a97208-264e-79da-0cc7-4fca179a0c9c) |CMA_0398 - Protect against and prevent data theft from departing employees |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0398.json) | |[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
-### 01.04 Network Access Control
+### 1116.01j1Organizational.145-01.j 01.04 Network Access Control
**ID**: 1116.01j1Organizational.145-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 01.04 Network Access Control
+### 1118.01j2Organizational.124-01.j 01.04 Network Access Control
**ID**: 1118.01j2Organizational.124-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
-### 01.02 Authorized Access to Information Systems
+### 11180.01c3System.6-01.c 01.02 Authorized Access to Information Systems
**ID**: 11180.01c3System.6-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
-### 01.04 Network Access Control
+### 1119.01j2Organizational.3-01.j 01.04 Network Access Control
**ID**: 1119.01j2Organizational.3-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
-### 01.05 Operating System Access Control
+### 11190.01t1Organizational.3-01.t 01.05 Operating System Access Control
**ID**: 11190.01t1Organizational.3-01.t **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) | |[Manage the input, output, processing, and storage of data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe603da3a-8af7-4f8a-94cb-1bcc0e0333d2) |CMA_0369 - Manage the input, output, processing, and storage of data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0369.json) |
-### 09.10 Monitoring
+### 1120.09ab3System.9-09.ab 09.10 Monitoring
**ID**: 1120.09ab3System.9-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
-### 01.04 Network Access Control
+### 1121.01j3Organizational.2-01.j 01.04 Network Access Control
**ID**: 1121.01j3Organizational.2-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.02 Authorized Access to Information Systems
+### 11219.01b1Organizational.10-01.b 01.02 Authorized Access to Information Systems
**ID**: 11219.01b1Organizational.10-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 01.05 Operating System Access Control
+### 1122.01q1System.1-01.q 01.05 Operating System Access Control
**ID**: 1122.01q1System.1-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and authenticate non-organizational users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1379836-3492-6395-451d-2f5062e14136) |CMA_C1346 - Identify and authenticate non-organizational users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1346.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.02 Authorized Access to Information Systems
+### 11220.01b1System.10-01.b 01.02 Authorized Access to Information Systems
**ID**: 11220.01b1System.10-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) | |[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) |
-### 01.05 Operating System Access Control
+### 1123.01q1System.2-01.q 01.05 Operating System Access Control
**ID**: 1123.01q1System.2-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
-### 01.05 Operating System Access Control
+### 1124.01q1System.34-01.q 01.05 Operating System Access Control
**ID**: 1124.01q1System.34-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define and enforce conditions for shared and group accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7eb1d0b-6d4f-2d59-1591-7563e11a9313) |CMA_0117 - Define and enforce conditions for shared and group accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0117.json) | |[Reissue authenticators for changed groups and accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f204e72-1896-3bf8-75c9-9128b8683a36) |CMA_0426 - Reissue authenticators for changed groups and accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0426.json) |
-### 01.05 Operating System Access Control
+### 1125.01q2System.1-01.q 01.05 Operating System Access Control
**ID**: 1125.01q2System.1-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Enforce user uniqueness](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe336d5f4-4d8f-0059-759c-ae10f63d1747) |CMA_0250 - Enforce user uniqueness |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0250.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.05 Operating System Access Control
+### 1127.01q2System.3-01.q 01.05 Operating System Access Control
**ID**: 1127.01q2System.3-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Audit Windows machines missing any of specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group does not contain one or more members that are listed in the policy parameter. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToInclude_AINE.json) | |[Distribute authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098dcde7-016a-06c3-0985-0daaf3301d3a) |CMA_0184 - Distribute authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0184.json) |
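Because the Administrators-group definition above is parameterized, an assignment has to supply the member list it should look for. The following is a hedged sketch only: the parameter name `MembersToInclude` and the sample account are assumptions for illustration, so inspect the definition's parameters before assigning it.

```bash
# Sketch: inspect the definition's parameters, then assign it with a member list.
# 'MembersToInclude' and the account value are assumed for illustration only.
az policy definition show --name '30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7' --query parameters

az policy assignment create \
  --name 'audit-local-admins' \
  --policy '30f71ea1-ac77-4f26-9fc5-2d926bbd4ba7' \
  --params '{"MembersToInclude": {"value": "contoso\\breakglassadmin"}}' \
  --scope '/subscriptions/<subscription-id>'
```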
-### 01.05 Operating System Access Control
+### 1128.01q2System.5-01.q 01.05 Operating System Access Control
**ID**: 1128.01q2System.5-01.q **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Enforce rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509552f5-6528-3540-7959-fbeae4832533) |CMA_0248 - Enforce rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0248.json) | |[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |
-### 01.06 Application and Information Access Control
+### 1129.01v1System.12-01.v 01.06 Application and Information Access Control
**ID**: 1129.01v1System.12-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 01.06 Application and Information Access Control
+### 1130.01v2System.1-01.v 01.06 Application and Information Access Control
**ID**: 1130.01v2System.1-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish conditions for role membership](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97cfd944-6f0c-7db2-3796-8e890ef70819) |CMA_0269 - Establish conditions for role membership |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0269.json) | |[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
-### 01.06 Application and Information Access Control
+### 1131.01v2System.2-01.v 01.06 Application and Information Access Control
**ID**: 1131.01v2System.2-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and manage downstream information exchanges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7fddb0e-3f44-8635-2b35-dc6b8e740b7c) |CMA_0298 - Identify and manage downstream information exchanges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0298.json) | |[Information flow control using security policy filters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13ef3484-3a51-785a-9c96-500f21f84edd) |CMA_C1029 - Information flow control using security policy filters |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1029.json) |
-### 01.06 Application and Information Access Control
+### 1132.01v2System.3-01.v 01.06 Application and Information Access Control
**ID**: 1132.01v2System.3-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) | |[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
-### 01.06 Application and Information Access Control
+### 1133.01v2System.4-01.v 01.06 Application and Information Access Control
**ID**: 1133.01v2System.4-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Identify actions allowed without authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92a7591f-73b3-1173-a09c-a08882d84c70) |CMA_0295 - Identify actions allowed without authentication |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0295.json) |
-### 01.06 Application and Information Access Control
+### 1134.01v3System.1-01.v 01.06 Application and Information Access Control
**ID**: 1134.01v3System.1-01.v **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Limit privileges to make changes in production environment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2af551d5-1775-326a-0589-590bfb7e9eb2) |CMA_C1206 - Limit privileges to make changes in production environment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1206.json) | |[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
-### 02.04 Termination or Change of Employment
+### 1135.02i1Organizational.1234-02.i 02.04 Termination or Change of Employment
**ID**: 1135.02i1Organizational.1234-02.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) | |[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) |
-### 02.04 Termination or Change of Employment
+### 1136.02i2Organizational.1-02.i 02.04 Termination or Change of Employment
**ID**: 1136.02i2Organizational.1-02.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect against and prevent data theft from departing employees](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F80a97208-264e-79da-0cc7-4fca179a0c9c) |CMA_0398 - Protect against and prevent data theft from departing employees |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0398.json) | |[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
-### 06.01 Compliance with Legal Requirements
+### 1137.06e1Organizational.1-06.e 06.01 Compliance with Legal Requirements
**ID**: 1137.06e1Organizational.1-06.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 01.02 Authorized Access to Information Systems
+### 1139.01b1System.68-01.b 01.02 Authorized Access to Information Systems
**ID**: 1139.01b1System.68-01.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Reissue authenticators for changed groups and accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f204e72-1896-3bf8-75c9-9128b8683a36) |CMA_0426 - Reissue authenticators for changed groups and accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0426.json) | |[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
-### 01.02 Authorized Access to Information Systems
+### 1143.01c1System.123-01.c 01.02 Authorized Access to Information Systems
**ID**: 1143.01c1System.123-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
-### 01.02 Authorized Access to Information Systems
+### 1144.01c1System.4-01.c 01.02 Authorized Access to Information Systems
**ID**: 1144.01c1System.4-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
-### 01.02 Authorized Access to Information Systems
+### 1145.01c2System.1-01.c 01.02 Authorized Access to Information Systems
**ID**: 1145.01c2System.1-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 01.02 Authorized Access to Information Systems
+### 1146.01c2System.23-01.c 01.02 Authorized Access to Information Systems
**ID**: 1146.01c2System.23-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) | |[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
-### 01.02 Authorized Access to Information Systems
+### 1147.01c2System.456-01.c 01.02 Authorized Access to Information Systems
**ID**: 1147.01c2System.456-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
-### 01.02 Authorized Access to Information Systems
+### 1148.01c2System.78-01.c 01.02 Authorized Access to Information Systems
**ID**: 1148.01c2System.78-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) | |[Windows machines should meet requirements for 'Security Options - Accounts'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fee984370-154a-4ee8-9726-19d900e56fc0) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Accounts' for limiting local account use of blank passwords and guest account status. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsAccounts_AINE.json) |
-### 01.02 Authorized Access to Information Systems
+### 1150.01c2System.10-01.c 01.02 Authorized Access to Information Systems
**ID**: 1150.01c2System.10-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Information flow control using security policy filters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13ef3484-3a51-785a-9c96-500f21f84edd) |CMA_C1029 - Information flow control using security policy filters |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1029.json) | |[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
-### 01.02 Authorized Access to Information Systems
+### 1151.01c3System.1-01.c 01.02 Authorized Access to Information Systems
**ID**: 1151.01c3System.1-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 01.02 Authorized Access to Information Systems
+### 1152.01c3System.2-01.c 01.02 Authorized Access to Information Systems
**ID**: 1152.01c3System.2-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 01.02 Authorized Access to Information Systems
+### 1153.01c3System.35-01.c 01.02 Authorized Access to Information Systems
**ID**: 1153.01c3System.35-01.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
-### 01.02 Authorized Access to Information Systems
+### 1166.01e1System.12-01.e 01.02 Authorized Access to Information Systems
**ID**: 1166.01e1System.12-01.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review account provisioning logs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa830fe9e-08c9-a4fb-420c-6f6bf1702395) |CMA_0460 - Review account provisioning logs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0460.json) | |[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) |
-### 01.02 Authorized Access to Information Systems
+### 1167.01e2System.1-01.e 01.02 Authorized Access to Information Systems
**ID**: 1167.01e2System.1-01.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Assign system identifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff29b17a4-0df2-8a50-058a-8570f9979d28) |CMA_0018 - Assign system identifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0018.json) | |[Identify status of individual users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca748dfe-3e28-1d18-4221-89aea30aa0a5) |CMA_C1316 - Identify status of individual users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1316.json) |
-### 01.02 Authorized Access to Information Systems
+### 1168.01e2System.2-01.e 01.02 Authorized Access to Information Systems
**ID**: 1168.01e2System.2-01.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Reassign or remove user privileges as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7805a343-275c-41be-9d62-7215b96212d8) |CMA_C1040 - Reassign or remove user privileges as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1040.json) | |[Review user privileges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff96d2186-79df-262d-3f76-f371e3b71798) |CMA_C1039 - Review user privileges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1039.json) |
-### 01.04 Network Access Control
+### 1175.01j1Organizational.8-01.j 01.04 Network Access Control
**ID**: 1175.01j1Organizational.8-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.04 Network Access Control
+### 1178.01j2Organizational.7-01.j 01.04 Network Access Control
**ID**: 1178.01j2Organizational.7-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require use of individual authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08ad71d0-52be-6503-4908-e015460a16ae) |CMA_C1305 - Require use of individual authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1305.json) | |[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
-### 01.04 Network Access Control
+### 1179.01j3Organizational.1-01.j 01.04 Network Access Control
**ID**: 1179.01j3Organizational.1-01.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) | |[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |
-### 01.04 Network Access Control
+### 1192.01l1Organizational.1-01.l 01.04 Network Access Control
**ID**: 1192.01l1Organizational.1-01.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
-### 01.04 Network Access Control
+### 1193.01l2Organizational.13-01.l 01.04 Network Access Control
**ID**: 1193.01l2Organizational.13-01.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) | |[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
-### 01.04 Network Access Control
+### 1194.01l2Organizational.2-01.l 01.04 Network Access Control
**ID**: 1194.01l2Organizational.2-01.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
-### 01.04 Network Access Control
+### 1195.01l3Organizational.1-01.l 01.04 Network Access Control
**ID**: 1195.01l3Organizational.1-01.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
-### 01.04 Network Access Control
+### 1197.01l3Organizational.3-01.l 01.04 Network Access Control
**ID**: 1197.01l3Organizational.3-01.l **Ownership**: Shared
This built-in initiative is deployed as part of the
## 12 Audit Logging & Monitoring
-### 06.01 Compliance with Legal Requirements
+### 1201.06e1Organizational.2-06.e 06.01 Compliance with Legal Requirements
**ID**: 1201.06e1Organizational.2-06.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 09.10 Monitoring
+### 1202.09aa1System.1-09.aa 09.10 Monitoring
**ID**: 1202.09aa1System.1-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update the events defined in AU-02](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa930f477-9dcb-2113-8aa7-45bb6fc90861) |CMA_C1106 - Review and update the events defined in AU-02 |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1106.json) | |[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
-### 09.10 Monitoring
+### 1203.09aa1System.2-09.aa 09.10 Monitoring
**ID**: 1203.09aa1System.2-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) | |[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
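For the "resource logs should be enabled" audits in this family, remediation usually means adding a diagnostic setting to the resource. A hedged sketch for a Logic App follows; the resource and workspace IDs are placeholders, and the `WorkflowRuntime` category is an assumption, so list the categories your resource actually exposes first.

```bash
# Sketch: list the available log categories for a Logic App, then send them to a
# Log Analytics workspace. IDs are placeholders; 'WorkflowRuntime' is assumed.
az monitor diagnostic-settings categories list --resource "$LOGIC_APP_ID"

az monitor diagnostic-settings create \
  --name 'send-to-law' \
  --resource "$LOGIC_APP_ID" \
  --workspace "$LOG_ANALYTICS_WORKSPACE_ID" \
  --logs '[{"category": "WorkflowRuntime", "enabled": true}]'
```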
-### 09.10 Monitoring
+### 1204.09aa1System.3-09.aa 09.10 Monitoring
**ID**: 1204.09aa1System.3-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor account activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7b28ba4f-0a87-46ac-62e1-46b7c09202a8) |CMA_0377 - Monitor account activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0377.json) | |[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
-### 09.10 Monitoring
+### 1205.09aa2System.1-09.aa 09.10 Monitoring
**ID**: 1205.09aa2System.1-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide capability to process customer-controlled audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F21633c09-804e-7fcd-78e3-635c6bfe2be7) |CMA_C1126 - Provide capability to process customer-controlled audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1126.json) | |[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
-### 09.10 Monitoring
+### 1206.09aa2System.23-09.aa 09.10 Monitoring
**ID**: 1206.09aa2System.23-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 09.10 Monitoring
+### 1207.09aa2System.4-09.aa 09.10 Monitoring
**ID**: 1207.09aa2System.4-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) | |[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
-### 09.10 Monitoring
+### 1208.09aa3System.1-09.aa 09.10 Monitoring
**ID**: 1208.09aa3System.1-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) | |[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
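Once an audit definition such as the Service Bus one above is assigned, its evaluation results can be summarized per subscription. This is a sketch under the assumption that compliance data already exists for the scope; the subscription ID is a placeholder.

```bash
# Sketch: summarize compliance results for the "Resource logs in Service Bus
# should be enabled" definition referenced above. Subscription ID is a placeholder.
az policy state summarize \
  --policy-definition 'f8d36e2f-389b-4ee4-898d-21aeb69a0f45' \
  --subscription '<subscription-id>'
```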
-### 09.10 Monitoring
+### 1209.09aa3System.2-09.aa 09.10 Monitoring
**ID**: 1209.09aa3System.2-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) | |[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
-### 09.10 Monitoring
+### 1210.09aa3System.3-09.aa 09.10 Monitoring
**ID**: 1210.09aa3System.3-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | |[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) | |[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) | |[Use system clocks for audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee4c7eb-480a-0007-77ff-4ba370776266) |CMA_0535 - Use system clocks for audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0535.json) |
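The "Audit diagnostic setting for selected resource types" definition listed above takes the resource types to evaluate as a parameter. A minimal assignment sketch with the Azure CLI follows; the assignment name, subscription scope, and resource-type list are placeholders, and the `listOfResourceTypes` parameter name should be verified against the definition JSON linked in the table.

```bash
# Sketch: assign the diagnostic-setting audit policy at subscription scope.
# Placeholder values: subscription ID and the resource types to audit.
az policy assignment create \
  --name audit-diag-settings \
  --display-name "Audit diagnostic setting for selected resource types" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
  --policy 7f89b1eb-583c-429a-8828-af049802c1d9 \
  --params '{"listOfResourceTypes":{"value":["Microsoft.KeyVault/vaults","Microsoft.ServiceBus/namespaces"]}}'
```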
-### 09.10 Monitoring
+### 12100.09ab2System.15-09.ab 09.10 Monitoring
**ID**: 12100.09ab2System.15-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
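The Log Analytics policies in this group only audit; remediation means installing the agent extension yourself or through a deploy-if-not-exists assignment. A hedged sketch for a single Linux VM follows; the resource group, VM name, workspace GUID, and key are placeholders.

```bash
# Sketch: install the Log Analytics (OMS) agent extension on a Linux VM.
# Placeholders: rg-demo, vm-demo, workspace GUID and key.
az vm extension set \
  --resource-group rg-demo \
  --vm-name vm-demo \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"<workspace-guid>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'
```

For the scale-set policy in the next control, `az vmss extension set` takes the same extension name and publisher.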
-### 09.10 Monitoring
+### 12101.09ab1Organizational.3-09.ab 09.10 Monitoring
**ID**: 12101.09ab1Organizational.3-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) | |[Update information security policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5226dee6-3420-711b-4709-8e675ebd828f) |CMA_0518 - Update information security policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0518.json) |
-### 09.10 Monitoring
+### 12102.09ab1Organizational.4-09.ab 09.10 Monitoring
**ID**: 12102.09ab1Organizational.4-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Select additional testing for security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff78fc35e-1268-0bca-a798-afcba9d2330a) |CMA_C1149 - Select additional testing for security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1149.json) | |[Update POA&M items](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc057769-01d9-95ad-a36f-1e62a7f9540b) |CMA_C1157 - Update POA&M items |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1157.json) |
-### 09.10 Monitoring
+### 12103.09ab1Organizational.5-09.ab 09.10 Monitoring
**ID**: 12103.09ab1Organizational.5-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) | |[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) |
-### 09.10 Monitoring
+### 1211.09aa3System.4-09.aa 09.10 Monitoring
**ID**: 1211.09aa3System.4-09.aa **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 09.10 Monitoring
+### 1212.09ab1System.1-09.ab 09.10 Monitoring
**ID**: 1212.09ab1System.1-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obtain legal opinion for monitoring system activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9af7f88-686a-5a8b-704b-eafdab278977) |CMA_C1688 - Obtain legal opinion for monitoring system activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1688.json) | |[Provide monitoring information as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fc1f0da-0050-19bb-3d75-81ae15940df6) |CMA_C1689 - Provide monitoring information as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1689.json) |
-### 09.10 Monitoring
+### 1213.09ab2System.128-09.ab 09.10 Monitoring
**ID**: 1213.09ab2System.128-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) | |[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |
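The auto-provisioning policy above checks a subscription-level Security Center setting rather than individual machines. A minimal sketch of turning the setting on with the Azure CLI is below; it assumes the `az security auto-provisioning-setting` command group available in current CLI releases.

```bash
# Sketch: enable automatic deployment of the Log Analytics agent
# for the current subscription's Security Center configuration.
az security auto-provisioning-setting update \
  --name "default" \
  --auto-provision "On"

# Verify the current value.
az security auto-provisioning-setting show --name "default" --query autoProvision -o tsv
```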
-### 09.10 Monitoring
+### 1214.09ab2System.3456-09.ab 09.10 Monitoring
**ID**: 1214.09ab2System.3456-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 09.10 Monitoring
+### 1215.09ab2System.7-09.ab 09.10 Monitoring
**ID**: 1215.09ab2System.7-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide capability to process customer-controlled audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F21633c09-804e-7fcd-78e3-635c6bfe2be7) |CMA_C1126 - Provide capability to process customer-controlled audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1126.json) | |[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
-### 09.10 Monitoring
+### 1216.09ab3System.12-09.ab 09.10 Monitoring
**ID**: 1216.09ab3System.12-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) | |[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) |
-### 09.10 Monitoring
+### 1217.09ab3System.3-09.ab 09.10 Monitoring
**ID**: 1217.09ab3System.3-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
-### 09.10 Monitoring
+### 1218.09ab3System.47-09.ab 09.10 Monitoring
**ID**: 1218.09ab3System.47-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) | |[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) |
-### 09.10 Monitoring
+### 1219.09ab3System.10-09.ab 09.10 Monitoring
**ID**: 1219.09ab3System.10-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide audit review, analysis, and reporting capability](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F44f8a42d-739f-8030-89a8-4c2d5b3f6af3) |CMA_C1124 - Provide audit review, analysis, and reporting capability |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1124.json) | |[Provide capability to process customer-controlled audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F21633c09-804e-7fcd-78e3-635c6bfe2be7) |CMA_C1126 - Provide capability to process customer-controlled audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1126.json) |
-### 09.10 Monitoring
+### 1220.09ab3System.56-09.ab 09.10 Monitoring
**ID**: 1220.09ab3System.56-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | |[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 09.10 Monitoring
+### 1222.09ab3System.8-09.ab 09.10 Monitoring
**ID**: 1222.09ab3System.8-09.ab **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide capability to process customer-controlled audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F21633c09-804e-7fcd-78e3-635c6bfe2be7) |CMA_C1126 - Provide capability to process customer-controlled audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1126.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
-### 09.01 Documented Operating Procedures
+### 1229.09c1Organizational.1-09.c 09.01 Documented Operating Procedures
**ID**: 1229.09c1Organizational.1-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
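The AKS policy above audits whether role-based access control was enabled when the cluster was created; Kubernetes RBAC generally can't be changed on an existing cluster. A quick compliance check with the Azure CLI might look like the sketch below; the resource group and cluster names are placeholders and the `enableRbac` output property is assumed from the managed-cluster resource shape.

```bash
# Sketch: check whether RBAC is enabled on an existing AKS cluster.
# Placeholders: rg-demo, aks-demo.
az aks show --resource-group rg-demo --name aks-demo --query enableRbac -o tsv

# New clusters get RBAC by default; creating one without --disable-rbac
# keeps the policy compliant.
az aks create --resource-group rg-demo --name aks-demo --node-count 1 --generate-ssh-keys
```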
-### 09.01 Documented Operating Procedures
+### 1230.09c2Organizational.1-09.c 09.01 Documented Operating Procedures
**ID**: 1230.09c2Organizational.1-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.01 Documented Operating Procedures
+### 1231.09c2Organizational.23-09.c 09.01 Documented Operating Procedures
**ID**: 1231.09c2Organizational.23-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.01 Documented Operating Procedures
+### 1232.09c3Organizational.12-09.c 09.01 Documented Operating Procedures
**ID**: 1232.09c3Organizational.12-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) | |[Windows machines should meet requirements for 'User Rights Assignment'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe068b215-0026-4354-b347-8fb2766f73a2) |Windows machines should have the specified Group Policy settings in the category 'User Rights Assignment' for allowing log on locally, RDP, access from the network, and many other user activities. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_UserRightsAssignment_AINE.json) |
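Guest Configuration policies such as the 'User Rights Assignment' definition above report per-machine compliance once the prerequisites from [https://aka.ms/gcpol](https://aka.ms/gcpol) are deployed to the assignment scope. A hedged sketch of querying that compliance state with the Azure CLI follows; the OData filter uses the definition GUID from the table and the output shape is illustrative.

```bash
# Sketch: list resources evaluated by the 'User Rights Assignment' policy
# and their compliance state.
az policy state list \
  --filter "policyDefinitionName eq 'e068b215-0026-4354-b347-8fb2766f73a2'" \
  --query "[].{resource:resourceId, state:complianceState}" \
  --output table
```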
-### 09.01 Documented Operating Procedures
+### 1233.09c3Organizational.3-09.c 09.01 Documented Operating Procedures
**ID**: 1233.09c3Organizational.3-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.10 Monitoring
+### 1270.09ad1System.12-09.ad 09.10 Monitoring
**ID**: 1270.09ad1System.12-09.ad **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 09.10 Monitoring
+### 1271.09ad1System.1-09.ad 09.10 Monitoring
**ID**: 1271.09ad1System.1-09.ad **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.10 Monitoring
+### 1271.09ad2System.1 09.10 Monitoring
**ID**: 1271.09ad2System.1 **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.01 Documented Operating Procedures
+### 1276.09c2Organizational.2-09.c 09.01 Documented Operating Procedures
**ID**: 1276.09c2Organizational.2-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 09.01 Documented Operating Procedures
+### 1277.09c2Organizational.4-09.c 09.01 Documented Operating Procedures
**ID**: 1277.09c2Organizational.4-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) | |[Windows machines should meet requirements for 'Security Options - User Account Control'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F492a29ed-d143-4f03-b6a4-705ce081b463) |Windows machines should have the specified Group Policy settings in the category 'Security Options - User Account Control' for mode for admins, behavior of elevation prompt, and virtualizing file and registry write failures. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsUserAccountControl_AINE.json) |
-### 09.01 Documented Operating Procedures
+### 1278.09c2Organizational.56-09.c 09.01 Documented Operating Procedures
**ID**: 1278.09c2Organizational.56-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 09.01 Documented Operating Procedures
+### 1279.09c3Organizational.4-09.c 09.01 Documented Operating Procedures
**ID**: 1279.09c3Organizational.4-09.c **Ownership**: Shared
This built-in initiative is deployed as part of the
## 13 Education, Training and Awareness
-### 02.03 During Employment
+### 1301.02e1Organizational.12-02.e 02.03 During Employment
**ID**: 1301.02e1Organizational.12-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.03 During Employment
+### 1302.02e2Organizational.134-02.e 02.03 During Employment
**ID**: 1302.02e2Organizational.134-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.03 During Employment
+### 1303.02e2Organizational.2-02.e 02.03 During Employment
**ID**: 1303.02e2Organizational.2-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.03 During Employment
+### 1304.02e3Organizational.1-02.e 02.03 During Employment
**ID**: 1304.02e3Organizational.1-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to provide training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F676c3c35-3c36-612c-9523-36d266a65000) |CMA_C1611 - Require developers to provide training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1611.json) | |[Train personnel on disclosure of nonpublic information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97f0d974-1486-01e2-2088-b888f46c0589) |CMA_C1084 - Train personnel on disclosure of nonpublic information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1084.json) |
-### 02.03 During Employment
+### 1305.02e3Organizational.23-02.e 02.03 During Employment
**ID**: 1305.02e3Organizational.23-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor security and privacy training completion](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82bd024a-5c99-05d6-96ff-01f539676a1a) |CMA_0379 - Monitor security and privacy training completion |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0379.json) | |[Retain training records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3153d9c0-2584-14d3-362d-578b01358aeb) |CMA_0456 - Retain training records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0456.json) |
-### 06.01 Compliance with Legal Requirements
+### 1306.06e1Organizational.5-06.e 06.01 Compliance with Legal Requirements
**ID**: 1306.06e1Organizational.5-06.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 07.01 Responsibility for Assets
+### 1307.07c1Organizational.124-07.c 07.01 Responsibility for Assets
**ID**: 1307.07c1Organizational.124-07.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 09.04 Protection Against Malicious and Mobile Code
+### 1308.09j1Organizational.5-09.j 09.04 Protection Against Malicious and Mobile Code
**ID**: 1308.09j1Organizational.5-09.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 01.07 Mobile Computing and Teleworking
+### 1309.01x1System.36-01.x 01.07 Mobile Computing and Teleworking
**ID**: 1309.01x1System.36-01.x **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) | |[Provide updated security awareness training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd136ae80-54dd-321c-98b4-17acf4af2169) |CMA_C1090 - Provide updated security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1090.json) |
-### 01.07 Mobile Computing and Teleworking
+### 1310.01y1Organizational.9-01.y 01.07 Mobile Computing and Teleworking
**ID**: 1310.01y1Organizational.9-01.y **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) | |[Provide updated security awareness training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd136ae80-54dd-321c-98b4-17acf4af2169) |CMA_C1090 - Provide updated security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1090.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1311.12c2Organizational.3-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1311.12c2Organizational.3-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide contingency training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde936662-13dc-204c-75ec-1af80f994088) |CMA_0412 - Provide contingency training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0412.json) | |[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) |
-### 02.03 During Employment
+### 1313.02e1Organizational.3-02.e 02.03 During Employment
**ID**: 1313.02e1Organizational.3-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | |[Provide periodic role-based security training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ac8621d-9acd-55bf-9f99-ee4212cc3d85) |CMA_C1095 - Provide periodic role-based security training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1095.json) |
-### 02.03 During Employment
+### 1314.02e2Organizational.5-02.e 02.03 During Employment
**ID**: 1314.02e2Organizational.5-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) | |[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |
-### 02.03 During Employment
+### 1315.02e2Organizational.67-02.e 02.03 During Employment
**ID**: 1315.02e2Organizational.67-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training before providing access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b05dca2-25ec-9335-495c-29155f785082) |CMA_0418 - Provide security training before providing access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0418.json) | |[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) |
-### 07.01 Responsibility for Assets
+### 1324.07c1Organizational.3-07.c 07.01 Responsibility for Assets
**ID**: 1324.07c1Organizational.3-07.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 09.08 Exchange of Information
+### 1325.09s1Organizational.3-09.s 09.08 Exchange of Information
**ID**: 1325.09s1Organizational.3-09.s **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update rules of behavior and access agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6610f662-37e9-2f71-65be-502bdc2f554d) |CMA_0521 - Update rules of behavior and access agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0521.json) | |[Update rules of behavior and access agreements every 3 years](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ad83b58-2042-085d-08f0-13e946f26f89) |CMA_0522 - Update rules of behavior and access agreements every 3 years |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0522.json) |
-### 02.03 During Employment
+### 1327.02e2Organizational.8-02.e 02.03 During Employment
**ID**: 1327.02e2Organizational.8-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) | |[Provide updated security awareness training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd136ae80-54dd-321c-98b4-17acf4af2169) |CMA_C1090 - Provide updated security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1090.json) |
-### 02.03 During Employment
+### 1331.02e3Organizational.4-02.e 02.03 During Employment
**ID**: 1331.02e3Organizational.4-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 02.03 During Employment
+### 1334.02e2Organizational.12-02.e 02.03 During Employment
**ID**: 1334.02e2Organizational.12-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) | |[Provide updated security awareness training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd136ae80-54dd-321c-98b4-17acf4af2169) |CMA_C1090 - Provide updated security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1090.json) |
-### 02.03 During Employment
+### 1336.02e1Organizational.5-02.e 02.03 During Employment
**ID**: 1336.02e1Organizational.5-02.e **Ownership**: Shared
This built-in initiative is deployed as part of the
## 14 Third Party Assurance
-### 05.02 External Parties
+### 1404.05i2Organizational.1-05.i 05.02 External Parties
**ID**: 1404.05i2Organizational.1-05.i **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Review and update system and services acquisition policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff49925aa-9b11-76ae-10e2-6e973cc60f37) |CMA_C1560 - Review and update system and services acquisition policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1560.json) |
-### 05.02 External Parties
+### 1406.05k1Organizational.110-05.k 05.02 External Parties
**ID**: 1406.05k1Organizational.110-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | |[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 05.02 External Parties
+### 1407.05k2Organizational.1-05.k 05.02 External Parties
**ID**: 1407.05k2Organizational.1-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require notification of third-party personnel transfer or termination](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafd5d60a-48d2-8073-1ec2-6687e22f2ddd) |CMA_C1532 - Require notification of third-party personnel transfer or termination |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1532.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 09.02 Control Third Party Service Delivery
+### 1408.09e1System.1-09.e 09.02 Control Third Party Service Delivery
**ID**: 1408.09e1System.1-09.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | |[Update interconnection security agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd48a6f19-a284-6fc6-0623-3367a74d3f50) |CMA_0519 - Update interconnection security agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0519.json) |
-### 09.02 Control Third Party Service Delivery
+### 1409.09e2System.1-09.e 09.02 Control Third Party Service Delivery
**ID**: 1409.09e2System.1-09.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor third-party provider compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8ded0c6-a668-9371-6bb6-661d58787198) |CMA_C1533 - Monitor third-party provider compliance |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1533.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 09.02 Control Third Party Service Delivery
+### 1410.09e2System.23-09.e 09.02 Control Third Party Service Delivery
**ID**: 1410.09e2System.23-09.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | |[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 09.02 Control Third Party Service Delivery
+### 1411.09f1System.1-09.f 09.02 Control Third Party Service Delivery
**ID**: 1411.09f1System.1-09.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 10.05 Security In Development and Support Processes
+### 1416.10l1Organizational.1-10.l 10.05 Security In Development and Support Processes
**ID**: 1416.10l1Organizational.1-10.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | |[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 10.05 Security In Development and Support Processes
+### 1417.10l2Organizational.1-10.l 10.05 Security In Development and Support Processes
**ID**: 1417.10l2Organizational.1-10.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) | |[Require developers to produce evidence of security assessment plan execution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8a63511-66f1-503f-196d-d6217ee0823a) |CMA_C1602 - Require developers to produce evidence of security assessment plan execution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1602.json) |
-### 05.02 External Parties
+### 1419.05j1Organizational.12-05.j 05.02 External Parties
**ID**: 1419.05j1Organizational.12-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | |[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 05.02 External Parties
+### 1421.05j2Organizational.12-05.j 05.02 External Parties
**ID**: 1421.05j2Organizational.12-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | |[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 05.02 External Parties
+### 1422.05j2Organizational.3-05.j 05.02 External Parties
**ID**: 1422.05j2Organizational.3-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 05.02 External Parties
+### 1423.05j2Organizational.4-05.j 05.02 External Parties
**ID**: 1423.05j2Organizational.4-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | |[Verify security controls for external information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdc7ec756-221c-33c8-0afe-c48e10e42321) |CMA_0541 - Verify security controls for external information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0541.json) |
-### 05.02 External Parties
+### 1424.05j2Organizational.5-05.j 05.02 External Parties
**ID**: 1424.05j2Organizational.5-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) | |[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |
-### 05.02 External Parties
+### 1429.05k1Organizational.34-05.k 05.02 External Parties
**ID**: 1429.05k1Organizational.34-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor third-party provider compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8ded0c6-a668-9371-6bb6-661d58787198) |CMA_C1533 - Monitor third-party provider compliance |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1533.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.02 External Parties
+### 1430.05k1Organizational.56-05.k 05.02 External Parties
**ID**: 1430.05k1Organizational.56-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish third-party personnel security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3881168c-5d38-6f04-61cc-b5d87b2c4c58) |CMA_C1529 - Establish third-party personnel security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1529.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.02 External Parties
+### 1431.05k1Organizational.7-05.k 05.02 External Parties
**ID**: 1431.05k1Organizational.7-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require notification of third-party personnel transfer or termination](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafd5d60a-48d2-8073-1ec2-6687e22f2ddd) |CMA_C1532 - Require notification of third-party personnel transfer or termination |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1532.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.02 External Parties
+### 1432.05k1Organizational.89-05.k 05.02 External Parties
**ID**: 1432.05k1Organizational.89-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Monitor third-party provider compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8ded0c6-a668-9371-6bb6-661d58787198) |CMA_C1533 - Monitor third-party provider compliance |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1533.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 09.02 Control Third Party Service Delivery
+### 1438.09e2System.4-09.e 09.02 Control Third Party Service Delivery
**ID**: 1438.09e2System.4-09.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 05.02 External Parties
+### 1450.05i2Organizational.2-05.i 05.02 External Parties
**ID**: 1450.05i2Organizational.2-05.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 05.02 External Parties
+### 1451.05iCSPOrganizational.2-05.i 05.02 External Parties
**ID**: 1451.05iCSPOrganizational.2-05.i **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | |[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
-### 05.02 External Parties
+### 1452.05kCSPOrganizational.1-05.k 05.02 External Parties
**ID**: 1452.05kCSPOrganizational.1-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish third-party personnel security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3881168c-5d38-6f04-61cc-b5d87b2c4c58) |CMA_C1529 - Establish third-party personnel security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1529.json) | |[Require third-party providers to comply with personnel security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8c31e15-642d-600f-78ab-bad47a5787e6) |CMA_C1530 - Require third-party providers to comply with personnel security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1530.json) |
-### 05.02 External Parties
+### 1453.05kCSPOrganizational.2-05.k 05.02 External Parties
**ID**: 1453.05kCSPOrganizational.2-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 05.02 External Parties
+### 1454.05kCSPOrganizational.3-05.k 05.02 External Parties
**ID**: 1454.05kCSPOrganizational.3-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 05.02 External Parties
+### 1455.05kCSPOrganizational.4-05.k 05.02 External Parties
**ID**: 1455.05kCSPOrganizational.4-05.k **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | |[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |
-### 09.02 Control Third Party Service Delivery
+### 1464.09e2Organizational.5-09.e 09.02 Control Third Party Service Delivery
**ID**: 1464.09e2Organizational.5-09.e **Ownership**: Shared
This built-in initiative is deployed as part of the
## 15 Incident Management
-### 02.03 During Employment
+### 1501.02f1Organizational.123-02.f 02.03 During Employment
**ID**: 1501.02f1Organizational.123-02.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Notify personnel upon sanctions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6228396e-2ace-7ca5-3247-45767dbf52f4) |CMA_0380 - Notify personnel upon sanctions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0380.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 02.03 During Employment
+### 1503.02f2Organizational.12-02.f 02.03 During Employment
**ID**: 1503.02f2Organizational.12-02.f **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Notify personnel upon sanctions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6228396e-2ace-7ca5-3247-45767dbf52f4) |CMA_0380 - Notify personnel upon sanctions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0380.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 06.01 Compliance with Legal Requirements
+### 1504.06e1Organizational.34-06.e 06.01 Compliance with Legal Requirements
**ID**: 1504.06e1Organizational.34-06.e **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) | |[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1505.11a1Organizational.13-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1505.11a1Organizational.13-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1506.11a1Organizational.2-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1506.11a1Organizational.2-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage contacts for authorities and special interest groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5269d7e4-3768-501d-7e46-66c56c15622c) |CMA_0359 - Manage contacts for authorities and special interest groups |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0359.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1507.11a1Organizational.4-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1507.11a1Organizational.4-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement Incident handling capability](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98e33927-8d7f-6d5f-44f5-2469b40b7215) |CMA_C1367 - Implement Incident handling capability |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1367.json) | |[Provide security awareness training for insider threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b8b05ec-3d21-215e-5d98-0f7cf0998202) |CMA_0417 - Provide security awareness training for insider threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0417.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1508.11a2Organizational.1-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1508.11a2Organizational.1-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1509.11a2Organizational.236-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1509.11a2Organizational.236-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1510.11a2Organizational.47-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1510.11a2Organizational.47-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1511.11a2Organizational.5-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1511.11a2Organizational.5-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1512.11a2Organizational.8-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1512.11a2Organizational.8-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) | |[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1515.11a3Organizational.3-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1515.11a3Organizational.3-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Maintain incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37546841-8ea1-5be0-214d-8ac599588332) |CMA_0352 - Maintain incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0352.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1516.11c1Organizational.12-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1516.11c1Organizational.12-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1517.11c1Organizational.3-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1517.11c1Organizational.3-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Maintain incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37546841-8ea1-5be0-214d-8ac599588332) |CMA_0352 - Maintain incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0352.json) | |[Protect incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2401b496-7f23-79b2-9f80-89bb5abf3d4a) |CMA_0405 - Protect incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0405.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1518.11c2Organizational.13-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1518.11c2Organizational.13-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
||||| |[Review and update incident response policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb28c8687-4bbd-8614-0b96-cdffa1ac6d9c) |CMA_C1352 - Review and update incident response policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1352.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1519.11c2Organizational.2-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1519.11c2Organizational.2-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) | |[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1520.11c2Organizational.4-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1520.11c2Organizational.4-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2401b496-7f23-79b2-9f80-89bb5abf3d4a) |CMA_0405 - Protect incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0405.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1521.11c2Organizational.56-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1521.11c2Organizational.56-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1522.11c3Organizational.13-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1522.11c3Organizational.13-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement incident handling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F433de59e-7a53-a766-02c2-f80f8421469a) |CMA_0318 - Implement incident handling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0318.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1523.11c3Organizational.24-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1523.11c3Organizational.24-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify incident response personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037c0089-6606-2dab-49ad-437005b5035f) |CMA_0301 - Identify incident response personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0301.json) | |[Use automated mechanisms for security alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8689b2e-4308-a58b-a0b4-6f3343a000df) |CMA_C1707 - Use automated mechanisms for security alerts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1707.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1524.11a1Organizational.5-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1524.11a1Organizational.5-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obtain legal opinion for monitoring system activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9af7f88-686a-5a8b-704b-eafdab278977) |CMA_C1688 - Obtain legal opinion for monitoring system activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1688.json) | |[Require external service providers to comply with security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4e45863d-9ea9-32b4-a204-2680bc6007a6) |CMA_C1586 - Require external service providers to comply with security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1586.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1525.11a1Organizational.6-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1525.11a1Organizational.6-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Notify personnel upon sanctions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6228396e-2ace-7ca5-3247-45767dbf52f4) |CMA_0380 - Notify personnel upon sanctions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0380.json) | |[Provide security awareness training for insider threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b8b05ec-3d21-215e-5d98-0f7cf0998202) |CMA_0417 - Provide security awareness training for insider threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0417.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1560.11d1Organizational.1-11.d 11.02 Management of Information Security Incidents and Improvements
**ID**: 1560.11d1Organizational.1-11.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2401b496-7f23-79b2-9f80-89bb5abf3d4a) |CMA_0405 - Protect incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0405.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1561.11d2Organizational.14-11.d 11.02 Management of Information Security Incidents and Improvements
**ID**: 1561.11d2Organizational.14-11.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update incident response policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb28c8687-4bbd-8614-0b96-cdffa1ac6d9c) |CMA_C1352 - Review and update incident response policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1352.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1562.11d2Organizational.2-11.d 11.02 Management of Information Security Incidents and Improvements
**ID**: 1562.11d2Organizational.2-11.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | |[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1563.11d2Organizational.3-11.d 11.02 Management of Information Security Incidents and Improvements
**ID**: 1563.11d2Organizational.3-11.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Maintain incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37546841-8ea1-5be0-214d-8ac599588332) |CMA_0352 - Maintain incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0352.json) | |[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 11.01 Reporting Information Security Incidents and Weaknesses
+### 1577.11aCSPOrganizational.1-11.a 11.01 Reporting Information Security Incidents and Weaknesses
**ID**: 1577.11aCSPOrganizational.1-11.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Ensure external providers consistently meet interests of the customers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3eabed6d-1912-2d3c-858b-f438d08d0412) |CMA_C1592 - Ensure external providers consistently meet interests of the customers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1592.json) |
|[Identify incident response personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037c0089-6606-2dab-49ad-437005b5035f) |CMA_0301 - Identify incident response personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0301.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1587.11c2Organizational.10-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1587.11c2Organizational.10-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Protect incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2401b496-7f23-79b2-9f80-89bb5abf3d4a) |CMA_0405 - Protect incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0405.json) |
|[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) |
-### 11.02 Management of Information Security Incidents and Improvements
+### 1589.11c1Organizational.5-11.c 11.02 Management of Information Security Incidents and Improvements
**ID**: 1589.11c1Organizational.5-11.c **Ownership**: Shared
This built-in initiative is deployed as part of the
## 16 Business Continuity & Disaster Recovery
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1601.12c1Organizational.1238-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1601.12c1Organizational.1238-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Test the business continuity and disaster recovery plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58a51cde-008b-1a5d-61b5-d95849770677) |CMA_0509 - Test the business continuity and disaster recovery plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0509.json) |
|[Update contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14a4fd0a-9100-1e12-1362-792014a28155) |CMA_C1248 - Update contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1248.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1602.12c1Organizational.4567-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1602.12c1Organizational.4567-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop and document a business continuity and disaster recovery plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd6cbcba-4a2d-507c-53e3-296b5c238a8e) |CMA_0146 - Develop and document a business continuity and disaster recovery plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0146.json) |
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1603.12c1Organizational.9-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1603.12c1Organizational.9-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Distribute policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feff6e4a5-3efe-94dd-2ed1-25d56a019a82) |CMA_0185 - Distribute policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0185.json) |
|[Review and update contingency planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c60c37-65b0-2d72-6c3c-af66036203ae) |CMA_C1243 - Review and update contingency planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1243.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1604.12c2Organizational.16789-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1604.12c2Organizational.16789-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish an alternate processing site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf5ff768-a34b-720e-1224-e6b3214f3ba6) |CMA_0262 - Establish an alternate processing site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0262.json) |
|[Establish requirements for internet service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f2e834d-7e40-a4d5-a216-e49b16955ccf) |CMA_0278 - Establish requirements for internet service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0278.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1607.12c2Organizational.4-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1607.12c2Organizational.4-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
|[Review and update contingency planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c60c37-65b0-2d72-6c3c-af66036203ae) |CMA_C1243 - Review and update contingency planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1243.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1608.12c2Organizational.5-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1608.12c2Organizational.5-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
|[Transfer backup information to an alternate storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7bdb79ea-16b8-453e-4ca4-ad5b16012414) |CMA_C1294 - Transfer backup information to an alternate storage site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1294.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1609.12c3Organizational.12-12.c 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1609.12c3Organizational.12-12.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Establish requirements for internet service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f2e834d-7e40-a4d5-a216-e49b16955ccf) |CMA_0278 - Establish requirements for internet service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0278.json) |
-### 09.05 Information Back-Up
+### 1616.09l1Organizational.16-09.l 09.05 Information Back-Up
**ID**: 1616.09l1Organizational.16-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) |
|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
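The 'Long-term geo-redundant backup' definition above audits SQL databases that have no long-term retention policy configured. A minimal sketch of setting one with the Azure CLI (resource names are placeholders and the retention periods are illustrative, not prescribed by the policy):

```bash
# Sketch only: configure a long-term backup retention policy on an Azure SQL database
# so the AuditIfNotExists check above finds a policy in place. Names and periods are placeholders.
az sql db ltr-policy set \
  --resource-group <resource-group> \
  --server <sql-server-name> \
  --name <database-name> \
  --weekly-retention P4W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1
```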
-### 09.05 Information Back-Up
+### 1617.09l1Organizational.23-09.l 09.05 Information Back-Up
**ID**: 1617.09l1Organizational.23-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
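As the description above notes, geo-redundant backup storage for Azure Database for MySQL (and likewise for the PostgreSQL and MariaDB definitions later in this table family) can only be chosen when the server is created. A hedged sketch using the Azure CLI single-server command, where all names, the SKU, and the credentials are placeholder assumptions:

```bash
# Sketch only: create a MySQL single server with geo-redundant backup storage enabled at creation time.
# Resource names, region, SKU, and credentials below are placeholders, not recommendations.
az mysql server create \
  --resource-group <resource-group> \
  --name <server-name> \
  --location <region> \
  --admin-user <admin-user> \
  --admin-password '<admin-password>' \
  --sku-name GP_Gen5_2 \
  --geo-redundant-backup Enabled
```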
-### 09.05 Information Back-Up
+### 1618.09l1Organizational.45-09.l 09.05 Information Back-Up
**ID**: 1618.09l1Organizational.45-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
-### 09.05 Information Back-Up
+### 1619.09l1Organizational.7-09.l 09.05 Information Back-Up
**ID**: 1619.09l1Organizational.7-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish requirements for internet service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f2e834d-7e40-a4d5-a216-e49b16955ccf) |CMA_0278 - Establish requirements for internet service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0278.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
-### 09.05 Information Back-Up
+### 1620.09l1Organizational.8-09.l 09.05 Information Back-Up
**ID**: 1620.09l1Organizational.8-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
|[Transfer backup information to an alternate storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7bdb79ea-16b8-453e-4ca4-ad5b16012414) |CMA_C1294 - Transfer backup information to an alternate storage site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1294.json) |
-### 09.05 Information Back-Up
+### 1621.09l2Organizational.1-09.l 09.05 Information Back-Up
**ID**: 1621.09l2Organizational.1-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
|[Maintain records of processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92ede480-154e-0e22-4dca-8b46a74a3a51) |CMA_0353 - Maintain records of processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0353.json) |
-### 09.05 Information Back-Up
+### 1622.09l2Organizational.23-09.l 09.05 Information Back-Up
**ID**: 1622.09l2Organizational.23-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify and mitigate potential issues at alternate storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13939f8c-4cd5-a6db-9af4-9dfec35e3722) |CMA_C1271 - Identify and mitigate potential issues at alternate storage site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1271.json) |
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
-### 09.05 Information Back-Up
+### 1623.09l2Organizational.4-09.l 09.05 Information Back-Up
**ID**: 1623.09l2Organizational.4-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish backup policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f23967c-a74b-9a09-9dc2-f566f61a87b9) |CMA_0268 - Establish backup policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0268.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-### 09.05 Information Back-Up
+### 1624.09l3Organizational.12-09.l 09.05 Information Back-Up
**ID**: 1624.09l3Organizational.12-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish backup policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f23967c-a74b-9a09-9dc2-f566f61a87b9) |CMA_0268 - Establish backup policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0268.json) |
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
-### 09.05 Information Back-Up
+### 1625.09l3Organizational.34-09.l 09.05 Information Back-Up
**ID**: 1625.09l3Organizational.34-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) |
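The 'Azure Backup should be enabled for Virtual Machines' audit above flags VMs that aren't protected by a Recovery Services vault. One way to bring a VM into compliance, sketched with the Azure CLI and placeholder names (the vault, region, and VM are assumptions, and `DefaultPolicy` is the policy created with a new vault):

```bash
# Sketch only: create a Recovery Services vault and protect an existing VM with its default backup policy.
# Resource names and region are placeholders.
az backup vault create \
  --resource-group <resource-group> \
  --name <vault-name> \
  --location <region>

az backup protection enable-for-vm \
  --resource-group <resource-group> \
  --vault-name <vault-name> \
  --vm <vm-name> \
  --policy-name DefaultPolicy
```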
-### 09.05 Information Back-Up
+### 1626.09l3Organizational.5-09.l 09.05 Information Back-Up
**ID**: 1626.09l3Organizational.5-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) |
|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
-### 09.05 Information Back-Up
+### 1627.09l3Organizational.6-09.l 09.05 Information Back-Up
**ID**: 1627.09l3Organizational.6-09.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1634.12b1Organizational.1-12.b 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1634.12b1Organizational.1-12.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop contingency planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75b42dcf-7840-1271-260b-852273d7906e) |CMA_0156 - Develop contingency planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0156.json) |
|[Distribute policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feff6e4a5-3efe-94dd-2ed1-25d56a019a82) |CMA_0185 - Distribute policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0185.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1635.12b1Organizational.2-12.b 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1635.12b1Organizational.2-12.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) |
|[Plan for resumption of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ded6497-815d-6506-242b-e043e0273928) |CMA_C1253 - Plan for resumption of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1253.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1636.12b2Organizational.1-12.b 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1636.12b2Organizational.1-12.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
|[Perform a business impact assessment and application criticality assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb8841d4-9d13-7292-1d06-ba4d68384681) |CMA_0386 - Perform a business impact assessment and application criticality assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0386.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1637.12b2Organizational.2-12.b 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1637.12b2Organizational.2-12.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Update contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14a4fd0a-9100-1e12-1362-792014a28155) |CMA_C1248 - Update contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1248.json) |
|[Windows machines should meet requirements for 'Security Options - Recovery console'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff71be03e-e25b-4d0f-b8bc-9b3e309b66c0) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Recovery console' for allowing floppy copy and access to all drives and folders. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsRecoveryconsole_AINE.json) |
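The Windows 'Security Options - Recovery console' definition above is an AuditIfNotExists Guest Configuration policy; as its description says, the Guest Configuration prerequisites must already be deployed to the assignment scope (see [https://aka.ms/gcpol](https://aka.ms/gcpol)). A minimal sketch of assigning it to a resource group by the GUID from its link, with placeholder subscription and resource group names:

```bash
# Sketch only: assign the Guest Configuration audit definition referenced above to a resource group.
# Subscription and resource group are placeholders; Guest Configuration prerequisites are assumed to be deployed.
az policy assignment create \
  --name "audit-recovery-console-settings" \
  --policy "f71be03e-e25b-4d0f-b8bc-9b3e309b66c0" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```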
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1638.12b2Organizational.345-12.b 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1638.12b2Organizational.345-12.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) |
|[Plan for resumption of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ded6497-815d-6506-242b-e043e0273928) |CMA_C1253 - Plan for resumption of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1253.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1666.12d1Organizational.1235-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1666.12d1Organizational.1235-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
|[Plan for resumption of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ded6497-815d-6506-242b-e043e0273928) |CMA_C1253 - Plan for resumption of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1253.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1667.12d1Organizational.4-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1667.12d1Organizational.4-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop and document a business continuity and disaster recovery plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd6cbcba-4a2d-507c-53e3-296b5c238a8e) |CMA_0146 - Develop and document a business continuity and disaster recovery plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0146.json) |
|[Update contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14a4fd0a-9100-1e12-1362-792014a28155) |CMA_C1248 - Update contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1248.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1668.12d1Organizational.67-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1668.12d1Organizational.67-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish an alternate processing site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf5ff768-a34b-720e-1224-e6b3214f3ba6) |CMA_0262 - Establish an alternate processing site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0262.json) |
|[Review and update contingency planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c60c37-65b0-2d72-6c3c-af66036203ae) |CMA_C1243 - Review and update contingency planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1243.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1669.12d1Organizational.8-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1669.12d1Organizational.8-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Test the business continuity and disaster recovery plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F58a51cde-008b-1a5d-61b5-d95849770677) |CMA_0509 - Test the business continuity and disaster recovery plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0509.json) |
|[Update contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14a4fd0a-9100-1e12-1362-792014a28155) |CMA_C1248 - Update contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1248.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1670.12d2Organizational.1-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1670.12d2Organizational.1-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1671.12d2Organizational.2-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1671.12d2Organizational.2-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53fc1282-0ee3-2764-1319-e20143bb0ea5) |CMA_C1247 - Review contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1247.json) |
|[Update contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14a4fd0a-9100-1e12-1362-792014a28155) |CMA_C1248 - Update contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1248.json) |
-### 12.01 Information Security Aspects of Business Continuity Management
+### 1672.12d2Organizational.3-12.d 12.01 Information Security Aspects of Business Continuity Management
**ID**: 1672.12d2Organizational.3-12.d **Ownership**: Shared
This built-in initiative is deployed as part of the
## 17 Risk Management
-### 03.01 Risk Management Program
+### 1704.03b1Organizational.12-03.b 03.01 Risk Management Program
**ID**: 1704.03b1Organizational.12-03.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct Risk Assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F677e1da4-00c3-287a-563d-f4a1cf9b99a0) |CMA_C1543 - Conduct Risk Assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1543.json) |
|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) |
-### 03.01 Risk Management Program
+### 1705.03b2Organizational.12-03.b 03.01 Risk Management Program
**ID**: 1705.03b2Organizational.12-03.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct Risk Assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F677e1da4-00c3-287a-563d-f4a1cf9b99a0) |CMA_C1543 - Conduct Risk Assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1543.json) |
|[Conduct risk assessment and distribute its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd7c1ecc3-2980-a079-1569-91aec8ac4a77) |CMA_C1544 - Conduct risk assessment and distribute its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1544.json) |
-### 03.01 Risk Management Program
+### 1707.03c1Organizational.12-03.c 03.01 Risk Management Program
**ID**: 1707.03c1Organizational.12-03.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Develop POA&M](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F477bd136-7dd9-55f8-48ac-bae096b86a07) |CMA_C1156 - Develop POA&M |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1156.json) |
-### 03.01 Risk Management Program
+### 1708.03c2Organizational.12-03.c 03.01 Risk Management Program
**ID**: 1708.03c2Organizational.12-03.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop POA&M](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F477bd136-7dd9-55f8-48ac-bae096b86a07) |CMA_C1156 - Develop POA&M |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1156.json) |
|[Update POA&M items](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc057769-01d9-95ad-a36f-1e62a7f9540b) |CMA_C1157 - Update POA&M items |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1157.json) |
-### 10.01 Security Requirements of Information Systems
+### 17100.10a3Organizational.5 10.01 Security Requirements of Information Systems
**ID**: 17100.10a3Organizational.5 **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document security functional requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F57927290-8000-59bf-3776-90c468ac5b4b) |CMA_0201 - Document security functional requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0201.json) |
|[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 10.01 Security Requirements of Information Systems
+### 17101.10a3Organizational.6-10.a 10.01 Security Requirements of Information Systems
**ID**: 17101.10a3Organizational.6-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to implement only approved changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F085467a6-9679-5c65-584a-f55acefd0d43) |CMA_C1596 - Require developers to implement only approved changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1596.json) |
|[Require developers to manage change integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb33d61c1-7463-7025-0ec0-a47585b59147) |CMA_C1595 - Require developers to manage change integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1595.json) |
-### 10.01 Security Requirements of Information Systems
+### 17120.10a3Organizational.5-10.a 10.01 Security Requirements of Information Systems
**ID**: 17120.10a3Organizational.5-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
|[Obtain approvals for acquisitions and outsourcing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92b94485-1c49-3350-9ada-dffe94f08e87) |CMA_C1590 - Obtain approvals for acquisitions and outsourcing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1590.json) |
-### 03.01 Risk Management Program
+### 17126.03c1System.6-03.c 03.01 Risk Management Program
**ID**: 17126.03c1System.6-03.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) |
|[Implement the risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6fe3856-4635-36b6-983c-070da12a953b) |CMA_C1744 - Implement the risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1744.json) |
-### 03.01 Risk Management Program
+### 1713.03c1Organizational.3-03.c 03.01 Risk Management Program
**ID**: 1713.03c1Organizational.3-03.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Train staff on PII sharing and its consequences](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8019d788-713d-90a1-5570-dac5052f517d) |CMA_C1871 - Train staff on PII sharing and its consequences |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1871.json) |
|[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 03.01 Risk Management Program
+### 1733.03d1Organizational.1-03.d 03.01 Risk Management Program
**ID**: 1733.03d1Organizational.1-03.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct risk assessment and document its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1dbd51c2-2bd1-5e26-75ba-ed075d8f0d68) |CMA_C1542 - Conduct risk assessment and document its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1542.json) |
|[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) |
-### 03.01 Risk Management Program
+### 1734.03d2Organizational.1-03.d 03.01 Risk Management Program
**ID**: 1734.03d2Organizational.1-03.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a privacy impact assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd18af1ac-0086-4762-6dc8-87cdded90e39) |CMA_0387 - Perform a privacy impact assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0387.json) |
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) |
-### 03.01 Risk Management Program
+### 1735.03d2Organizational.23-03.d 03.01 Risk Management Program
**ID**: 1735.03d2Organizational.23-03.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform a privacy impact assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd18af1ac-0086-4762-6dc8-87cdded90e39) |CMA_0387 - Perform a privacy impact assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0387.json) |
|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) |
-### 03.01 Risk Management Program
+### 1736.03d2Organizational.4-03.d 03.01 Risk Management Program
**ID**: 1736.03d2Organizational.4-03.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Conduct risk assessment and document its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1dbd51c2-2bd1-5e26-75ba-ed075d8f0d68) |CMA_C1542 - Conduct risk assessment and document its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1542.json) |
-### 03.01 Risk Management Program
+### 1737.03d2Organizational.5-03.d 03.01 Risk Management Program
**ID**: 1737.03d2Organizational.5-03.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Conduct risk assessment and document its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1dbd51c2-2bd1-5e26-75ba-ed075d8f0d68) |CMA_C1542 - Conduct risk assessment and document its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1542.json) |
|[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) |
-### 10.01 Security Requirements of Information Systems
+### 1780.10a1Organizational.1-10.a 10.01 Security Requirements of Information Systems
**ID**: 1780.10a1Organizational.1-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop access control policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59f7feff-02aa-6539-2cf7-bea75b762140) |CMA_0144 - Develop access control policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0144.json) |
|[Govern policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a2a03a4-9992-5788-5953-d8f6615306de) |CMA_0292 - Govern policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0292.json) |
-### 10.01 Security Requirements of Information Systems
+### 1781.10a1Organizational.23-10.a 10.01 Security Requirements of Information Systems
**ID**: 1781.10a1Organizational.23-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Develop SSP that meets criteria](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b957f60-54cd-5752-44d5-ff5a64366c93) |CMA_C1492 - Develop SSP that meets criteria |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1492.json) |
|[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) |
-### 10.01 Security Requirements of Information Systems
+### 1782.10a1Organizational.4-10.a 10.01 Security Requirements of Information Systems
**ID**: 1782.10a1Organizational.4-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish security requirements for the manufacturing of connected devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafbecd30-37ee-a27b-8e09-6ac49951a0ee) |CMA_0279 - Establish security requirements for the manufacturing of connected devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0279.json) |
|[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) |
-### 10.01 Security Requirements of Information Systems
+### 1783.10a1Organizational.56-10.a 10.01 Security Requirements of Information Systems
**ID**: 1783.10a1Organizational.56-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Document security functional requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F57927290-8000-59bf-3776-90c468ac5b4b) |CMA_0201 - Document security functional requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0201.json) |
|[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |
-### 10.01 Security Requirements of Information Systems
+### 1784.10a1Organizational.7-10.a 10.01 Security Requirements of Information Systems
**ID**: 1784.10a1Organizational.7-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Employ FIPS 201-approved technology for PIV](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8b333332-6efd-7c0d-5a9f-d1eb95105214) |CMA_C1579 - Employ FIPS 201-approved technology for PIV |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1579.json) |
-### 10.01 Security Requirements of Information Systems
+### 1785.10a1Organizational.8-10.a 10.01 Security Requirements of Information Systems
**ID**: 1785.10a1Organizational.8-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Separate user and information system management functionality](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8a703eb5-4e53-701b-67e4-05ba2f7930c8) |CMA_0493 - Separate user and information system management functionality |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0493.json) |
|[Use dedicated machines for administrative tasks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8972f60-8d77-1cb8-686f-9c9f4cdd8a59) |CMA_0527 - Use dedicated machines for administrative tasks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0527.json) |
-### 10.01 Security Requirements of Information Systems
+### 1786.10a1Organizational.9-10.a 10.01 Security Requirements of Information Systems
**ID**: 1786.10a1Organizational.9-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify individuals with security roles and responsibilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0dcbaf2f-075e-947b-8f4c-74ecc5cd302c) |CMA_C1566 - Identify individuals with security roles and responsibilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1566.json) |
|[Require developer to identify SDLC ports, protocols, and services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6da5cca-5795-60ff-49e1-4972567815fe) |CMA_C1578 - Require developer to identify SDLC ports, protocols, and services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1578.json) |
-### 10.01 Security Requirements of Information Systems
+### 1787.10a2Organizational.1-10.a 10.01 Security Requirements of Information Systems
**ID**: 1787.10a2Organizational.1-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Information security and personal data protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34738025-5925-51f9-1081-f2d0060133ed) |CMA_0332 - Information security and personal data protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0332.json) |
|[Perform a privacy impact assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd18af1ac-0086-4762-6dc8-87cdded90e39) |CMA_0387 - Perform a privacy impact assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0387.json) |
-### 10.01 Security Requirements of Information Systems
+### 1788.10a2Organizational.2-10.a 10.01 Security Requirements of Information Systems
**ID**: 1788.10a2Organizational.2-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to implement only approved changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F085467a6-9679-5c65-584a-f55acefd0d43) |CMA_C1596 - Require developers to implement only approved changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1596.json) |
|[Require developers to manage change integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb33d61c1-7463-7025-0ec0-a47585b59147) |CMA_C1595 - Require developers to manage change integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1595.json) |
-### 10.01 Security Requirements of Information Systems
+### 1789.10a2Organizational.3-10.a 10.01 Security Requirements of Information Systems
**ID**: 1789.10a2Organizational.3-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Identify individuals with security roles and responsibilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0dcbaf2f-075e-947b-8f4c-74ecc5cd302c) |CMA_C1566 - Identify individuals with security roles and responsibilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1566.json) |
|[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) |
-### 10.01 Security Requirements of Information Systems
+### 1790.10a2Organizational.45-10.a 10.01 Security Requirements of Information Systems
**ID**: 1790.10a2Organizational.45-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update the information security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fced291b8-1d3d-7e27-40cf-829e9dd523c8) |CMA_C1504 - Review and update the information security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1504.json) |
|[Review development process, standards and tools](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e876c5c-0f2a-8eb6-69f7-5f91e7918ed6) |CMA_C1610 - Review development process, standards and tools |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1610.json) |
-### 10.01 Security Requirements of Information Systems
+### 1791.10a2Organizational.6-10.a 10.01 Security Requirements of Information Systems
**ID**: 1791.10a2Organizational.6-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) |
|[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |
-### 10.01 Security Requirements of Information Systems
+### 1792.10a2Organizational.7814-10.a 10.01 Security Requirements of Information Systems
**ID**: 1792.10a2Organizational.7814-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement the risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6fe3856-4635-36b6-983c-070da12a953b) |CMA_C1744 - Implement the risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1744.json) |
|[Integrate risk management process into SDLC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F00f12b6f-10d7-8117-9577-0f2b76488385) |CMA_C1567 - Integrate risk management process into SDLC |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1567.json) |
-### 10.01 Security Requirements of Information Systems
+### 1793.10a2Organizational.91011-10.a 10.01 Security Requirements of Information Systems
**ID**: 1793.10a2Organizational.91011-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish security requirements for the manufacturing of connected devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafbecd30-37ee-a27b-8e09-6ac49951a0ee) |CMA_0279 - Establish security requirements for the manufacturing of connected devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0279.json) |
|[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) |
-### 10.01 Security Requirements of Information Systems
+### 1794.10a2Organizational.12-10.a 10.01 Security Requirements of Information Systems
**ID**: 1794.10a2Organizational.12-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Require developers to produce evidence of security assessment plan execution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8a63511-66f1-503f-196d-d6217ee0823a) |CMA_C1602 - Require developers to produce evidence of security assessment plan execution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1602.json) |
-### 10.01 Security Requirements of Information Systems
+### 1795.10a2Organizational.13-10.a 10.01 Security Requirements of Information Systems
**ID**: 1795.10a2Organizational.13-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to document approved changes and potential impact](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a868d0c-538f-968b-0191-bddb44da5b75) |CMA_C1597 - Require developers to document approved changes and potential impact |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1597.json) |
|[Require developers to produce evidence of security assessment plan execution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8a63511-66f1-503f-196d-d6217ee0823a) |CMA_C1602 - Require developers to produce evidence of security assessment plan execution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1602.json) |
-### 10.01 Security Requirements of Information Systems
+### 1796.10a2Organizational.15-10.a 10.01 Security Requirements of Information Systems
**ID**: 1796.10a2Organizational.15-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Employ independent assessors to conduct security control assessments](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb65c5d8e-9043-9612-2c17-65f231d763bb) |CMA_C1148 - Employ independent assessors to conduct security control assessments |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1148.json) |
|[Produce Security Assessment report](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70a7a065-a060-85f8-7863-eb7850ed2af9) |CMA_C1146 - Produce Security Assessment report |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1146.json) |
-### 10.01 Security Requirements of Information Systems
+### 1797.10a3Organizational.1-10.a 10.01 Security Requirements of Information Systems
**ID**: 1797.10a3Organizational.1-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to describe accurate security functionality](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3e37c891-840c-3eb4-78d2-e2e0bb5063e0) |CMA_C1613 - Require developers to describe accurate security functionality |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1613.json) |
|[Require developers to provide unified security protection approach](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a114735-a420-057d-a651-9a73cd0416ef) |CMA_C1614 - Require developers to provide unified security protection approach |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1614.json) |
-### 10.01 Security Requirements of Information Systems
+### 1798.10a3Organizational.2-10.a 10.01 Security Requirements of Information Systems
**ID**: 1798.10a3Organizational.2-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Require developers to build security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff131c8c5-a54a-4888-1efc-158928924bc1) |CMA_C1612 - Require developers to build security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1612.json) |
|[Review and update the information security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fced291b8-1d3d-7e27-40cf-829e9dd523c8) |CMA_C1504 - Review and update the information security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1504.json) |
-### 10.01 Security Requirements of Information Systems
+### 1799.10a3Organizational.34-10.a 10.01 Security Requirements of Information Systems
**ID**: 1799.10a3Organizational.34-10.a **Ownership**: Shared
This built-in initiative is deployed as part of the
## 18 Physical & Environmental Security
-### 08.01 Secure Areas
+### 1801.08b1Organizational.124-08.b 08.01 Secure Areas
**ID**: 1801.08b1Organizational.124-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
|[Monitor third-party provider compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8ded0c6-a668-9371-6bb6-661d58787198) |CMA_C1533 - Monitor third-party provider compliance |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1533.json) |
-### 08.01 Secure Areas
+### 1802.08b1Organizational.3-08.b 08.01 Secure Areas
**ID**: 1802.08b1Organizational.3-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.01 Secure Areas
+### 1803.08b1Organizational.5-08.b 08.01 Secure Areas
**ID**: 1803.08b1Organizational.5-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control maintenance and repair activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6ad009f-5c24-1dc0-a25e-74b60e4da45f) |CMA_0080 - Control maintenance and repair activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0080.json) |
|[Produce complete records of remote maintenance activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F74041cfe-3f87-1d17-79ec-34ca5f895542) |CMA_C1403 - Produce complete records of remote maintenance activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1403.json) |
-### 08.01 Secure Areas
+### 1804.08b2Organizational.12-08.b 08.01 Secure Areas
**ID**: 1804.08b2Organizational.12-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
-### 08.01 Secure Areas
+### 1805.08b2Organizational.3-08.b 08.01 Secure Areas
**ID**: 1805.08b2Organizational.3-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.01 Secure Areas
+### 1806.08b2Organizational.4-08.b 08.01 Secure Areas
**ID**: 1806.08b2Organizational.4-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.01 Secure Areas
+### 1807.08b2Organizational.56-08.b 08.01 Secure Areas
**ID**: 1807.08b2Organizational.56-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.01 Secure Areas
+### 1808.08b2Organizational.7-08.b 08.01 Secure Areas
**ID**: 1808.08b2Organizational.7-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) |
|[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) |
-### 08.01 Secure Areas
+### 1810.08b3Organizational.2-08.b 08.01 Secure Areas
**ID**: 1810.08b3Organizational.2-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.02 Equipment Security
+### 18108.08j1Organizational.1-08.j 08.02 Equipment Security
**ID**: 18108.08j1Organizational.1-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update media protection policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4e19d22-8c0e-7cad-3219-c84c62dc250f) |CMA_C1427 - Review and update media protection policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1427.json) |
|[Review and update system maintenance policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2067b904-9552-3259-0cdd-84468e284b7c) |CMA_C1395 - Review and update system maintenance policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1395.json) |
-### 08.02 Equipment Security
+### 18109.08j1Organizational.4-08.j 08.02 Equipment Security
**ID**: 18109.08j1Organizational.4-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Maintain list of authorized remote maintenance personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ce91e4e-6dab-3c46-011a-aa14ae1561bf) |CMA_C1420 - Maintain list of authorized remote maintenance personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1420.json) |
|[Manage maintenance personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb273f1e3-79e7-13ee-5b5d-dca6c66c3d5d) |CMA_C1421 - Manage maintenance personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1421.json) |
-### 08.01 Secure Areas
+### 1811.08b3Organizational.3-08.b 08.01 Secure Areas
**ID**: 1811.08b3Organizational.3-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) |
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
-### 08.02 Equipment Security
+### 18110.08j1Organizational.5-08.j 08.02 Equipment Security
**ID**: 18110.08j1Organizational.5-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
|[Perform all non-local maintenance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bac5fb7-7735-357b-767d-02264bfe5c3b) |CMA_C1417 - Perform all non-local maintenance |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1417.json) |
-### 08.02 Equipment Security
+### 18111.08j1Organizational.6-08.j 08.02 Equipment Security
**ID**: 18111.08j1Organizational.6-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Provide timely maintenance support](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb598832-4bcc-658d-4381-3ecbe17b9866) |CMA_C1425 - Provide timely maintenance support |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1425.json) |
-### 08.02 Equipment Security
+### 18112.08j3Organizational.4-08.j 08.02 Equipment Security
**ID**: 18112.08j3Organizational.4-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review and update information integrity policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6bededc0-2985-54d5-4158-eb8bad8070a0) |CMA_C1667 - Review and update information integrity policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1667.json) |
|[Review and update system maintenance policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2067b904-9552-3259-0cdd-84468e284b7c) |CMA_C1395 - Review and update system maintenance policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1395.json) |
-### 08.01 Secure Areas
+### 1812.08b3Organizational.46-08.b 08.01 Secure Areas
**ID**: 1812.08b3Organizational.46-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Install an alarm system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa0ddd99-43eb-302d-3f8f-42b499182960) |CMA_0338 - Install an alarm system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0338.json) |
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
-### 08.02 Equipment Security
+### 18127.08l1Organizational.3-08.l 08.02 Equipment Security
**ID**: 18127.08l1Organizational.3-08.l **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Employ a media sanitization mechanism](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaaae23f-92c9-4460-51cf-913feaea4d52) |CMA_0208 - Employ a media sanitization mechanism |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0208.json) |
-### 08.01 Secure Areas
+### 1813.08b3Organizational.56-08.b 08.01 Secure Areas
**ID**: 1813.08b3Organizational.56-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Install an alarm system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa0ddd99-43eb-302d-3f8f-42b499182960) |CMA_0338 - Install an alarm system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0338.json) |
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
-### 09.07 Media Handling
+### 18130.09p1Organizational.24-09.p 09.07 Media Handling
**ID**: 18130.09p1Organizational.24-09.p **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Employ a media sanitization mechanism](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaaae23f-92c9-4460-51cf-913feaea4d52) |CMA_0208 - Employ a media sanitization mechanism |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0208.json) |
-### 08.01 Secure Areas
+### 1814.08d1Organizational.12-08.d 08.01 Secure Areas
**ID**: 1814.08d1Organizational.12-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 08.01 Secure Areas
+### 18145.08b3Organizational.7-08.b 08.01 Secure Areas
**ID**: 18145.08b3Organizational.7-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Install an alarm system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa0ddd99-43eb-302d-3f8f-42b499182960) |CMA_0338 - Install an alarm system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0338.json) |
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
-### 08.01 Secure Areas
+### 18146.08b3Organizational.8-08.b 08.01 Secure Areas
**ID**: 18146.08b3Organizational.8-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Install an alarm system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa0ddd99-43eb-302d-3f8f-42b499182960) |CMA_0338 - Install an alarm system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0338.json) |
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
-### 08.01 Secure Areas
+### 1815.08d2Organizational.123-08.d 08.01 Secure Areas
**ID**: 1815.08d2Organizational.123-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 08.01 Secure Areas
+### 1816.08d2Organizational.4-08.d 08.01 Secure Areas
**ID**: 1816.08d2Organizational.4-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
|[Manage the transportation of assets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ac81669-00e2-9790-8648-71bc11bc91eb) |CMA_0370 - Manage the transportation of assets |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0370.json) |
-### 08.01 Secure Areas
+### 1817.08d3Organizational.12-08.d 08.01 Secure Areas
**ID**: 1817.08d3Organizational.12-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
-### 08.01 Secure Areas
+### 1818.08d3Organizational.3-08.d 08.01 Secure Areas
**ID**: 1818.08d3Organizational.3-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 08.02 Equipment Security
+### 1819.08j1Organizational.23-08.j 08.02 Equipment Security
**ID**: 1819.08j1Organizational.23-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
|[Produce complete records of remote maintenance activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F74041cfe-3f87-1d17-79ec-34ca5f895542) |CMA_C1403 - Produce complete records of remote maintenance activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1403.json) |
-### 08.02 Equipment Security
+### 1820.08j2Organizational.1-08.j 08.02 Equipment Security
**ID**: 1820.08j2Organizational.1-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control maintenance and repair activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6ad009f-5c24-1dc0-a25e-74b60e4da45f) |CMA_0080 - Control maintenance and repair activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0080.json) |
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
-### 08.02 Equipment Security
+### 1821.08j2Organizational.3-08.j 08.02 Equipment Security
**ID**: 1821.08j2Organizational.3-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
|[Produce complete records of remote maintenance activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F74041cfe-3f87-1d17-79ec-34ca5f895542) |CMA_C1403 - Produce complete records of remote maintenance activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1403.json) |
-### 08.02 Equipment Security
+### 1822.08j2Organizational.2-08.j 08.02 Equipment Security
**ID**: 1822.08j2Organizational.2-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
|[Produce complete records of remote maintenance activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F74041cfe-3f87-1d17-79ec-34ca5f895542) |CMA_C1403 - Produce complete records of remote maintenance activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1403.json) |
-### 08.02 Equipment Security
+### 1823.08j3Organizational.12-08.j 08.02 Equipment Security
**ID**: 1823.08j3Organizational.12-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control maintenance and repair activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6ad009f-5c24-1dc0-a25e-74b60e4da45f) |CMA_0080 - Control maintenance and repair activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0080.json) |
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
-### 08.02 Equipment Security
+### 1824.08j3Organizational.3-08.j 08.02 Equipment Security
**ID**: 1824.08j3Organizational.3-08.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Control maintenance and repair activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6ad009f-5c24-1dc0-a25e-74b60e4da45f) |CMA_0080 - Control maintenance and repair activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0080.json) |
|[Manage nonlocal maintenance and diagnostic activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fb1cb0e-1936-6f32-42fd-89970b535855) |CMA_0364 - Manage nonlocal maintenance and diagnostic activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0364.json) |
-### 09.07 Media Handling
+### 1826.09p1Organizational.1-09.p 09.07 Media Handling
**ID**: 1826.09p1Organizational.1-09.p **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform disposition review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5a4be05-3997-1731-3260-98be653610f6) |CMA_0391 - Perform disposition review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0391.json) |
|[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 08.01 Secure Areas
+### 1844.08b1Organizational.6-08.b 08.01 Secure Areas
**ID**: 1844.08b1Organizational.6-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
-### 08.01 Secure Areas
+### 1845.08b1Organizational.7-08.b 08.01 Secure Areas
**ID**: 1845.08b1Organizational.7-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) |
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
-### 08.01 Secure Areas
+### 1846.08b2Organizational.8-08.b 08.01 Secure Areas
**ID**: 1846.08b2Organizational.8-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
-### 08.01 Secure Areas
+### 1847.08b2Organizational.910-08.b 08.01 Secure Areas
**ID**: 1847.08b2Organizational.910-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) |
-### 08.01 Secure Areas
+### 1848.08b2Organizational.11-08.b 08.01 Secure Areas
**ID**: 1848.08b2Organizational.11-08.b **Ownership**: Shared
This built-in initiative is deployed as part of the
|||||
|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
-### 08.01 Secure Areas
+### 1862.08d1Organizational.3-08.d 08.01 Secure Areas
**ID**: 1862.08d1Organizational.3-08.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement a penetration testing methodology](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc2eabc28-1e5c-78a2-a712-7cc176c44c07) |CMA_0306 - Implement a penetration testing methodology |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0306.json) |
|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) |
-### 08.01 Secure Areas
+### 1862.08d3Organizational.3 08.01 Secure Areas
**ID**: 1862.08d3Organizational.3 **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Implement a penetration testing methodology](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc2eabc28-1e5c-78a2-a712-7cc176c44c07) |CMA_0306 - Implement a penetration testing methodology |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0306.json) |
|[Review and update physical and environmental policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91cf132e-0c9f-37a8-a523-dc6a92cd2fb2) |CMA_C1446 - Review and update physical and environmental policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1446.json) |
-### 01.04 Network Access Control
+### 1892.01l1Organizational.1 01.04 Network Access Control
**ID**: 1892.01l1Organizational.1 **Ownership**: Shared
This built-in initiative is deployed as part of the
## 19 Data Protection & Privacy
-### 06.01 Compliance with Legal Requirements
+### 1901.06d1Organizational.1-06.d 06.01 Compliance with Legal Requirements
**ID**: 1901.06d1Organizational.1-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Establish a privacy program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F39eb03c1-97cc-11ab-0960-6209ed2869f7) |CMA_0257 - Establish a privacy program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0257.json) | |[Manage compliance activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4e400494-53a5-5147-6f4d-718b539c7394) |CMA_0358 - Manage compliance activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0358.json) |
-### 06.01 Compliance with Legal Requirements
+### 1902.06d1Organizational.2-06.d 06.01 Compliance with Legal Requirements
**ID**: 1902.06d1Organizational.2-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain accounting of disclosures of information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75b9db50-7906-2351-98ae-0458218609e5) |CMA_C1819 - Retain accounting of disclosures of information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1819.json) | |[Train staff on PII sharing and its consequences](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8019d788-713d-90a1-5570-dac5052f517d) |CMA_C1871 - Train staff on PII sharing and its consequences |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1871.json) |
-### 06.01 Compliance with Legal Requirements
+### 1903.06d1Organizational.3456711-06.d 06.01 Compliance with Legal Requirements
**ID**: 1903.06d1Organizational.3456711-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Notify users of system logon or access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffe2dff43-0a8c-95df-0432-cb1c794b17d0) |CMA_0382 - Notify users of system logon or access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0382.json) | |[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
-### 06.01 Compliance with Legal Requirements
+### 1904.06.d2Organizational.1-06.d 06.01 Compliance with Legal Requirements
**ID**: 1904.06.d2Organizational.1-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Perform disposition review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5a4be05-3997-1731-3260-98be653610f6) |CMA_0391 - Perform disposition review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0391.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 06.01 Compliance with Legal Requirements
+### 1906.06.c1Organizational.2-06.c 06.01 Compliance with Legal Requirements
**ID**: 1906.06.c1Organizational.2-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide privacy notice to the public and to individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5023a9e7-8e64-2db6-31dc-7bce27f796af) |CMA_C1861 - Provide privacy notice to the public and to individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1861.json) | |[Publish SORNs for systems containing PII](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F898a5781-2254-5a37-34c7-d78ea7c20d55) |CMA_C1862 - Publish SORNs for systems containing PII |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1862.json) |
-### 06.01 Compliance with Legal Requirements
+### 1907.06.c1Organizational.3-06.c 06.01 Compliance with Legal Requirements
**ID**: 1907.06.c1Organizational.3-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Provide formal notice to individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95eb7d09-9937-5df9-11d9-20317e3f60df) |CMA_C1864 - Provide formal notice to individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1864.json) | |[Publish SORNs for systems containing PII](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F898a5781-2254-5a37-34c7-d78ea7c20d55) |CMA_C1862 - Publish SORNs for systems containing PII |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1862.json) |
-### 06.01 Compliance with Legal Requirements
+### 1908.06.c1Organizational.4-06.c 06.01 Compliance with Legal Requirements
**ID**: 1908.06.c1Organizational.4-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) | |[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) |
-### 06.01 Compliance with Legal Requirements
+### 1911.06d1Organizational.13-06.d 06.01 Compliance with Legal Requirements
**ID**: 1911.06d1Organizational.13-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obtain consent prior to collection or processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F069101ac-4578-31da-0cd4-ff083edd3eb4) |CMA_0385 - Obtain consent prior to collection or processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0385.json) | |[Remove or redact any PII](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94c842e3-8098-38f9-6d3f-8872b790527d) |CMA_C1833 - Remove or redact any PII |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1833.json) |
-### 05.02 External Parties
+### 19134.05j1Organizational.5-05.j 05.02 External Parties
**ID**: 19134.05j1Organizational.5-05.j **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Train personnel on disclosure of nonpublic information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97f0d974-1486-01e2-2088-b888f46c0589) |CMA_C1084 - Train personnel on disclosure of nonpublic information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1084.json) | |[Update privacy plan, policies, and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F96333008-988d-4add-549b-92b3a8c42063) |CMA_C1807 - Update privacy plan, policies, and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1807.json) |
-### 06.01 Compliance with Legal Requirements
+### 19141.06c1Organizational.7-06.c 06.01 Compliance with Legal Requirements
**ID**: 19141.06c1Organizational.7-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) | |[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
-### 06.01 Compliance with Legal Requirements
+### 19142.06c1Organizational.8-06.c 06.01 Compliance with Legal Requirements
**ID**: 19142.06c1Organizational.8-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 06.01 Compliance with Legal Requirements
+### 19143.06c1Organizational.9-06.c 06.01 Compliance with Legal Requirements
**ID**: 19143.06c1Organizational.9-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Ensure security categorization is approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6c79c3e5-5f7b-a48a-5c7b-8c158bc01115) |CMA_C1540 - Ensure security categorization is approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1540.json) | |[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) |
-### 06.01 Compliance with Legal Requirements
+### 19144.06c2Organizational.1-06.c 06.01 Compliance with Legal Requirements
**ID**: 19144.06c2Organizational.1-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 06.01 Compliance with Legal Requirements
+### 19145.06c2Organizational.2-06.c 06.01 Compliance with Legal Requirements
**ID**: 19145.06c2Organizational.2-06.c **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) | |[Verify personal data is deleted at the end of processing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6b877a6-5d6d-1862-4b7f-3ccc30b25b63) |CMA_0540 - Verify personal data is deleted at the end of processing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0540.json) |
-### 06.01 Compliance with Legal Requirements
+### 19242.06d1Organizational.14-06.d 06.01 Compliance with Legal Requirements
**ID**: 19242.06d1Organizational.14-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Obtain consent prior to collection or processing of personal data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F069101ac-4578-31da-0cd4-ff083edd3eb4) |CMA_0385 - Obtain consent prior to collection or processing of personal data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0385.json) | |[Remove or redact any PII](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94c842e3-8098-38f9-6d3f-8872b790527d) |CMA_C1833 - Remove or redact any PII |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1833.json) |
-### 06.01 Compliance with Legal Requirements
+### 19243.06d1Organizational.15-06.d 06.01 Compliance with Legal Requirements
**ID**: 19243.06d1Organizational.15-06.d **Ownership**: Shared
This built-in initiative is deployed as part of the
|[Remove or redact any PII](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94c842e3-8098-38f9-6d3f-8872b790527d) |CMA_C1833 - Remove or redact any PII |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1833.json) | |[Restrict communications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5020f3f4-a579-2f28-72a8-283c5a0b15f9) |CMA_0449 - Restrict communications |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0449.json) |
-### 06.01 Compliance with Legal Requirements
+### 19245.06d2Organizational.2-06.d 06.01 Compliance with Legal Requirements
**ID**: 19245.06d2Organizational.2-06.d **Ownership**: Shared
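The HITRUST/HIPAA rows above map each control to CMA_* policies that are Manual and Disabled by default. As a minimal sketch of how to review that mapping before attesting the manual controls (the display-name filter is an assumption; this changelog doesn't give the initiative's name or ID), Azure CLI can locate the built-in initiative and list the definitions it references:

```bash
# Sketch: find the built-in HITRUST/HIPAA initiative by display name (filter is assumed),
# then list the policy definition IDs it references, including the CMA_* manual policies above.
initiative=$(az policy set-definition list \
  --query "[?contains(displayName, 'HITRUST')].name | [0]" --output tsv)
az policy set-definition show --name "$initiative" \
  --query "policyDefinitions[].policyDefinitionId" --output table
```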
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016
description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
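Only the display name and description of this DDoS definition changed; its definition ID (a7aca53f-2ed4-4466-a25e-0b45ade68efd) is unchanged, so existing assignments are unaffected. A hedged sketch of assigning it at subscription scope with Azure CLI (the assignment name and subscription placeholder are illustrative):

```bash
# Sketch: assign the built-in "Azure DDoS Protection Standard should be enabled" definition
# at subscription scope. Replace <subscription-id>; the assignment name is arbitrary.
az policy assignment create \
  --name "audit-ddos-protection" \
  --policy "a7aca53f-2ed4-4466-a25e-0b45ade68efd" \
  --scope "/subscriptions/<subscription-id>"
```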
### Boundary Protection (SC-7)
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
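The renamed 'Audit diagnostic setting for selected resource types' definition above (now 2.0.1) expects you to pass only resource types that support diagnostic settings. A small sketch of inspecting the definition's parameters before assigning it (the parameter names aren't listed in this changelog, so read them from the output rather than assuming them):

```bash
# Sketch: inspect the parameters of the built-in "Audit diagnostic setting for selected
# resource types" definition (7f89b1eb-583c-429a-8828-af049802c1d9) before assigning it.
az policy definition show \
  --name "7f89b1eb-583c-429a-8828-af049802c1d9" \
  --query "parameters" --output jsonc
```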
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013
description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) | |[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) | |[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | |[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) | |[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | |[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) | |[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Compile Audit records into system wide audit](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F214ea241-010d-8926-44cc-b90a96d52adc) |CMA_C1140 - Compile Audit records into system wide audit |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1140.json) | |[Dependency agent should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11ac78e3-31bc-4f0c-8434-37ab963cea07) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the agent is not installed. The list of OS images is updated over time as support is updated. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DependencyAgent_OSImage_Audit.json) |
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted
description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|||||
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
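With version 3.0.0 the description above now covers servers where vulnerability assessment isn't properly configured, not only servers missing recurring scans, so previously compliant servers may be flagged. A rough sketch of listing the resources this definition currently marks non-compliant (the OData filter fields are assumptions about the policy state schema):

```bash
# Sketch: list resources flagged as non-compliant by the "Vulnerability assessment should be
# enabled on your SQL servers" definition in the current subscription.
az policy state list \
  --filter "policyDefinitionName eq 'ef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9' and complianceState eq 'NonCompliant'" \
  --query "[].resourceId" --output table
```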
### 6.2.6 Resolving vulnerabilities
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
### 18.4.8 IDS/IPSs on gateways
This built-in initiative is deployed as part of the
|||||
|[All authorization rules except RootManageSharedAccessKey should be removed from Service Bus namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1817ec0-a368-432a-8057-8371e17ac6ee) |Service Bus clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. To align with the least privilege security model, you should create access policies at the entity level for queues and topics to provide access to only the specific entity |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditNamespaceAccessRules_Audit.json) |
|[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
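The updated Cognitive Services description above spells out what the audit checks: the account's public network access setting. As a rough, generic sketch of the remediation it implies (the property path is an assumption based on that description; the service-specific CLI group may expose a dedicated flag):

```bash
# Sketch: disable public network access on an existing Cognitive Services account via the
# generic resource update command. Replace the placeholders with real names.
az resource update \
  --resource-group "<resource-group>" \
  --name "<cognitive-services-account>" \
  --resource-type "Microsoft.CognitiveServices/accounts" \
  --set properties.publicNetworkAccess=Disabled
```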
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5
description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |

### Update Vulnerabilities to Be Scanned
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Develop and document a DDoS response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb7306e73-0494-83a2-31f5-280e934a8f70) |CMA_0147 - Develop and document a DDoS response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0147.json) | |[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
initiative definition.
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
|[Azure Synapse workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) | |[Azure Web PubSub Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb907f70-7514-460d-92b3-a5ae93b4f917) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Web PubSub Service, you can reduce data leakage risks. Learn more about private links at: [https://aka.ms/awps/privatelink](https://aka.ms/awps/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Web%20PubSub/WebPubSub_PrivateEndpointEnabled_Audit_v2.json) |
-|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
+|[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
|[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) |
initiative definition.
||||| |[Perform information input validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8b1f29eb-1b22-4217-5337-9207cb55231e) |CMA_C1723 - Perform information input validation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1723.json) |
-### Error Handling
+### Information Input Validation
-**ID**: NIST SP 800-53 Rev. 5 SI-11
+**ID**: NIST SP 800-53 Rev. 5 SI-10
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance Rbi_Itf_Nbfc_V2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi_itf_nbfc_v2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### IT Governance-1.1
initiative definition.
|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |

### Digital Signatures-3.8
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
initiative definition.
||||| |[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) | |[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) | |[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) | |[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
initiative definition.
||||| |[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) | |[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
initiative definition.
||||| |[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) | |[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
initiative definition.
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | |[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
## Control Measures on Cybersecurity
initiative definition.
|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | |[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | |[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/04/2022 Last updated : 01/05/2023
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
+|[Audit diagnostic setting for selected resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types. Be sure to select only resource types which support diagnostics settings. |AuditIfNotExists |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) | |[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
This built-in initiative is deployed as part of the
|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | |[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
-|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
### Protective Monitoring
This built-in initiative is deployed as part of the
||||| |[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | |[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
-|[Azure DDoS Protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS Protection should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
+|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | ## Secure user management
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-dotnet.md
Title: "Quickstart: Your first .NET Core query" description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET Core and run your first query. Previously updated : 07/09/2021 Last updated : 01/06/2023 + # Quickstart: Run your first Resource Graph query using .NET Core
required packages.
dotnet new console --name "argQuery" ```
-1. Change directories into the new project folder and install the required packages for Azure
- Resource Graph:
+1. Change directories into the new project folder and install the required packages for Azure Resource Graph:
```dotnetcli # Add the Resource Graph package for .NET Core
- dotnet add package Microsoft.Azure.Management.ResourceGraph --version 2.0.0
+ dotnet add package Azure.ResourceManager.ResourceGraph --version 1.0.0
# Add the Azure app auth package for .NET Core dotnet add package Microsoft.Azure.Services.AppAuthentication --version 1.5.0
required packages.
using System.Threading.Tasks; using Microsoft.IdentityModel.Clients.ActiveDirectory; using Microsoft.Rest;
- using Microsoft.Azure.Management.ResourceGraph;
- using Microsoft.Azure.Management.ResourceGraph.Models;
+ using Azure.ResourceManager.ResourceGraph;
+ using Azure.ResourceManager.ResourceGraph.Models;
namespace argQuery {
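If you want to sanity-check a Resource Graph query before wiring it into the .NET Core sample, the Azure CLI offers an equivalent path through the *resource-graph* extension. A minimal sketch (the Kusto query text is only an illustration):

```azurecli
# One-time: add the Resource Graph extension to the Azure CLI
az extension add --name resource-graph

# Run a simple query that lists a few resources and their types
az graph query -q "Resources | project name, type | limit 5"
```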
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
description: Use the JDBC driver from a Java application to submit Apache Hive q
Previously updated : 06/08/2022 Last updated : 01/06/2023 # Query Apache Hive through the JDBC driver in HDInsight
SQuirreL SQL is a JDBC client that can be used to remotely run Hive queries with
2. In the following script, replace `sshuser` with the SSH user account name for the cluster. Replace `CLUSTERNAME` with the HDInsight cluster name. From a command line, change your work directory to the one created in the prior step, and then enter the following command to copy files from an HDInsight cluster: ```cmd
- scp sshuser@CLUSTERNAME-ssh.azurehdinsight.net:/usr/hdp/current/hadoop-client/{hadoop-auth.jar,hadoop-common.jar,lib/log4j-*.jar,lib/slf4j-*.jar,lib/curator-*.jar} .
+ scp sshuser@CLUSTERNAME-ssh.azurehdinsight.net:/usr/hdp/current/hadoop-client/{hadoop-auth.jar,hadoop-common.jar,lib/reload4j-*.jar,lib/slf4j-*.jar,lib/curator-*.jar} .
scp sshuser@CLUSTERNAME-ssh.azurehdinsight.net:/usr/hdp/current/hive-client/lib/{commons-codec*.jar,commons-logging-*.jar,hive-*-*.jar,httpclient-*.jar,httpcore-*.jar,libfb*.jar,libthrift-*.jar} . ```
hdinsight Hdinsight Apps Publish Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-publish-applications.md
description: Learn how to create an HDInsight application, and then publish it i
Previously updated : 11/17/2022 Last updated : 01/04/2023 # Publish an HDInsight application in the Azure Marketplace
Two steps are involved in publishing applications in the Marketplace. First, def
"version": "0.0.1-preview", "clusterFilters": { "types": ["Hadoop", "HBase", "Spark"],
- "versions": ["3.6"]
+ "versions": ["4.0"]
} } ```
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
For workload specific versions, see [here.](/azure/hdinsight/hdinsight-40-compon
* **Log Analytics** - Customers can enable classic monitoring to get the latest OMS version 14.19. To remove old versions, disable and enable classic monitoring. * **Ambari** user auto UI logout due to inactivity. For more information, see [here](/azure/hdinsight/ambari-web-ui-auto-logout)
-* **Spark** - A new and optimized version of Spark 3.1 is included in this release which is twice as fast as before.
+* **Spark** - A new and optimized version of Spark 3.1.3 is included in this release. We tested Apache Spark 3.1.2 (previous version) and Apache Spark 3.1.3 (current version) using the TPC-DS benchmark. The test was carried out using the E8 V3 SKU for Apache Spark on a 1-TB workload. Apache Spark 3.1.3 (current version) outperformed Apache Spark 3.1.2 (previous version) by over 40% in total query runtime for TPC-DS queries using the same hardware specs. The Microsoft Spark team brought optimizations available in Azure Synapse to Azure HDInsight. For more information, see [Speed up your data workloads with performance updates to Apache Spark 3.1.2 in Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/speed-up-your-data-workloads-with-performance-updates-to-apache/ba-p/2769467)
![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png)
For more information on how to check Ubuntu version of cluster, see [here](https
|[HIVE-26127](https://issues.apache.org/jira/browse/HIVE-26127)| INSERT OVERWRITE error - File Not Found| |[HIVE-24957](https://issues.apache.org/jira/browse/HIVE-24957)| Wrong results when subquery has COALESCE in correlation predicate| |[HIVE-24999](https://issues.apache.org/jira/browse/HIVE-24999)| HiveSubQueryRemoveRule generates invalid plan for IN subquery with multiple correlations|
-|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there's direct insert, the attempt ID has to be checked when reading the manifest files|
+|[HIVE-24322](https://issues.apache.org/jira/browse/HIVE-24322)| If there's direct insert, the attempt ID has to be checked when reading the manifest files|
|[HIVE-23363](https://issues.apache.org/jira/browse/HIVE-23363)| Upgrade DataNucleus dependency to 5.2 | |[HIVE-26412](https://issues.apache.org/jira/browse/HIVE-26412)| Create interface to fetch available slots and add the default| |[HIVE-26173](https://issues.apache.org/jira/browse/HIVE-26173)| Upgrade derby to 10.14.2.0|
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 10/25/2022 Last updated : 01/05/2023 # Create an HDInsight cluster that uses Availability Zones (Preview)
When the HDInsight cluster is ready, you can check the location to see which ava
``` ## Scale up the cluster
-You can scale up an HDInsight cluster with more worker nodes. The newly added worker nodes will be placed in the same Availability zone of this cluster.
-
-**Limitations**:
+You can scale up an HDInsight cluster with more worker nodes. The newly added worker nodes will be placed in the same availability zone as the cluster.
## Best practices
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
This article describes our open-source projects on GitHub that provide source co
### Convert imaging study data to hierarchical parquet files
-* [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md): After you provision a DICOM service, FHIR service and synchronizing imaging study for a given patient via DICOM cast, you can use FHIR to Synapse Sync Agent to perform Analytics and Machine Learning on imaging study data by moving FHIR data to Azure Data Lake in near real time and making it available to a Synapse workspace.
+* [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md): After you provision a DICOM service and a FHIR service, and synchronize imaging study data for a given patient via DICOM cast, you can use the FHIR to Synapse Sync Agent to perform analytics and machine learning on imaging study data by moving FHIR data to Azure Data Lake in near real time and making it available to a Synapse workspace.
## Next steps
For more information about DICOM cast, see
>[!div class="nextstepaction"] >[DICOM cast overview](dicom-cast-overview.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Deploy New Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md
+
+ Title: Deploy the MedTech service using an Azure Resource Manager template - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template.
++++ Last updated : 1/5/2023+++
+# Quickstart: Deploy the MedTech service using an Azure Resource Manager template
+
+To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
+
+In this quickstart, you'll learn how to:
+
+> [!div class="checklist"]
+> - Open an ARM template in the Azure portal.
+> - Configure the ARM template for your deployment.
+> - Deploy the ARM template.
+
+> [!TIP]
+> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md)
+
+## Prerequisites
+
+To begin your deployment and complete the quickstart, you must have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
+
+- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
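If either resource provider isn't yet registered on your subscription, one way to register it is from the Azure CLI. A minimal sketch, assuming you're signed in to the target subscription:

```azurecli
# Register the resource providers used by this quickstart (safe to re-run)
az provider register --namespace Microsoft.HealthcareApis
az provider register --namespace Microsoft.EventHub

# Confirm the registration state
az provider show --namespace Microsoft.HealthcareApis --query registrationState --output tsv
```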
+When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button.
+
+## Review the ARM template - Optional
+
+The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
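If you'd rather deploy the reviewed template from the command line than use the **Deploy to Azure** button in the next section, the Azure CLI can deploy it straight from its raw GitHub URL. A minimal sketch, assuming the raw file path below matches the repository layout linked above, that the resource group already exists, and that the template exposes a `basename` parameter (check the *azuredeploy.json* parameters before running):

```azurecli
# Deploy the quickstart ARM template into an existing resource group
az deployment group create \
  --resource-group MyResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub/azuredeploy.json" \
  --parameters basename=demo
```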
+
+## Use the Deploy to Azure button
+
+To begin deployment in the Azure portal, select the **Deploy to Azure** button:
+
 [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json)
+
+## Configure the deployment
+
+1. In the Azure portal, on the Basics tab of the Azure Quickstart Template, select or enter the following information for your deployment:
+
+ - **Subscription** - The Azure subscription to use for the deployment.
+
+ - **Resource group** - An existing resource group, or you can create a new resource group.
+
+ - **Region** - The Azure region of the resource group that's used for the deployment. Region auto-fills by using the resource group region.
+
+ - **Basename** - A value that's appended to the name of the Azure resources and services that are deployed.
+
 - **Location** - Use the drop-down list to select a supported Azure region for Azure Health Data Services (the value can be the same as, or different from, your resource group's region).
+
+ - **Device Mapping** - Don't change the default values for this quickstart.
+
+ - **Destination Mapping** - Don't change the default values for this quickstart.
+
+ :::image type="content" source="media\deploy-new-arm\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-new-arm\iot-deploy-quickstart-options.png":::
+
+2. To validate your configuration, select **Review + create**.
+
+ :::image type="content" source="media\deploy-new-arm\iot-review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal.":::
+
+3. In **Review + create**, check the template validation status. If validation is successful, the template displays **Validation Passed**. If validation fails, fix the detail that's indicated in the error message, and then select **Review + create** again.
+
+ :::image type="content" source="media\deploy-new-arm\iot-validation-completed.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message.":::
+
+4. After a successful validation, to begin the deployment, select **Create**.
+
+ :::image type="content" source="media\deploy-new-arm\iot-create-button.png" alt-text="Screenshot that shows the highlighted Create button.":::
+
+5. In a few minutes, the Azure portal displays the message that your deployment is completed.
+
+ :::image type="content" source="media\deploy-new-arm\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete.":::
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
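If you do end up with two readers on the *devicedata* event hub, a dedicated consumer group can be added with the Azure CLI. A minimal sketch; the namespace name and consumer group name are placeholders for the Event Hubs namespace that the template created and the second reader you're adding:

```azurecli
# Create a separate consumer group so a second reader keeps its own offsets
az eventhubs eventhub consumer-group create \
  --resource-group MyResourceGroup \
  --namespace-name {YourEventHubsNamespace} \
  --eventhub-name devicedata \
  --name storage-writer
```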
+## Review deployed resources and access permissions
+
+When the deployment completes, the following resources and access roles are created:
+
+- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.
+
+ - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+
+- A Health Data Services workspace.
+
+- A Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) service.
+
+- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+
 - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+
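The template assigns these roles for you. If you ever need to re-create one of them, for example after rebuilding the event hub, a role assignment can also be granted from the Azure CLI. A minimal sketch; the principal ID and scope are placeholders for your MedTech service's system-assigned managed identity and the device message event hub's resource ID:

```azurecli
# Grant the MedTech service identity permission to read from the device message event hub
az role assignment create \
  --assignee {medtech-managed-identity-object-id} \
  --role "Azure Event Hubs Data Receiver" \
  --scope {device-message-event-hub-resource-id}
```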
+> [!IMPORTANT]
+> In this quickstart, the ARM template configures the MedTech service to operate in Create mode. A patient resource and a device resource are created for each device that sends data to your FHIR service.
+>
+> To learn more about the MedTech service resolution types Create and Lookup, see [Destination properties](deploy-new-config.md#destination-properties).
+
+## Post-deployment mappings
+
+After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings.
+
+ - To learn about device mappings, see [How to configure device mappings](how-to-configure-device-mappings.md).
+
+ - To learn about FHIR destination mappings, see [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md).
+
+## Next steps
+
+In this quickstart, you learned how to deploy an instance of the MedTech service in the Azure portal using an ARM template with a **Deploy to Azure** button.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md
Previously updated : 12/27/2022 Last updated : 1/5/2023
For example: `az group delete --resource-group BicepTestDeployment`
In this quickstart, you learned about how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.
-To learn about other methods of deploying the MedTech service, see
+To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-button.md)
-
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service manually using the Azure portal](deploy-new-manual.md)
- FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md
Title: Choose a deployment method for the MedTech service - Azure Health Data Services
-description: In this article, you'll learn how to choose a method to deploy the MedTech service.
+description: In this article, you'll learn about the different methods for deploying the MedTech service.
Previously updated : 12/27/2022 Last updated : 1/5/2023
The MedTech service provides multiple methods for deployment into Azure. Each de
In this quickstart, you'll learn about these deployment methods: > [!div class="checklist"]
-> - Azure Resource Manager template (ARM template) with the **Deploy to Azure** button.
-> - Azure PowerShell or the Azure CLI.
-> - Azure portal manual deployment.
+> - Azure Resource Manager template (ARM template) using the **Deploy to Azure** button.
+> - ARM template using Azure PowerShell or the Azure CLI.
+> - Azure portal manually.
-## Azure Resource Manager template with the Deploy to Azure button
+## ARM template using the Deploy to Azure button
Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment and most configuration steps, and it uses the Azure portal.
-To learn more about using an ARM template and the **Deploy to Azure button**, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-button.md).
+To learn more about using an ARM template with the **Deploy to Azure button**, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-arm.md).
-## Azure PowerShell or the Azure CLI
+## ARM template using Azure PowerShell or the Azure CLI
-Using Azure PowerShell or the Azure CLI to deploy an ARM template is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments.
+Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments.
To learn more about using an ARM template with Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md).
-## Azure portal manual deployment
+## Azure portal manually
Deploying manually in the Azure portal lets you see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
healthcare-apis Deploy New Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-config.md
Previously updated : 12/15/2022 Last updated : 1/5/2023
Follow these six steps to fill in the Basics tab configuration:
The Basics tab should now look like this after you've filled it out:
- :::image type="content" source="media\iot-deploy-manual-in-portal\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\iot-deploy-manual-in-portal\select-device-mapping-button.png":::
+ :::image type="content" source="media\deploy-new-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-new-config\select-device-mapping-button.png":::
You're now ready to select the Device mapping tab and begin setting up the device mappings for your MedTech service.
To begin the validation process of your MedTech service deployment, select the *
Your validation screen should look something like this:
- :::image type="content" source="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\iot-deploy-manual-in-portal\validate-and-review-medtech-service.png":::
+ :::image type="content" source="media\deploy-new-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-new-config\validate-and-review-medtech-service.png":::
If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured. Go back and try again.
healthcare-apis Deploy New Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-deploy.md
Previously updated : 12/15/2022 Last updated : 1/5/2023
When you're satisfied with your configuration and it has been successfully valid
Your screen should look something like this:
- :::image type="content" source="media\iot-deploy-manual-in-portal\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\iot-deploy-manual-in-portal\created-medtech-service.png":::
+ :::image type="content" source="media\deploy-new-deploy\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-new-deploy\created-medtech-service.png":::
## Manual post-deployment requirements
Follow these steps to grant access to the device message event hub:
13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this:
- :::image type="content" source="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\iot-deploy-manual-in-portal\validate-medtech-service-managed-identity-added-to-event-hub.png":::
+ :::image type="content" source="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png":::
For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
Now that you have granted access to the device message event hub and the FHIR se
In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service.
-To learn about other methods of deploying the MedTech service, see
+To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service with an Azure Resource Manager template](deploy-new-button.md)
-
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md)
- FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md
Title: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or Azure CLI - Azure Health Data Services
-description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or Azure CLI
+ Title: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI
Previously updated : 12/27/2022 Last updated : 1/5/2023
For example: `az group delete --resource-group ArmTestDeployment`
In this quickstart, you learned how to use Azure PowerShell or Azure CLI to deploy an instance of the MedTech service using an ARM template.
-To learn about the different deployment methods for the MedTech service, see
+To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-iot-connector-in-azure.md)
-
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service using an Azure Resource Manager template](deploy-new-button.md)
-
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service manually using the Azure portal](deploy-new-manual.md)
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
To learn how to get an Azure AD access token and view FHIR resources in your FHI
In this tutorial, you deployed an ARM template in the Azure portal, connected to your IoT hub, created a device, sent a test message, and reviewed your MedTech service metrics.
-To learn about the different deployment methods for the MedTech service, see
+To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service using an ARM template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md)
-
-> [!div class="nextstepaction"]
-> [Deploy the MedTech service manually using Azure portal](deploy-new-manual.md)
- FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Previously updated : 12/15/2022 Last updated : 1/5/2023
To learn more about the MedTech service open-source projects, see [Open-source p
In this article, you learned about the MedTech service frequently asked questions (FAQs)
-To learn about the different deployment methods for the MedTech service, see
+To learn about methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Title: Get started with the MedTech service in Azure Health Data Services
-description: This document describes how to get you started with the MedTech service in Azure Health Data Services.
+description: This article describes how to get started with the MedTech service in Azure Health Data Services.
Previously updated : 12/27/2022 Last updated : 1/5/2023
You can verify that the data is correctly persisted into the FHIR service by usi
This article only described the basic steps needed to get started using the MedTech service.
-To learn about different deployment methods for the MedTech service, see
+To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Previously updated : 12/15/2022 Last updated : 1/5/2023
Deploy a DICOM service to bring medical imaging data into the cloud from any DIC
## MedTech service
-The MedTech service enables you to ingest high-frequency IoT device data into the FHIR Service in a scalable, secure, and compliant manner. For more information, see [the MedTech service documentation page]see [Overview of MedTech](../healthcare-apis/iot/overview.md).
+The MedTech service enables you to ingest high-frequency IoT device data into the FHIR service in a scalable, secure, and compliant manner. For more information, see [Overview of the MedTech service](../healthcare-apis/iot/overview.md).
## Workspace configuration settings
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
The tutorial uses a predefined Postman collection that includes some scripts to
## Import the Postman collection
-To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link** and paste in the following URL: <!-- TODO: Add link here -->. Select **Continue**.
+To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link** and paste in the following URL: `https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/postman-collection/IoT%20Central.postman_collection.json`. Select **Continue**.
Your workspace now contains the **IoT Central REST tutorial** collection. This collection includes all the APIs you use in the tutorial.
iot-develop Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md
Before you create a model for your device, you need to understand the existing c
- The read-only and writable properties the device synchronizes with your service. - The commands invoked from the service that the device responds to.
-For example, review the following device code snippets that implement various device capabilities. These examples are based on the sample in [PnPMQTTWin32-Before](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32-Before).
+For example, review the following device code snippets that implement various device capabilities.
The following snippet shows the device sending temperature telemetry:
iot-develop Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/libraries-sdks.md
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
||||||| | .NET - IoT Hub service | [NuGet 1.38.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/solutions/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) | | Java - IoT Hub service | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client/1.26.0) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | N/A | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) |
-| Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | N/A | [Reference](/javascript/api/azure-iothub/) |
+| Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | N/A | [Reference](/javascript/api/azure-iothub/) |
| Python - IoT Hub service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | N/A | [Reference](/python/api/azure-iot-hub/) | ## Next steps
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
ms.devlang: c Previously updated : 10/21/2022 Last updated : 01/04/2023
-# Connect an MXCHIP AZ3166 devkit to IoT Hub
+# Quickstart: Connect an MXCHIP AZ3166 devkit to IoT Hub
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
You'll complete the following tasks:
## Prerequisites
-* A PC running Windows 10
-* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* A PC running Windows 10 or Windows 11
* [Git](https://git-scm.com/downloads) for cloning the repository
+* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* Azure CLI. You have two options for running Azure CLI commands in this quickstart: * Use the Azure Cloud Shell, an interactive shell that runs CLI commands in your browser. This option is recommended because you don't need to install anything. If you're using Cloud Shell for the first time, sign in to the [Azure portal](https://portal.azure.com). Follow the steps in [Cloud Shell quickstart](../cloud-shell/quickstart.md) to **Start Cloud Shell** and **Select the Bash environment**. * Optionally, run Azure CLI on your local machine. If Azure CLI is already installed, run `az upgrade` to upgrade the CLI and extensions to the current version. To install Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
To create an IoT hub:
1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
- *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+ *YourIotHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
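For reference, the create command this step describes looks like the following sketch, which mirrors the equivalent step in the STM DevKit quickstart later in this update (replace the placeholder with your own hub name):

```azurecli
az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
```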
To create an IoT hub:
### Configure IoT Explorer
-In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you just created and to read plug and play models from the public model repository.
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to your IoT hub and to read plug and play models from the public model repository.
To add a connection to your IoT hub:
To connect the MXCHIP DevKit to Azure, you'll modify a configuration file for Wi
|Constant name|Value| |-|--|
- |`IOT_HUB_HOSTNAME` |{*Your Iot hub hostName value*}|
- |`IOT_HUB_DEVICE_ID` |{*Your Device ID value*}|
- |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+ | `IOT_HUB_HOSTNAME` | {*Your host name value*} |
+ | `IOT_HUB_DEVICE_ID` | {*Your Device ID value*} |
+ | `IOT_DEVICE_SAS_KEY` | {*Your Primary key value*} |
1. Save and close the file.
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread-
+
+
Initializing WiFi
- MAC address: C8:93:46:8A:4C:43
- Connecting to SSID 'iot'
- SUCCESS: WiFi connected to iot
-
+ MAC address: ******************
+ SUCCESS: WiFi initialized
+
+ Connecting WiFi
+ Connecting to SSID 'iot'
+ Attempt 1...
+ SUCCESS: WiFi connected
+
Initializing DHCP
- IP address: 192.168.0.18
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.49
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized-
+
Initializing DNS client
- DNS address: 75.75.75.75
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized-
- Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 157.245.166.169
- SNTP time update: Jun 8, 2021 18:16:50.807 UTC
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 4, 2023 22:57:32.658 UTC
SUCCESS: SNTP initialized-
+
Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgmxchip;1
- Connected to IoT Hub
- SUCCESS: Azure IoT Hub client initialized
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgmxchip;2
+ SUCCESS: Connected to IoT Hub
+
+ Receive properties: {"desired":{"$version":1},"reported":{"deviceInformation":{"__t":"c","manufacturer":"MXCHIP","model":"AZ3166","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M4","processorManufacturer":"STMicroelectronics","totalStorage":1024,"totalMemory":128},"ledState":false,"telemetryInterval":{"ac":200,"av":1,"value":10},"$version":4}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"MXCHIP","model":"AZ3166","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M4","processorManufacturer":"STMicroelectronics","totalStorage":1024,"totalMemory":128}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
+
+ Starting Main loop
+ Telemetry message sent: {"humidity":31.01,"temperature":25.62,"pressure":927.3}.
+ Telemetry message sent: {"magnetometerX":177,"magnetometerY":-36,"magnetometerZ":-346.5}.
+ Telemetry message sent: {"accelerometerX":-22.5,"accelerometerY":0.54,"accelerometerZ":1049.01}.
+ Telemetry message sent: {"gyroscopeX":0,"gyroscopeY":0,"gyroscopeZ":0}.
``` Keep Termite open to monitor device output in the following steps. ## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you'll use the Plug and Play capabilities that surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting the same action from the left side menu of your device pane in IoT Explorer. However, using plug and play often provides an enhanced experience. This is because IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you'll use the Plug and Play capabilities that surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. You can perform many actions without using plug and play by selecting the action from the left side menu of your device pane in IoT Explorer. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
| Tab | Type | Name | Description | ||||| | **Interface** | Interface | `MXCHIP Getting Started Guide` | Example model for the MXCHIP DevKit |
- | **Properties (read-only)** | Property | -- | The model currently doesn't have any read-only properties |
+ | **Properties (read-only)** | Property | `ledState` | The current state of the LED |
| **Properties (writable)** | Property | `telemetryInterval` | The interval that the device sends telemetry | | **Commands** | Command | `setLedState` | Turn the LED on or off |
- | **Telemetry** | Telemetry | `temperature` | The temperature in Celsius |
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. Currently, there aren't any read-only properties exposed by the device model.
1. Select the **Properties (writable)** tab. It displays the interval at which telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
To use Azure CLI to view device properties:
-1. Run the [az iot hub device-identity show](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-show) command.
+1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
```azurecli
- az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName}
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
``` 1. Inspect the properties for your device in the console output.
If you no longer need the Azure resources created in this quickstart, you can us
To delete a resource group by name:
-1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This removes the resource group, the IoT Hub, and the device registration you created.
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
```azurecli-interactive az group delete --name MyResourceGroup
To delete a resource group by name:
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the MXCHIP DevKit device. You also used the Azure CLI and/or IoT Explorer to create Azure resources, connect the MXCHIP DevKit securely to Azure, view telemetry, and send messages.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices and embedded devices to Azure IoT.
> [!div class="nextstepaction"]
-> [Connect an MXCHIP AZ3166 devkit to IoT Central](quickstart-devkit-mxchip-az3166.md)
-> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
+> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+ > [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
> [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Stm B L4s5i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md
+
+ Title: Connect an STMicroelectronics B-L4S5I-IOT01A to Azure IoT Hub quickstart
+description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L4S5I-IOT01A device to Azure IoT Hub and send telemetry.
+++
+ms.devlang: c
+ Last updated : 01/06/2022++
+# Quickstart: Connect an STMicroelectronics B-L4S5I-IOT01A Discovery kit to IoT Hub
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/B-L4S5I-IOT01A)
+
+In this quickstart, you use Azure RTOS to connect the STMicroelectronics [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4S5i-iot01a.html) Discovery kit (from now on, the STM DevKit) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming the STM DevKit in C
+* Build an image and flash it onto the STM DevKit
+* Use Azure CLI to create and manage an Azure IoT hub that the STM DevKit will securely connect to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Azure CLI. You have two options for running Azure CLI commands in this quickstart:
+ * Use the Azure Cloud Shell, an interactive shell that runs CLI commands in your browser. This option is recommended because you don't need to install anything. If you're using Cloud Shell for the first time, sign in to the [Azure portal](https://portal.azure.com). Follow the steps in [Cloud Shell quickstart](../cloud-shell/quickstart.md) to **Start Cloud Shell** and **Select the Bash environment**.
+ * Optionally, run Azure CLI on your local machine. If Azure CLI is already installed, run `az upgrade` to upgrade the CLI and extensions to the current version. To install Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+* [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases): Cross-platform utility to monitor and manage Azure IoT
+* Hardware
+
+ * The [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit)
+ * Wi-Fi 2.4 GHz
+ * USB 2.0 A male to Micro USB male cable
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
+
+## Create the cloud components
+
+### Create an IoT hub
+
+You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
+
+To create an IoT hub:
+
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+ - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
+ - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
+
+1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+
+ > [!NOTE]
+ > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
+
+ ```azurecli
+ az group create --name MyResourceGroup --location centralus
+ ```
+
+1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+
+ *YourIotHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+
+ The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+ ```azurecli
+ az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
+ ```
+
+1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
+
+ `{Your IoT hub name}.azure-devices.net`
+
+### Configure IoT Explorer
+
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
+
+To add a connection to your IoT hub:
+
+1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
+
+ ```azurecli
+ az iot hub connection-string show --hub-name {YourIoTHubName}
+ ```
+
+1. Copy the connection string without the surrounding quotation characters.
+1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
+1. Select **+ Add connection**.
+1. Paste the connection string into the **Connection string** box.
+1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
+
+If the connection succeeds, IoT Explorer switches to the **Devices** view.
+
+To add the public model repository:
+
+1. In IoT Explorer, select **Home** to return to the home view.
+1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
+1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
+
+1. Select **Save**.
+
+### Register a device
+
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
+
+To register a device:
+
+1. From the home view in IoT Explorer, select **IoT hubs**.
+1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
+1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
+1. Select **Create**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
+
+1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
+
+Before continuing to the next section, save each of the following values that you retrieved in earlier steps to a safe location. You use these values in the next section to configure your device.
+
+* `hostName`
+* `deviceId`
+* `primaryKey`
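+
+If you prefer the Azure CLI to IoT Explorer, a minimal alternative sketch for this step is to register the device and read back its primary key with the *azure-iot* extension. The device ID `mydevice` matches the example used above:
+
+```azurecli
+az iot hub device-identity create --device-id mydevice --hub-name {YourIoTHubName}
+az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName} --query authentication.symmetricKey.primaryKey --output tsv
+```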
+
+## Prepare the device
+
+To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi and Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ *getting-started\STMicroelectronics\B-L4S5I-IOT01A\app\azure_config.h*
+
+1. Comment out the following line near the top of the file as shown:
+
+ ```c
+ // #define ENABLE_DPS
+ ```
+
+1. Set the Wi-Fi constants to the following values from your local environment.
+
+ |Constant name|Value|
+ |-|--|
+ |`WIFI_SSID` |{*Your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Your Wi-Fi password*}|
+ |`WIFI_MODE` |{*One of the enumerated Wi-Fi mode values in the file*}|
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+    |`IOT_HUB_HOSTNAME` |{*Your IoT hub hostName value*}|
+ |`IOT_HUB_DEVICE_ID` |{*Your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+
+1. Save and close the file.
+
+### Build the image
+
+1. In your console or in File Explorer, run the batch file *rebuild.bat* at the following path to build the image:
+
+ *getting-started\STMicroelectronics\B-L4S5I-IOT01A\tools\rebuild.bat*
+
+2. After the build completes, confirm that the binary file was created in the following path:
+
+    *getting-started\STMicroelectronics\B-L4S5I-IOT01A\build\app\stm32l4s5_azure_iot.bin*
+
+### Flash the image
+
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
+
+    :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/stm-b-l4s5i.png" alt-text="Photo that shows key components on the STM DevKit board.":::
+
+1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
+
+ > [!NOTE]
+ > For detailed setup information about the STM DevKit, see the instructions on the packaging, or see [B-L4S5I-IOT01A Resources](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html#resource).
+
+1. In File Explorer, find the binary files that you created in the previous section.
+
+1. Copy the binary file named *stm32l4s5_azure_iot.bin*.
+
+1. In File Explorer, find the STM DevKit that's connected to your computer. The device appears as a drive on your system.
+
+1. Paste the binary file into the root folder of the STM DevKit. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, an LED toggles between red and green on the STM DevKit.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://www.st.com/en/development-tools/stsw-link009.html) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+    * **Port**: The port that your STM DevKit is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app.":::
+
+1. Select OK.
+1. Press the **Reset** button on the device. The button is black and is labeled on the device.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Starting Azure thread
+
+
+ Initializing WiFi
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: ******************
+ Firmware revision: C3.5.2.7.STM
+ SUCCESS: WiFi initialized
+
+ Connecting WiFi
+ Connecting to SSID '************'
+ Attempt 1...
+ SUCCESS: WiFi connected
+
+ Initializing DHCP
+ IP address: 192.168.0.50
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address 1: 192.168.0.1
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 6, 2023 20:10:23.522 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: ************.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;2
+ SUCCESS: Connected to IoT Hub
+ ```
+ > [!IMPORTANT]
+ > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the [Inventek ISM 43362 Wi-Fi module firmware update](https://www.st.com/resource/en/utilities/inventek_fw_updater.zip). Then press the **Reset** button on the device to recheck your connection, and continue with this quickstart.
++
+Keep Termite open to monitor device output in the following steps.
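+
+You can also confirm the connection from the cloud side. For example, assuming you registered the device as `mydevice`, the following query should report a `connectionState` of `Connected` shortly after the checkpoint messages appear:
+
+```azurecli
+az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName} --query connectionState --output tsv
+```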
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of STM DevKit default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ > [!NOTE]
+ > The name and description for the default component refer to the STM L4S5 board. The STM L4S5 plug and play device model is also used for the STM L475E board in this quickstart.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |||||
+ | **Interface** | Interface | `STM L4S5 Getting Started Guide` | Example model for the STM DevKit |
+    | **Properties (read-only)** | Property | `ledState` | Whether the LED is on or off |
+ | **Properties (writable)** | Property | `telemetryInterval` | The interval that the device sends telemetry |
+ | **Commands** | Command | `setLedState` | Turn the LED on or off |
+
+To view device properties using Azure IoT Explorer:
+
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
+1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent.
+1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on STM DevKit in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification. You can also observe the update in Termite.
+1. Set the telemetry interval back to 10.
+
+To use Azure CLI to view device properties:
+
+1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
+
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
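+
+You can also update the writable `telemetryInterval` property from the CLI instead of IoT Explorer. A sketch, assuming a recent version of the *azure-iot* extension that supports the `--desired` argument:
+
+```azurecli
+az iot hub device-twin update --device-id mydevice --hub-name {YourIoTHubName} --desired '{"telemetryInterval": 5}'
+```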
+
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+        "interface": "dtmi:azurertos:devkit:gsgstml4s5;2",
+ "component": "",
+ "payload": "{\"humidity\":41.21,\"temperature\":31.37,\"pressure\":1005.18}"
+ }
+ }
+ ```
+
+1. Select CTRL+C to end monitoring.
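+
+Optionally, to also see the system and application properties attached to each message, you can add the `--properties` and `--timeout` flags. For example:
+
+```azurecli
+az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName} --properties all --timeout 60
+```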
++
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **setLedState** command, set the **state** to **true**.
+1. Select **Send command**. You should see a notification in IoT Explorer, and the green LED light on the device should turn on.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling the setLedState method in IoT Explorer.":::
+
+1. Set the **state** to **false**, and then select **Send command**. The LED should turn off.
+1. Optionally, you can view the output in Termite to monitor the status of the methods.
+
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` turns on the LED, and setting it to `false` turns it off.
+
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+    The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. Check your device to confirm the LED state.
+
+1. View the Termite terminal to confirm the output messages:
+
+ ```output
+ Received command: setLedState
+ Payload: true
+ LED is turned ON
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=15{"ledState":true}
+ ```
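+
+Because the sample reports the LED state back to IoT Hub as a reported property (visible in the Termite output above), you can also confirm the state from the cloud side by querying the device twin. For example:
+
+```azurecli
+az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName} --query properties.reported.ledState
+```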
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
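+
+Deleting a resource group can take a few minutes. If you don't want to confirm interactively or wait for the operation to finish, you can optionally add the `--yes` and `--no-wait` flags:
+
+```azurecli-interactive
+az group delete --name MyResourceGroup --yes --no-wait
+```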
++
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub](quickstart-devkit-stm-b-l475e-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-edge About Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/about-iot-edge.md
Previously updated : 10/28/2019 Last updated : 01/18/2022
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
description: Learn about the supported systems and first-party development tools
Previously updated : 01/04/2019 Last updated : 11/28/2022
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-api-proxy-module.md
description: Learn how to customize the API proxy module for IoT Edge gateway hi
Previously updated : 11/10/2020 Last updated : 01/05/2023
monikerRange: ">=iotedge-2020-11"
# Configure the API proxy module for your gateway hierarchy scenario (Preview) This article walks through the configuration options for the API proxy module, so you can customize the module to support your gateway hierarchy requirements.
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
Title: Configure devices for network proxies - Azure IoT Edge | Microsoft Docs
description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server. Previously updated : 06/27/2022 Last updated : 07/26/2022
This step takes place once on the IoT Edge device during initial device setup.
2. In the config file, find the `[agent]` section, which contains all the configuration information for the edgeAgent module to use on startup. Check and make sure that the `[agent]`section is uncommented or add it if it is not included in the `config.toml`. The IoT Edge agent definition includes an `[agent.env]` subsection where you can add environment variables. -
-<!-- 1.3 -->
- 3. Add the **https_proxy** parameter to the environment variables section, and set your proxy URL as its value. ```toml
This step takes place once on the IoT Edge device during initial device setup.
type = "docker" [agent.config]
- image = "mcr.microsoft.com/azureiotedge-agent:1.3"
+ image = "mcr.microsoft.com/azureiotedge-agent:1.4"
[agent.env] # "RuntimeLogLevel" = "debug"
This step takes place once on the IoT Edge device during initial device setup.
```toml [agent.config]
- image = "mcr.microsoft.com/azureiotedge-agent:1.3"
+ image = "mcr.microsoft.com/azureiotedge-agent:1.4"
[agent.env] # "RuntimeLogLevel" = "debug"
This step takes place once on the IoT Edge device during initial device setup.
"UpstreamProtocol" = "AmqpWs" "https_proxy" = "<proxy URL>" ```
-
-
-<!-- >= 1.3 -->
5. Save the changes and close the editor. Apply your latest changes.
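
To apply the changes, one minimal sketch (assuming IoT Edge version 1.4 on Linux) is to re-apply the configuration so the IoT Edge runtime picks up the new `[agent.env]` values:

```bash
sudo iotedge config apply
```
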
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: Step by step adaptable manual instructions on how to create a hiera
Previously updated : 09/01/2022 Last updated : 01/05/2023
monikerRange: ">=iotedge-2020-11"
# Connect Azure IoT Edge devices together to create a hierarchy (nested edge) This article provides instructions for establishing a trusted connection between an IoT Edge gateway and a downstream IoT Edge device. This setup is also known as "nested edge".
You should already have IoT Edge installed on your device. If not, follow the st
trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem" ```
-01. Find or add the **Edge CA certificate** section in the config file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the parent IoT Edge device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
+01. Find or add the **Edge CA certificate** section in the config file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the full-chain certificate and key files on the parent IoT Edge device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
```toml [edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway.cert.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway-full-chain.cert.pem"
pk = "file:///var/aziot/secrets/iot-edge-device-ca-gateway.key.pem" ```
You should already have IoT Edge installed on your device. If not, follow the st
trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem" [edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway.cert.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-gateway-full-chain.cert.pem"
pk = "file:///var/aziot/secrets/iot-edge-device-ca-gateway.key.pem" ```
You should already have IoT Edge installed on your device. If not, follow the st
trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem" ```
-01. Find or add the **Edge CA certificate** section in the configuration file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the IoT Edge downstream device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
+01. Find or add the **Edge CA certificate** section in the configuration file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the full-chain certificate and key files on the IoT Edge downstream device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
```toml [edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream.cert.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream-full-chain.cert.pem"
pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem" ```
You should already have IoT Edge installed on your device. If not, follow the st
trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem" [edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream.cert.pem"
+ cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream-full-chain.cert.pem"
pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem" ```
You should already have IoT Edge installed on your device. If not, follow the st
01. Verify the TLS/SSL connection from the child to the parent by running the following `openssl` command on the downstream device. Replace `<parent hostname>` with the FQDN or IP address of the parent. ```bash
- echo | openssl s_client -connect <parent hostname>:8883 2> | openssl x509 -text
+    openssl s_client -connect <parent hostname>:8883 </dev/null 2>&1 >/dev/null
```
- The command should return the certificate chain similar to the following example.
+ The command should assert successful validation of the parent certificate chain similar to the following example:
```Output
- azureUser@child-vm:~$ echo | openssl s_client -connect 10.0.0.4:8883 2> | openssl x509 -text
-
- Certificate:
- Data:
- Version: 3 (0x2)
- Serial Number: 0 (0x0)
- Signature Algorithm: sha256WithRSAEncryption
- Issuer: CN = gateway.ca
- Validity
- Not Before: Apr 27 16:25:44 2022 GMT
- Not After : May 26 14:43:24 2022 GMT
- Subject: CN = 10.0.0.4
- Subject Public Key Info:
- Public Key Algorithm: rsaEncryption
- RSA Public-Key: (2048 bit)
- Modulus:
- 00:b2:a6:df:d9:91:43:4e:77:d8:2c:2a:f7:01:b1:
- ...
- 33:bd:c8:f0:de:07:36:2c:0d:06:9e:89:22:95:5e:
- 3b:43
- Exponent: 65537 (0x10001)
- X509v3 extensions:
- X509v3 Extended Key Usage:
- TLS Web Server Authentication
- X509v3 Subject Alternative Name:
- DNS:edgehub, IP Address:10.0.0.4
- Signature Algorithm: sha256WithRSAEncryption
- 76:d4:5b:4a:d5:c4:80:7d:32:bc:c0:a8:ce:4f:69:5d:4d:ee:
- ...
- ```
-
- The `Subject: CN = ` value should match the **hostname** parameter specified in the parent's `config.toml` configuration file.
-
- If the command times out, there may be blocked ports between the child and parent devices. Review the network configuration and settings for the devices.
+    azureUser@child-vm:~$ openssl s_client -connect <parent hostname>:8883 </dev/null 2>&1 >/dev/null
+
+ Can't use SSL_get_servername
+ depth=3 CN = Azure_IoT_Hub_CA_Cert_Test_Only
+ verify return:1
+ depth=2 CN = Azure_IoT_Hub_Intermediate_Cert_Test_Only
+ verify return:1
+ depth=1 CN = gateway.ca
+ verify return:1
+ depth=0 CN = <parent hostname>
+ verify return:1
+ DONE
+ ```
+
+ The "Can't use SSL_get_servername" message can be ignored.
+
+ The `depth=0 CN = ` value should match the **hostname** parameter specified in the parent's `config.toml` configuration file.
+
+ If the command times out, there may be blocked ports between the child and parent devices. Review the network configuration and settings for the devices.
+
+ > [!WARNING]
+ > A previous version of this document directed users to copy the `iot-edge-device-ca-gateway.cert.pem` certificate for use in the gateway `[edge_ca]` section. This was incorrect, and results in certificate validation errors from the downstream device. For example, the `openssl s_client ...` command above will produce:
+ >
+ > ```
+ > Can't use SSL_get_servername
+ > depth=1 CN = gateway.ca
+ > verify error:num=20:unable to get local issuer certificate
+ > verify return:1
+ > depth=0 CN = <parent hostname>
+ > verify return:1
+ > DONE
+ > ```
+ >
+ > The same issue will appear for TLS-enabled devices connecting to the downstream Edge device if `iot-edge-device-ca-downstream.cert.pem` is copied to the device instead of `iot-edge-device-ca-downstream-full-chain.cert.pem`.
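+
+To confirm that the file you reference in the `[edge_ca]` section really is the full-chain file, one option is to list every certificate bundled in the PEM file and check that the intermediate and root CAs appear. This is a sketch; adjust the path to match your own full-chain certificate:
+
+```bash
+openssl crl2pkcs7 -nocrl -certfile /var/aziot/certs/iot-edge-device-ca-gateway-full-chain.cert.pem | openssl pkcs7 -print_certs -noout
+```
+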
## Network isolate downstream devices
iot-edge How To Create Virtual Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-virtual-switch.md
Note that if you're using an Azure VM, the virtual switch can't be **External**.
1. Open PowerShell in an elevated session. You can do so by opening the **Start** pane on Windows and typing in "PowerShell". Right-click the **Windows PowerShell** app that shows up and select **Run as administrator**.
-2. Check the virtual switches on the Windows host and make sure you don't already have a virtual switch that can be used. You can do so by running the following [Get-VMSwitch](/powershell/module/hyper-v/get-vmswitch) command in PowerShell:
+1. Check the virtual switches on the Windows host and make sure you don't already have a virtual switch that can be used. You can do so by running the following [Get-VMSwitch](/powershell/module/hyper-v/get-vmswitch) command in PowerShell:
```powershell Get-VMSwitch
Note that if you're using an Azure VM, the virtual switch can't be **External**.
If a virtual switch named **Default Switch** is already created and you don't need a custom virtual switch, you should be able to install IoT Edge for Linux on Windows without following the rest of the steps in this guide.
-3. Create a new VM switch with a name of your choice and an **Internal** or **Private** switch type by running the following [New-VMSwitch](/powershell/module/hyper-v/new-vmswitch) command, replacing the placeholder values:
+1. Create a new VM switch with a name of your choice and an **Internal** or **Private** switch type by running the following [New-VMSwitch](/powershell/module/hyper-v/new-vmswitch) command, replacing the placeholder values:
```powershell New-VMSwitch -Name "{switchName}" -SwitchType {switchType} ```
-4. To get the IP address for the switch you created, you must first get its interface index. You can get this value by running the following [Get-NetAdapter](/powershell/module/netadapter/get-netadapter) command, replacing the placeholder value:
+1. To get the IP address for the switch you created, you must first get its interface index. You can get this value by running the following [Get-NetAdapter](/powershell/module/netadapter/get-netadapter) command, replacing the placeholder value:
```powershell (Get-NetAdapter -Name "{switchName}").ifIndex
Note that if you're using an Azure VM, the virtual switch can't be **External**.
:::image type="content" source="media/how-to-create-virtual-switch/get-netadapter-output.png" alt-text="Screenshot of the output from running the Get-NetAdapter command, highlighting the interface index value." lightbox="media/how-to-create-virtual-switch/get-netadapter-output.png"::: Take note of the interface index value, as you'll need to use it in future steps.
+
+6. The resulting virtual switch IP address will be different for each environment. The rest of the commands in this guide use IP addresses derived from the *172.20.X.Y* family. However, you can use your own address family and IP addresses.
-5. Using the interface index from the previous step, get the IP address of the created switch network adapter by running the following [Get-NetIPAddress](/powershell/module/nettcpip/get-netipaddress) command, replacing the placeholder value:
-
- ```powershell
- Get-NetIPAddress -AddressFamily IPv4 -InterfaceIndex {interfaceIndex}
- ```
-
- Running this command should output information similar to the following example:
-
- :::image type="content" source="media/how-to-create-virtual-switch/get-netipaddress-output.png" alt-text="Screenshot of the output from running the Get-NetIPAddress command, highlighting the IP address." lightbox="media/how-to-create-virtual-switch/get-netipaddress-output.png":::
-
- The resulting virtual switch IP address will be different for each environment. Take note of the IP address, as the rest of the commands in this guide will make use of more IP addresses that are derived from this outputted address.
-
-6. For the other IP addresses, you'll need to create variations where the last octet (the number separated by each dot in an IP address) is replaced by a different value. You'll create and use the following IP addresses:
+ You'll create and use the following IP addresses:
- | IP address | Template | Example |
- |-|--|--|
- | Virtual switch IP | xxx.xxx.xxx.yyy | 169.254.229.39 |
- | Gateway IP | xxx.xxx.xxx.1 | 169.254.229.1 |
- | NAT IP | xxx.xxx.xxx.0 | 169.254.229.0 |
- | Start IP | xxx.xxx.xxx.100 | 169.254.229.100 |
- | End IP | xxx.xxx.xxx.200 | 169.254.229.200 |
+ | IP address | Template | Example |
+ |-|--|--|
+ | Gateway IP | xxx.xxx.xxx.1 | 172.20.0.1 |
+ | NAT IP | xxx.xxx.xxx.0 | 172.20.0.0 |
+ | Start IP | xxx.xxx.xxx.100 | 172.20.0.100 |
+ | End IP | xxx.xxx.xxx.200 | 172.20.0.200 |
-7. Set the **gateway IP address** by replacing the last octet of your virtual switch IP with a new numerical value, for example 1. Run the following [New-NetIPAddress](/powershell/module/nettcpip/new-netipaddress) command to set the new gateway IP address, replacing the placeholder values:
+1. Set the **gateway IP address** by replacing the last octet of your virtual switch IP address family with a new numerical value. For example, replace the last octet with 1 to get the address 172.20.0.1. Run the following [New-NetIPAddress](/powershell/module/nettcpip/new-netipaddress) command to set the new gateway IP address, replacing the placeholder values:
```powershell New-NetIPAddress -IPAddress {gatewayIp} -PrefixLength 24 -InterfaceIndex {interfaceIndex}
Note that if you're using an Azure VM, the virtual switch can't be **External**.
:::image type="content" source="media/how-to-create-virtual-switch/new-netipaddress-output.png" alt-text="Screenshot of the output from running the New-NetIPAddress command." lightbox="media/how-to-create-virtual-switch/new-netipaddress-output.png":::
-8. Create a Network Address Translation (NAT) object that translates an internal network address to an external network. Use the same IPv4 family address from previous steps. Based on the table from step six, the **NAT IP address** corresponds to the original virtual switch IP address, except that the last octet is replaced with a new numerical value, for example 0. Run the following [New-NetNat](/powershell/module/netnat/new-netnat) command to set the NAT IP address, replacing the placeholder values:
+1. Create a Network Address Translation (NAT) object that translates an internal network address to an external network. Use the same IPv4 family address from previous steps. Based on the table from step six, the **NAT IP address** corresponds to the original IP address family, except that the last octet is replaced with a new numerical value, for example 0. Run the following [New-NetNat](/powershell/module/netnat/new-netnat) command to set the NAT IP address, replacing the placeholder values:
```powershell New-NetNat -Name "{switchName}" -InternalIPInterfaceAddressPrefix "{natIp}/24"
The switch is now created. Next, you'll set up the DNS.
Get-WindowsFeature -Name 'DHCP' ```
-2. If the DHCP server isn't already installed, do so by running the following command:
+1. If the DHCP server isn't already installed, do so by running the following command:
```powershell Install-WindowsFeature -Name 'DHCP' -IncludeManagementTools ```
-3. Add the DHCP Server to the default local security groups and restart the server.
+1. Add the DHCP Server to the default local security groups and restart the server.
```powershell netsh dhcp add securitygroups
The switch is now created. Next, you'll set up the DNS.
You'll receive the following warning messages while the DHCP server is starting up: `WARNING: Waiting for service 'DHCP Server (dhcpserver)' to start...`
-4. To configure the DHCP server range of IPs to be made available, you'll need to set an IP address as the **start IP** and an IP address as the **end IP**. This range is defined by the **StartRange** and the **EndRange** parameters in the [Add-DhcpServerv4Scope](/powershell/module/dhcpserver/add-dhcpserverv4scope) command. You'll also need to set the subnet mask when running this command, which will be 255.255.255.0. Based on the IP address templates and examples in the table from the previous section, setting the **StartRange** as 169.254.229.100 and the **EndRange** as 169.254.229.200 will make 100 IP addresses available. Run the following command, replacing the placeholders with your own values:
+1. To configure the DHCP server range of IPs to be made available, you'll need to set an IP address as the **start IP** and an IP address as the **end IP**. This range is defined by the **StartRange** and the **EndRange** parameters in the [Add-DhcpServerv4Scope](/powershell/module/dhcpserver/add-dhcpserverv4scope) command. You'll also need to set the subnet mask when running this command, which will be 255.255.255.0. Based on the IP address templates and examples in the table from the previous section, setting the **StartRange** as 172.20.0.100 and the **EndRange** as 172.20.0.200 will make 100 IP addresses available. Run the following command, replacing the placeholders with your own values:
```powershell Add-DhcpServerV4Scope -Name "AzureIoTEdgeScope" -StartRange {startIp} -EndRange {endIp} -SubnetMask 255.255.255.0 -State Active
The switch is now created. Next, you'll set up the DNS.
This command should produce no output.
-5. Assign the **NAT** and **gateway IP** addresses you created in the earlier section to the DHCP server, and restart the server to load the configuration. The first command should produce no output, but restarting the DHCP server should output the same warning messages that you received when you did so in the third step of this section.
+1. Assign the **NAT** and **gateway IP** addresses you created in the earlier section to the DHCP server, and restart the server to load the configuration. The first command should produce no output, but restarting the DHCP server should output the same warning messages that you received when you did so in the third step of this section.
```powershell Set-DhcpServerV4OptionValue -ScopeID {natIp} -Router {gatewayIp}
iot-edge How To Install Iot Edge Ubuntuvm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm-bicep.md
Previously updated : 08/29/2022 Last updated : 01/05/2023 # Run Azure IoT Edge on Ubuntu Virtual Machines by using Bicep The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The runtime can be deployed on devices as small as a Raspberry Pi or as large as an industrial server. Once a device is configured with the IoT Edge runtime, you can start deploying business logic to it from the cloud.
This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with
On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.1/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session. :::moniker-end :::moniker range=">=iotedge-2020-11"
-This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Bicep file](../azure-resource-manager/bicep/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.3) project repository.
+This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Bicep file](../azure-resource-manager/bicep/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) project repository.
-On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.3/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
+On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.4/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
:::moniker-end ## Deploy from Azure CLI
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
description: How to install and manage certificates on an Azure IoT Edge device
Previously updated : 11/03/2022 Last updated : 12/06/2022
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
keywords:
Previously updated : 06/03/2022 Last updated : 11/29/2022
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
description: Use Visual Studio Code to develop, build, and debug a module for Az
Previously updated : 10/18/2022 Last updated : 10/27/2022
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
In summary, *EdgeGateway* can verify and trust *ContosoIotHub's* identity becaus
## IoT Hub verifies IoT Edge device identity
-How does *ContosoIotHub* verify it's communicating with *EdgeGateway*? Verification is done by checking the certificate at the IoTHub application code level. This step happens together with the *TLS handshake* (IoT Hub doesn't support mutual TLS). Authentication of the client doesn't happen at the TLS level, only at the application layer. For simplicity, we'll skip some steps in the following diagram.
+How does *ContosoIotHub* verify it's communicating with *EdgeGateway*? Since [IoT Hub supports *mutual TLS* (mTLS)](../iot-hub/iot-hub-tls-support.md#mutual-tls-support), it checks *EdgeGateway*'s certificate during [client-authenticated TLS handshake](https://wikipedia.org/wiki/Transport_Layer_Security#Client-authenticated_TLS_handshake). For simplicity, we'll skip some steps in the following diagram.
:::image type="content" source="./media/iot-edge-certs/verify-edge-identity.svg" alt-text="Sequence diagram showing certificate exchange from IoT Edge device to IoT Hub with certificate thumbprint check verification on IoT Hub.":::
sequenceDiagram
ContosoIotHub->>EdgeGateway: Great, let's connect -->
-In this case, IoT Edge provides its **IoT Edge device identity certificate**. From *ContosoIotHub* perspective, it checks if the thumbprint of the provided certificate matches its record. When you provision an IoT Edge device in IoT Hub, you provide a thumbprint. The thumbprint is what IoT Hub uses to verify the certificate.
+In this case, *EdgeGateway* provides its **IoT Edge device identity certificate**. From *ContosoIotHub*'s perspective, it checks both that the thumbprint of the provided certificate matches its record and that *EdgeGateway* has the private key paired with the certificate it presented. When you provision an IoT Edge device in IoT Hub, you provide a thumbprint. The thumbprint is what IoT Hub uses to verify the certificate.
 > [!TIP] > IoT Hub requires two thumbprints when registering an IoT Edge device. For best practice, prepare two different device identity certificates with different expiration dates. This way, if one certificate expires, the other is still valid and gives you time to rotate the expired certificate. However, it's also possible to use only one certificate for registration by putting the same thumbprint for both fields.
To secure Edge CA in production:
* Put the EdgeCA private key in a trusted platform module (TPM), preferably in a fashion where the private key is ephemerally generated and never leaves the TPM. * Use a Public Key Infrastructure (PKI) to which Edge CA rolls up. This provides the ability to disable or refuse renewal of compromised certificates. The PKI can be managed by customer IT if they have the know how (lower cost) or through a commercial PKI provider.
-### Self-signed root CA complexity
+### Self-signed root CA specificity
The [*edgeHub* module](iot-edge-runtime.md#iot-edge-hub) is an important component that makes up IoT Edge by handling all incoming traffic. In this example, it uses a certificate issued by Edge CA, which is in turn issued by a self-signed root CA. Because the root CA isn't trusted by the OS, the only way *TempSensor* would trust it is to install the CA certificate onto the device. This is also known as the *trust bundle* scenario, where you need to distribute the root to clients that need to trust the chain. The trust bundle scenario can be troublesome because you need access the device and install the certificate. Installing the certificate requires planning. It can be done with scripts, added during manufacturing, or pre-installed in the OS image.
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Title: Supported operating systems, container engines - Azure IoT Edge for Linux
description: Learn which operating systems can run Azure IoT Edge for Linux on Windows Previously updated : 03/15/2022 Last updated : 06/23/2022
The following table lists the components included in each release. Each release
| Release | IoT Edge | CBL-Mariner | Defender for IoT | | - | -- | -- | - | | **1.1 LTS** | 1.1 | 2.0 | - |
-| **Continuous Release** | 1.3 | 2.0 | 3.12.3 |
+| **Continuous Release** | 1.3 | 2.0 | 3.12.3 |
| **1.4 LTS** | 1.4 | 2.0 | 3.12.3 |
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-composition.md
description: Learn how a deployment manifest declares which modules to deploy, h
Previously updated : 10/08/2020 Last updated : 01/05/2023
The $edgeAgent properties follow this structure:
"modulesContent": { "$edgeAgent": { "properties.desired": {
- "schemaVersion": "1.4",
+ "schemaVersion": "1.1",
"runtime": { "settings":{ "registryCredentials":{
The $edgeAgent properties follow this structure:
} ```
-The IoT Edge agent schema version 1.4 was released along with IoT Edge version 1.0.10, and enables module startup order. Schema version 1.4 is recommended for any IoT Edge deployment running version 1.0.10 or later.
+The IoT Edge agent schema version 1.1 was released along with IoT Edge version 1.0.10, and enables module startup order. Schema version 1.1 is recommended for any IoT Edge deployment running version 1.0.10 or later.
### Module configuration and management
For example:
"modulesContent": { "$edgeAgent": { "properties.desired": {
- "schemaVersion": "1.4",
+ "schemaVersion": "1.1",
"runtime": { ... }, "systemModules": { "edgeAgent": { ... },
Routes are declared in the **$edgeHub** desired properties with the following sy
"$edgeAgent": { ... }, "$edgeHub": { "properties.desired": {
- "schemaVersion": "1.4",
+ "schemaVersion": "1.1",
"routes": { "route1": "FROM <source> WHERE <condition> INTO <sink>", "route2": {
Routes are declared in the **$edgeHub** desired properties with the following sy
} ```
-The IoT Edge hub schema version 1.4 was released along with IoT Edge version 1.0.10, and enables route prioritization and time to live. Schema version 1.4 is recommended for any IoT Edge deployment running version 1.0.10 or later.
+The IoT Edge hub schema version 1.1 was released along with IoT Edge version 1.0.10, and enables route prioritization and time to live. Schema version 1.1 is recommended for any IoT Edge deployment running version 1.0.10 or later.
Every route needs a *source* where the messages come from and a *sink* where the messages go. The *condition* is an optional piece that you can use to filter messages.
Option 1:
"route1": "FROM <source> WHERE <condition> INTO <sink>", ```
-Option 2, introduced in IoT Edge version 1.0.10 with IoT Edge hub schema version 1.4:
+Option 2, introduced in IoT Edge version 1.0.10 with IoT Edge hub schema version 1.1:
```json "route2": {
The following example shows what a valid deployment manifest document may look l
"modulesContent": { "$edgeAgent": { "properties.desired": {
- "schemaVersion": "1.4",
+ "schemaVersion": "1.1",
"runtime": { "type": "docker", "settings": {
The following example shows what a valid deployment manifest document may look l
}, "$edgeHub": { "properties.desired": {
- "schemaVersion": "1.4",
+ "schemaVersion": "1.1",
"routes": { "sensorToFilter": { "route": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filtermodule/inputs/input1\")",
iot-edge Module Deployment Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-deployment-monitoring.md
description: Use automatic deployments in Azure IoT Edge to manage groups of dev
Previously updated : 10/18/2021 Last updated : 11/17/2022
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
Open the command prompt on your IoT Edge device again, or use the SSH connection
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
- ![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-201806.png)
+![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-201806.png)
:::moniker-end <!-- iotedge-2020-11 --> :::moniker range=">=iotedge-2020-11"
- ![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-202011.png)
+![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-1.4.png)
:::moniker-end View the messages being sent from the temperature sensor module:
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Title: IoT Edge supported platforms
description: Azure IoT Edge supported operating systems, runtimes, and container engines. Previously updated : 02/08/2022 Last updated : 07/26/2022
iot-edge Troubleshoot Iot Edge For Linux On Windows Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-iot-edge-for-linux-on-windows-common-errors.md
Title: Common issues and resolutions for Azure IoT Edge for Linux on Windows | M
description: Use this article to resolve common issues encountered when deploying an IoT Edge for Linux on Windows (EFLOW) solution Previously updated : 07/25/2022 Last updated : 07/26/2022
The following section addresses the common errors related to EFLOW networking an
> [!div class="mx-tdCol2BreakAll"] > | Error | Error Description | Solution | > | -- | -- | -- |
-> | Installation of virtual switch failed <br/> The virtual switch '$switchName' of type '$switchType' was not found | When creating the EFLOW VM, there's a check that the virtual switch provided exists and has the correct type. If using no parameter, the installation uses the default switch provided by the Windows client. | Check that the virtual switch being used is part of the Windows host OS. You can check the virtual switches using the PowerShell cmdlet `Get-VmSwitch`. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
-> | The virtual switch '$switchName' of type '$switchType' </br> is not supported on current host OS | When using Windows client SKUs, external/default switches are supported. However, when using Windows Server SKUs, external/internal switches are supported. | For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).
-> | Cannot set Static IP on ICS type virtual switch (Default Switch) | The _default switch_ is a virtual switch that's provided in the Windows client SKUs after installing Hyper-V. This switch already has a DHCP server for *IP4Address* assignation and for security reasons doesn't support a static IP. | If using the _default switch_, you can either use the `Get-EflowVmAddr` cmdlet or use the hostname of the EFLOW VM to get the VM *IP4Address*. If using the hostname, try using _Windows-hostname_-EFLOW.mshome.net. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
-> | $dnsServer is not a valid IP4 address | The *Set-EflowVmDnsServers* cmdlet expects a list of valid *IP4Addresses* | Verify you provided a valid list of addresses. You can check the Windows host OS DNS servers by using the PowerShell cmdlet `ipconfig /all` and then looking for the entry _DNS Servers_. For example, if you wanted to set two DNS servers with IPs 10.0.1.2 and 10.0.1.3, use the `Set-EflowVmDnsServers -dnsServers @("10.0.1.2", "10.0.1.3")` cmdlet. |
-> | Could not retrieve IP address for virtual machine <br/> Virtual machine name could not be retrieved, failed to get IP/MAC address <br/> Failed to acquire MAC address for virtual machine <br/> Failed to acquire IP address for virtual machine <br/> Unable to obtain host routing. Connectivity test to $computerName failed. <br/> wssdagent does not have the expected vnet resource provisioned. <br/> Missing EFLOW-VM guest interface for ($vnicName) | Caused by connectivity issues with the EFLOW virtual machine. The errors are generally related to an IP address change (if using static IP) or failure to assign an IP if using DHCP server. | Make sure to use the appropriate networking configuration. If there's a valid DHCP server, you can use DHCP assignation. If using static IP, make sure the IP configuration is correct (all three parameters: _ip4Address_, _ip4GatewayAddress_ and _ip4PrefixLength_) and the address isn't being used by another device in the network. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
-> | No adapters associated with the switch '$vnetName' are found. <br/> No adapters associated with the device ID '$adapterGuid' are found <br/> No adapters associated with the adapter name '$name' are found. <br/> Network '$vswitchName' does not exist | Caused by a network communication error between the Windows host OS and the EFLOW virtual machine. | Ensure you can reach the EFLOW VM and establish an SSH channel. Use the `Connect-EflowVm` PowerShell cmdlet to connect to the virtual machine. If connectivity fails, reboot the EFLOW VM and check again. |
-> | ip4Address & ip4PrefixLength are required for StaticIP! | During EFLOW VM deployment or when adding multiple NICs, if using static IP, the three static ip parameters are needed: _ip4Address_, _ip4GatewayAddress_, _ip4PrefixLength_. | For more information about `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
-> | Found multiple VMMS switches <br/> with name '$switchName' of type '$switchType' | There are two or more virtual switches with the same name and type. This environment conflicts with the EFLOW VM installation and lifecycle of the VM. | Use `Get-VmSwitch` PowerShell cmdlet to check the virtual switches available in the Windows host and make sure that each {name,type} is unique. |
+> | Installation of virtual switch failed <br/> The virtual switch '$switchName' of type '$switchType' was not found | When creating the EFLOW VM, there's a check that the virtual switch provided exists and has the correct type. If using no parameter, the installation uses the default switch provided by the Windows client. | Check that the virtual switch being used is part of the Windows host OS. You can check the virtual switches using the PowerShell cmdlet `Get-VmSwitch`. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | The virtual switch '$switchName' of type '$switchType' </br> is not supported on current host OS | When using Windows client SKUs, external/default switches are supported. However, when using Windows Server SKUs, external/internal switches are supported. | For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).
+> | Cannot set Static IP on ICS type virtual switch (Default Switch) | The _default switch_ is a virtual switch that's provided in the Windows client SKUs after installing Hyper-V. This switch already has a DHCP server for *IP4Address* assignation and for security reasons doesn't support a static IP. | If using the _default switch_, you can either use the `Get-EflowVmAddr` cmdlet or use the hostname of the EFLOW VM to get the VM *IP4Address*. If using the hostname, try using _Windows-hostname_-EFLOW.mshome.net. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | $dnsServer is not a valid IP4 address | The *Set-EflowVmDnsServers* cmdlet expects a list of valid *IP4Addresses* | Verify you provided a valid list of addresses. You can check the Windows host OS DNS servers by using the PowerShell cmdlet `ipconfig /all` and then looking for the entry _DNS Servers_. For example, if you wanted to set two DNS servers with IPs 10.0.1.2 and 10.0.1.3, use the `Set-EflowVmDnsServers -dnsServers @("10.0.1.2", "10.0.1.3")` cmdlet. |
+> | Could not retrieve IP address for virtual machine <br/> Virtual machine name could not be retrieved, failed to get IP/MAC address <br/> Failed to acquire MAC address for virtual machine <br/> Failed to acquire IP address for virtual machine <br/> Unable to obtain host routing. Connectivity test to $computerName failed. <br/> wssdagent does not have the expected vnet resource provisioned. <br/> Missing EFLOW-VM guest interface for ($vnicName) | Caused by connectivity issues with the EFLOW virtual machine. The errors are generally related to an IP address change (if using static IP) or failure to assign an IP if using DHCP server. | Make sure to use the appropriate networking configuration. If there's a valid DHCP server, you can use DHCP assignation. If using static IP, make sure the IP configuration is correct (all three parameters: _ip4Address_, _ip4GatewayAddress_ and _ip4PrefixLength_) and the address isn't being used by another device in the network. For more information about networking configurations, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). |
+> | No adapters associated with the switch '$vnetName' are found. <br/> No adapters associated with the device ID '$adapterGuid' are found <br/> No adapters associated with the adapter name '$name' are found. <br/> Network '$vswitchName' does not exist | Caused by a network communication error between the Windows host OS and the EFLOW virtual machine. | Ensure you can reach the EFLOW VM and establish an SSH channel. Use the `Connect-EflowVm` PowerShell cmdlet to connect to the virtual machine. If connectivity fails, reboot the EFLOW VM and check again. |
+> | ip4Address & ip4PrefixLength are required for StaticIP! | During EFLOW VM deployment or when adding multiple NICs, if using static IP, the three static IP parameters are needed: _ip4Address_, _ip4GatewayAddress_, _ip4PrefixLength_. | For more information about the `Deploy-EFlow` PowerShell cmdlet, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md). |
+> | Found multiple VMMS switches <br/> with name '$switchName' of type '$switchType' | There are two or more virtual switches with the same name and type. This environment conflicts with the EFLOW VM installation and lifecycle of the VM. | Use `Get-VmSwitch` PowerShell cmdlet to check the virtual switches available in the Windows host and make sure that each {name,type} is unique. |
## Next steps
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
Title: Tutorial - Configure Enrollment over Secure Transport Server (EST) for Az
description: This tutorial shows you how to set up an Enrollment over Secure Transport (EST) server for Azure IoT Edge. Previously updated : 07/06/2022 Last updated : 01/05/2023
monikerRange: ">=iotedge-2020-11"
# Tutorial: Configure Enrollment over Secure Transport Server for Azure IoT Edge With Azure IoT Edge, you can configure your devices to use an Enrollment over Secure Transport (EST) server to manage x509 certificates.
On the IoT Edge device, update the IoT Edge configuration file to use device cer
[provisioning.attestation] method = "x509" registration_id = "myiotedgedevice"
- identity_cert = { method = "est", common_name = "myiotedgedevice" }
+
+ [provisioning.attestation.identity_cert]
+ method = "est"
+ common_name = "myiotedgedevice"
# Auto renewal settings for the identity cert # Available only from IoT Edge 1.3 and above
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Previously updated : 07/30/2020 Last updated : 07/30/2021
iot-edge Tutorial Nested Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge-for-linux-on-windows.md
monikerRange: ">=iotedge-2020-11"
# Tutorial: Create a hierarchy of IoT Edge devices using IoT Edge for Linux on Windows Deploy Azure IoT Edge nodes across networks organized in hierarchical layers. Each layer in a hierarchy is a gateway device that handles messages and requests from devices in the layer beneath it.
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
description: This tutorial shows you how to create a hierarchical structure of I
Previously updated : 03/31/2022 Last updated : 08/31/2022
monikerRange: ">=iotedge-2020-11"
# Tutorial: Create a hierarchy of IoT Edge devices Deploy Azure IoT Edge nodes across networks organized in hierarchical layers. Each layer in a hierarchy is a gateway device that handles messages and requests from devices in the layer beneath it. This setup is also known as "nested edge".
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Title: IoT Edge version history and release notes
description: Release history and notes for IoT Edge. Previously updated : 08/25/2022 Last updated : 10/24/2022
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
result = iothub_job_manager.create_import_export_job(JobProperties(
- [.NET SDK sample](https://aka.ms/iothubmsicsharpsample) - [Java SDK sample](https://aka.ms/iothubmsijavasample) - [Python SDK sample](https://github.com/Azure/azure-iot-hub-python/tree/main/samples)-- Node.js SDK samples: [bulk device import](https://aka.ms/iothubmsinodesampleimport), [bulk device export](https://aka.ms/iothubmsinodesampleexport) ## Next steps
iot-hub Iot Hub Python Python Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-module-twin-getstarted.md
ms.devlang: python Previously updated : 04/03/2020 Last updated : 01/04/2023
At the end of this article, you have three Python apps:
* **ReceiveModuleTwinDesiredPropertiesPatch**: receives the module twin, desired properties patch on your device. > [!NOTE]
-> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
+> For more information about the SDK tools available to build both device and back-end apps, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
## Prerequisites
In this section, you create a Python app to get the module twin desired properti
import time from azure.iot.device import IoTHubModuleClient
- CONNECTION_STRING = "YourIotHubConnectionString"
+ CONNECTION_STRING = "YourModuleConnectionString"
def twin_patch_handler(twin_patch):
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
tags: azure-resource-manager
Previously updated : 09/04/2019 Last updated : 01/04/2023
Azure Key Vault certificate support provides for management of your X.509 certificates and the following behaviors: -- Allows a certificate owner to create a certificate through a key vault creation process or through the import of an existing certificate. This includes both self-signed certificates and certificates that are generated from a certificate authority (CA).
+- Allows a certificate owner to create a certificate through a key vault creation process or through the import of an existing certificate. Imported certificates include both self-signed certificates and certificates that are generated from a certificate authority (CA).
- Allows a Key Vault certificate owner to implement secure storage and management of X.509 certificates without interacting with private key material. - Allows a certificate owner to create a policy that directs Key Vault to manage the lifecycle of a certificate. - Allows a certificate owner to provide contact information for notifications about the lifecycle events of expiration and renewal.
A Key Vault certificate has the following attribute:
- `enabled`: This Boolean attribute is optional. Default is `true`. It can be specified to indicate if the certificate data can be retrieved as a secret or operable as a key.
- This attribute is also used in conjunction with `nbf` and `exp` when an operation occurs between `nbf` and `exp`, but only if `enabled` is set to `true`. Operations outside the `nbf` and `exp` window are automatically disallowed.
+ This attribute is also used with `nbf` and `exp` when an operation occurs between `nbf` and `exp`, but only if `enabled` is set to `true`. Operations outside the `nbf` and `exp` window are automatically disallowed.
A response includes these additional read-only attributes:
At a high level, a certificate policy contains the following information:
- X.509 certificate properties, which include subject name, subject alternate names, and other properties that are used to create an X.509 certificate request. - Key properties, which include key type, key length, exportable, and `ReuseKeyOnRenewal` fields. These fields instruct Key Vault on how to generate a key.
- [Supported key types](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype) are RSA, RSA-HSM, EC, EC-HSM, and oct.
+ [Supported key types](/rest/api/keyvault/certificates/create-certificate/create-certificate#jsonwebkeytype) are RSA, RSA-HSM, EC, EC-HSM, and oct.
- Secret properties, such as the content type of an addressable secret to generate the secret value, for retrieving a certificate as a secret. - Lifetime actions for the Key Vault certificate. Each lifetime action contains:
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-node.md
Title: Quickstart - Azure Key Vault certificate client library for JavaScript (
description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the JavaScript client library Previously updated : 12/13/2021 Last updated : 01/04/2023
For more information about Key Vault and certificates, see:
- [Azure portal](../general/quick-create-portal.md) - [Azure PowerShell](../general/quick-create-powershell.md)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli).
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli).
## Sign in to Azure
export KEY_VAULT_NAME=<your-key-vault-name>
## Code example
-The code samples below will show you how to create a client, set a certificate, retrieve a certificate, and delete a certificate.
+These code samples demonstrate how to create a client, set a certificate, retrieve a certificate, and delete a certificate.
### Set up the app framework
The code samples below will show you how to create a client, set a certificate,
## Integrating with App Configuration
-The Azure SDK provides a helper method, [parseKeyVaultCertificateIdentifier](/javascript/api/@azure/keyvault-certificates#parseKeyVaultCertificateIdentifier_string_), to parse the given Key Vault certificate ID. This is necessary if you use [App Configuration](../../azure-app-configuration/index.yml) references to Key Vault. App Config stores the Key Vault certificate ID. You need the _parseKeyVaultCertificateIdentifier_ method to parse that ID to get the certificate name. Once you have the certificate name, you can get the current certificate using code from this quickstart.
+The Azure SDK provides a helper method, [parseKeyVaultCertificateIdentifier](/javascript/api/@azure/keyvault-certificates#parseKeyVaultCertificateIdentifier_string_), to parse the given Key Vault certificate ID, which is necessary if you use [App Configuration](../../azure-app-configuration/index.yml) references to Key Vault. App Config stores the Key Vault certificate ID. You need the _parseKeyVaultCertificateIdentifier_ method to parse that ID to get the certificate name. Once you have the certificate name, you can get the current certificate using code from this quickstart.
## Next steps
-In this quickstart, you created a key vault, stored a certificate, and retrieved that certificate. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a key vault, stored a certificate, and retrieved that certificate. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - Read an [Overview of certificates](about-certificates.md)
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.| | Azure Database for MySQL | [Data encryption for Azure Database for MySQL](../../mysql/howto-data-encryption-cli.md) | | Azure Database for PostgreSQL Single server | [Data encryption for Azure Database for PostgreSQL Single server](../../postgresql/howto-data-encryption-cli.md) |
+| Azure Database for PostgreSQL Flexible server | [Data encryption for Azure Database for PostgreSQL Flexible server](../../postgresql/flexible-server/concepts-data-encryption.md) |
| Azure Databricks|[Fast, easy, and collaborative Apache SparkΓÇôbased analytics service](/azure/databricks/scenarios/what-is-azure-databricks)| | Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).| | Azure Disk Storage | When configured with a Disk Encryption Set (DES). For more information, see [Server-side encryption of Azure Disk Storage using customer-managed keys](../../virtual-machines/disk-encryption.md#customer-managed-keys).|
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/04/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure # Quickstart: Set and retrieve a key from Azure Key Vault using Azure CLI
-In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault you may review the [Overview](../general/overview.md). Azure CLI is used to create and manage Azure resources using commands or scripts. Once that you have completed that, you will store a key.
+In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, review the [Overview](../general/overview.md). Azure CLI is used to create and manage Azure resources using commands or scripts. Once you've done that, you'll store a key.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
To add a key to the vault, you just need to take a couple of additional steps. This key could be used by an application.
-Type the commands below to create a key called **ExampleKey** :
+Type this command to create a key called **ExampleKey**:
```azurecli az keyvault key create --vault-name "<your-unique-keyvault-name>" -n ExampleKey --protection software
To view previously stored key:
az keyvault key show --name "ExampleKey" --vault-name "<your-unique-keyvault-name>" ```
-Now, you have created a Key Vault, stored a key, and retrieved it.
+Now, you've created a Key Vault, stored a key, and retrieved it.
## Clean up resources
Now, you have created a Key Vault, stored a key, and retrieved it.
## Next steps
-In this quickstart you created a Key Vault and stored a key in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a Key Vault and stored a key in it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure CLI az keyvault commands](/cli/azure/keyvault)
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
description: Provides a quickstart for the Azure Key Vault Keys client library f
Previously updated : 01/05/2021 Last updated : 01/04/2023
ms.devlang: java
# Quickstart: Azure Key Vault Key client library for Java
-Get started with the Azure Key Vault Key client library for Java. Follow the steps below to install the package and try out example code for basic tasks.
+
+Get started with the Azure Key Vault Key client library for Java. Follow these steps to install the package and try out example code for basic tasks.
Additional resources:
Additional resources:
- [Apache Maven](https://maven.apache.org) - [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window.
## Setting up This quickstart uses the Azure Identity library with Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/java/api/overview/azure/identity-readme).
export KEY_VAULT_NAME=<your-key-vault-name>
## Object model The Azure Key Vault Key client library for Java allows you to manage keys. The [Code examples](#code-examples) section shows how to create a client, create a key, retrieve a key, and delete a key.
-The entire console app is [below](#sample-code).
+The entire console app is supplied in [Sample code](#sample-code).
## Code examples ### Add directives
import com.azure.security.keyvault.keys.models.KeyVaultKey;
``` ### Authenticate and create a client+ In this quickstart, a logged-in user is used to authenticate to Key Vault, which is the preferred method for local development. For applications deployed to Azure, a Managed Identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
-In the example below, the name of your key vault is expanded to the key vault URI, in the format "https://\<your-key-vault-name\>.vault.azure.net". This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows you to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
```java String keyVaultName = System.getenv("KEY_VAULT_NAME");
KeyClient keyClient = new KeyClientBuilder()
``` ### Create a key
-Now that your application is authenticated, you can create a key in your key vault using the `keyClient.createKey` method. This requires a name for the key and a key type -- we've assigned the value "myKey" to the `keyName` variable and use a an RSA `KeyType` in this sample.
+Now that your application is authenticated, you can create a key in your key vault using the `keyClient.createKey` method. This requires a name for the key and a key type. We've assigned the value "myKey" to the `keyName` variable and use an RSA `KeyType` in this sample.
```java keyClient.createKey(keyName, KeyType.RSA);
public class App {
``` ## Next steps
-In this quickstart you created a key vault, created a key, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a key vault, created a key, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - Read the [Key Vault security overview](../general/security-features.md)
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-net.md
Title: Quickstart - Azure Key Vault keys client library for .NET (version 4)
description: Learn how to create, retrieve, and delete keys from an Azure key vault using the .NET client library (version 4) Previously updated : 09/23/2020 Last updated : 01/04/2023
using Azure.Security.KeyVault.Keys;
In this quickstart, a logged-in user is used to authenticate to Key Vault, which is the preferred method for local development. For applications deployed to Azure, a managed identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
-In below example, the name of your key vault is expanded to the key vault URI, in the format "https://\<your-key-vault-name\>.vault.azure.net". This example is using ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows to use the same code across different environments with different options to provide identity. Fore more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. This example is using the ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from the [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows you to use the same code across different environments with different options to provide identity. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
```csharp var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
var key = await client.CreateKeyAsync("myKey", KeyType.Rsa);
``` > [!NOTE]
-> If key name exists, above code will create new version of that key.
+> If the key name exists, this code creates a new version of that key.
### Retrieve a key
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-node.md
Title: Quickstart - Azure Key Vault key client library for JavaScript (version
description: Learn how to create, retrieve, and delete keys from an Azure key vault using the JavaScript client library Previously updated : 12/13/2021 Last updated : 01/04/2023
For more information about Key Vault and keys, see:
- [Azure portal](../general/quick-create-portal.md) - [Azure PowerShell](../general/quick-create-powershell.md)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli).
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli).
## Sign in to Azure
export KEY_VAULT_NAME=<your-key-vault-name>
## Code example
-The code sample below will show you how to create a client, set a key, retrieve a key, and delete a key.
+This code sample demonstrates how to create a client, set a key, retrieve a key, and delete a key.
### Set up the app framework
The Azure SDK provides a helper method, [parseKeyVaultKeyIdentifier](/javascript
## Next steps
-In this quickstart, you created a key vault, stored a key, and retrieved that key. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a key vault, stored a key, and retrieved that key. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - Read an [Overview of Azure Key Vault Keys](about-keys.md)
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-portal.md
Previously updated : 03/24/2020 Last updated : 01/04/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys in Azure
Sign in to the Azure portal at https://portal.azure.com.
To add a key to the vault, you just need to take a couple of additional steps. In this case, we add a key that could be used by an application. The key is called **ExampleKey**. 1. On the Key Vault properties pages, select **Keys**.
-2. Click on **Generate/Import**.
+2. Select **Generate/Import**.
3. On the **Create a key** screen choose the following values: - **Options**: Generate. - **Name**: ExampleKey.
- - Leave the other values to their defaults. Click **Create**.
+ - Leave the other values to their defaults. Select **Create**.
## Retrieve a key from Key Vault
-Once that you receive the message that the key has been successfully created, you may click on it on the list. You can then see some of the properties and click **Download public key** to retrieve the key.
+Once you receive the message that the key has been successfully created, you can select it in the list. You can then see some of the properties and select **Download public key** to retrieve the key.
:::image type="content" source="../media/keys/quick-create-portal/current-version-hidden.png" alt-text="Key properties":::
When no longer needed, delete the resource group, which deletes the Key Vault an
## Next steps
-In this quickstart, you created a Key Vault and stored a key in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a Key Vault and stored a key in it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/04/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure # Quickstart: Set and retrieve a key from Azure Key Vault using Azure PowerShell
-In this quickstart, you create a key vault in Azure Key Vault with Azure PowerShell. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault you may review the [Overview](../general/overview.md). Azure PowerShell is used to create and manage Azure resources using commands or scripts. Once that you have completed that, you will store a key.
+In this quickstart, you create a key vault in Azure Key Vault with Azure PowerShell. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, review the [Overview](../general/overview.md). Azure PowerShell is used to create and manage Azure resources using commands or scripts. Once you've done that, you'll store a key.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
```azurepowershell-interactive Login-AzAccount
Login-AzAccount
To add a key to the vault, you just need to take a couple of additional steps. This key could be used by an application.
-Type the commands below to create a called **ExampleKey** :
+Type this command to create a key called **ExampleKey**:
```azurepowershell-interactive Add-AzKeyVaultKey -VaultName "<your-unique-keyvault-name>" -Name "ExampleKey" -Destination "Software"
To view previously stored key:
Get-AzKeyVaultKey -VaultName "<your-unique-keyvault-name>" -KeyName "ExampleKey" ```
-Now, you have created a Key Vault, stored a key, and retrieved it.
+Now, you've created a Key Vault, stored a key, and retrieved it.
## Clean up resources
Now, you have created a Key Vault, stored a key, and retrieved it.
## Next steps
-In this quickstart you created a Key Vault and stored a certificate in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a Key Vault and stored a certificate in it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/)
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/access-control.md
tags: azure-resource-manager
Previously updated : 02/17/2021 Last updated : 01/04/2023 # Customer intent: As the admin for managed HSMs, I want to set access policies and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for these managed HSMs.
Azure Key Vault Managed HSM is a cloud service that safeguards encryption keys.
## Access control model
-Access to a managed HSM is controlled through two interfaces: the **management plane** and the **data plane**. The management plane is where you manage the HSM itself. Operations in this plane include creating and deleting managed HSMs and retrieving managed HSM properties. The data plane is where you work with the data stored in an managed HSM -- that is HSM-backed encryption keys. You can add, delete, modify, and use keys to perform cryptographic operations, manage role assignments to control access to the keys, create a full HSM backup, restore full backup, and manage security domain from the data plane interface.
+Access to a managed HSM is controlled through two interfaces: the **management plane** and the **data plane**. The management plane is where you manage the HSM itself. Operations in this plane include creating and deleting managed HSMs and retrieving managed HSM properties. The data plane is where you work with the data stored in a managed HSM, that is, the HSM-backed encryption keys. You can add, delete, modify, and use keys to perform cryptographic operations, manage role assignments to control access to the keys, create a full HSM backup, restore a full backup, and manage the security domain from the data plane interface.
To access a managed HSM in either plane, all callers must have proper authentication and authorization. Authentication establishes the identity of the caller. Authorization determines which operations the caller can execute. A caller can be any one of the [security principals](../../role-based-access-control/overview.md#security-principal) defined in Azure Active Directory - user, group, service principal or managed identity.
-Both planes use Azure Active Directory for authentication. For authorization they use different systems as follows
-- The management plane uses Azure role-based access control -- Azure RBAC -- an authorization system built on Azure Azure Resource Manager -- The data plane uses a managed HSM-level RBAC (Managed HSM local RBAC) -- an authorization system implemented and enforced at the managed HSM level.
+Both planes use Azure Active Directory for authentication. For authorization, they use different systems as follows:
+- The management plane uses Azure role-based access control (Azure RBAC), an authorization system built on Azure Resource Manager.
+- The data plane uses a managed HSM-level RBAC (Managed HSM local RBAC), an authorization system implemented and enforced at the managed HSM level.
When a managed HSM is created, the requestor also provides a list of data plane administrators (all [security principals](../../role-based-access-control/overview.md#security-principal) are supported). Only these administrators are able to access the managed HSM data plane to perform key operations and manage data plane role assignments (Managed HSM local RBAC).
-Permission model for both planes uses the same syntax, but they are enforced at different levels and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager while data plane Managed HSM local RBAC is enforced by managed HSM itself.
+The permission model for both planes uses the same syntax, but permissions are enforced at different levels, and role assignments use different scopes. Management plane Azure RBAC is enforced by Azure Resource Manager, while data plane Managed HSM local RBAC is enforced by the managed HSM itself.
> [!IMPORTANT] > Granting a security principal management plane access to a managed HSM does not grant them any access to the data plane to access keys or data plane role assignments (Managed HSM local RBAC). This isolation is by design to prevent inadvertent expansion of privileges affecting access to keys stored in Managed HSM.
-For example, a subscription administrator (since they have "Contributor" permission to all resources in the subscription) can delete an managed HSM in their subscription, but if they don't have data plane access specifically granted through Managed HSM local RBAC, they cannot gain access to keys or manage role assignment in the managed HSM to grant themselves or others access to data plane.
+For example, a subscription administrator (since they have "Contributor" permission to all resources in the subscription) can delete a managed HSM in their subscription, but if they don't have data plane access specifically granted through Managed HSM local RBAC, they can't gain access to keys or manage role assignments in the managed HSM to grant themselves or others access to the data plane.
## Azure Active Directory authentication
-When you create an managed HSM in an Azure subscription, it's automatically associated with the Azure Active Directory tenant of the subscription. All callers in both planes must be registered in this tenant and authenticate to access the managed HSM.
+When you create a managed HSM in an Azure subscription, it's automatically associated with the Azure Active Directory tenant of the subscription. All callers in both planes must be registered in this tenant and authenticate to access the managed HSM.
The application authenticates with Azure Active Directory before calling either plane. The application can use any [supported authentication method](../../active-directory/develop/authentication-vs-authorization.md) based on the application type. The application acquires a token for a resource in the plane to gain access. The resource is an endpoint in the management or data plane, based on the Azure environment. The application uses the token and sends a REST API request to Managed HSM endpoint. To learn more, review the [whole authentication flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
The use of a single authentication mechanism for both planes has several benefit
## Resource endpoints
-Security principals access the planes through endpoints. The access controls for the two planes work independently. To grant an application access to use keys in an managed HSM, you grant data plane access by using Managed HSM local RBAC. To grant a user access to Managed HSM resource to create, read, delete, move the managed HSMs and edit other properties and tags you use Azure RBAC.
+Security principals access the planes through endpoints. The access controls for the two planes work independently. To grant an application access to use keys in a managed HSM, you grant data plane access by using Managed HSM local RBAC. To grant a user access to the Managed HSM resource so they can create, read, delete, and move managed HSMs and edit other properties and tags, use Azure RBAC.
The following table shows the endpoints for the management and data planes.
There are several predefined roles. If a predefined role doesn't fit your needs,
## Data plane and Managed HSM local RBAC
-You grant a security principal access to execute specific key operations by assigning a role. For each role assignment you need to specify a role and scope over which that assignment applies. For Managed HSM local RBAC two scopes are available.
+You grant a security principal access to execute specific key operations by assigning a role. For each role assignment, you must specify a role and scope over which that assignment applies. For Managed HSM local RBAC, two scopes are available; a sample key-scoped assignment follows the list.
- **"/" or "/keys"**: HSM level scope. Security principals assigned a role at this scope can perform the operations defined in the role for all objects (keys) in the managed HSM. - **"/keys/&lt;key-name&gt;"**: Key level scope. Security principals assigned a role at this scope can perform the operations defined in this role for all versions of the specified key only.
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/azure-policy.md
+
+ Title: Integrate Azure Managed HSM with Azure Policy
+description: Learn how to integrate Azure Managed HSM with Azure Policy
++ Last updated : 03/31/2021++++++
+# Integrate Azure Managed HSM with Azure Policy
+
+[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they'll be able to see a drill-down of which resources and components are compliant and which aren't. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md).
+
+Example Usage Scenarios:
+
+- You currently don't have a solution to perform an audit across your organization, or you are conducting manual audits of your environment by asking individual teams within your organization to report their compliance. You're looking for a way to automate this task, perform audits in real time, and guarantee the accuracy of the audit.
+- You want to enforce your company security policies and stop individuals from creating certain cryptographic keys, but you don't have an automated way to block their creation.
+- You want to relax some requirements for your test teams, but you want to maintain tight controls over your production environment. You need a simple automated way to separate enforcement of your resources.
+- You want to be sure that you can roll back enforcement of new policies if there's a live-site issue. You need a one-click solution to turn off enforcement of the policy.
+- You are relying on a third-party solution for auditing your environment and you want to use an internal Microsoft offering.
+
+## Types of policy effects and guidance
+
+**Audit**: When the effect of a policy is set to audit, the policy will not cause any breaking changes to your environment. It will only alert you to components such as keys that do not comply with the policy definitions within a specified scope, by marking these components as non-compliant in the policy compliance dashboard. Audit is the default if no policy effect is selected.
+
+**Deny**: When the effect of a policy is set to deny, the policy will block the creation of new components such as weaker keys, and will block new versions of existing keys that do not comply with the policy definition. Existing non-compliant resources within a Managed HSM are not affected. The 'audit' capabilities will continue to operate.
++
+### Keys using elliptic curve cryptography should have the specified curve names
+
+If you use elliptic curve cryptography (ECC) keys, you can customize an allowed list of curve names. The default option allows all the following curve names.
+
+- P-256
+- P-256K
+- P-384
+- P-521
+
+### Keys should have expiration dates set
+
+This policy audits all keys in your Managed HSMs and flags keys that do not have an expiration date set as non-compliant. You can also use this policy to block the creation of keys that do not have an expiration date set.
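+As an illustration, one way to bring a non-compliant key into line is to set an expiration date on it. A minimal sketch, assuming placeholder HSM and key names and an ISO 8601 date for `--expires`:
+
+```azurecli-interactive
+# Set an expiration date on an existing Managed HSM key so the audit no longer flags it
+az keyvault key set-attributes --hsm-name <hsm name> --name <key name> --expires "2025-12-31T23:59:59Z"
+```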
+
+### Keys should have more than the specified number of days before expiration
+
+If a key is too close to expiration, an organizational delay to rotate the key may result in an outage. Keys should be rotated at a specified number of days prior to expiration to provide sufficient time to react to a failure. This policy audits keys that are too close to their expiration date and lets you set this threshold in days. You can also use this policy to prevent the creation of new keys that are too close to their expiration date.
+
+### Keys using RSA cryptography should have a specified minimum key size
+
+Using RSA keys with smaller key sizes is not a secure design practice. You may be subject to audit and certification standards that mandate the use of a minimum key size. The following policy allows you to set a minimum key size requirement on your Managed HSM. You can audit keys that do not meet this minimum requirement. This policy can also be used to block the creation of new keys that do not meet the minimum key size requirement.
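+As an example of a compliant key under a 3072-bit minimum, a creation command might look like this sketch (placeholder HSM and key names; the minimum itself is set in the policy assignment, not on the key):
+
+```azurecli-interactive
+# Create an RSA-HSM key whose size satisfies a 3072-bit minimum key size policy
+az keyvault key create --hsm-name <hsm name> --name <key name> --kty RSA-HSM --size 3072
+```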
+
+## Enabling and managing a Managed HSM policy through the Azure CLI
+
+### Register preview feature in your subscription
+
+In the subscription that the customer owns, run the following Azure CLI command as a Contributor or Owner of the subscription:
+
+```azurecli-interactive
+az feature register --namespace Microsoft.KeyVault --name MHSMGovernance
+```
+
+If there are existing HSM pools in this subscription, the update will be applied to those pools. Full enablement of the policy may take up to 30 minutes. See [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md?tabs=azure-cli).
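+To check whether the registration has completed, you can query the feature state; a minimal sketch using the standard feature-registration commands (nothing Managed HSM-specific is assumed):
+
+```azurecli-interactive
+# Reports "Registered" once the preview feature is enabled on the subscription
+az feature show --namespace Microsoft.KeyVault --name MHSMGovernance --query properties.state
+```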
+
+### Giving permission to scan daily
+
+To check the compliance of the pool's inventory keys, the customer must assign the "Managed HSM Crypto Auditor" role to "Azure Key Vault Managed HSM Key Governance Service" (App ID: a1b76039-a76c-499f-a2dd-846b4cc32627) so that it can access key metadata. Without this permission, inventory keys won't be reported in the Azure Policy compliance report; only new, updated, imported, and rotated keys will be checked for compliance. To grant the permission, a user who has the "Managed HSM Administrator" role on the Managed HSM needs to run the following Azure CLI commands:
+
+On Windows:
+
+```azurecli-interactive
+az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId
+```
+
+Copy the object ID that's printed, and paste it into the following command:
+
+```azurecli-interactive
+az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" --assignee-object-id "the id printed in previous command" --hsm-name <hsm name>
+```
+
+On Linux or Windows Subsystem for Linux:
+
+```azurecli-interactive
+spId=$(az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId|cut -d "\"" -f2)
+echo $spId
+az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" --assignee-object-id $spId --hsm-name <hsm name>
+```
+
+### Create policy assignments - define rules of audit and/or deny
+
+Policy assignments have concrete values defined for policy definitions' parameters. In the [Azure portal](https://portal.azure.com/?Microsoft_Azure_ManagedHSM_assettypeoptions=%7B%22ManagedHSM%22:%7B%22options%22:%22%22%7D%7D&Microsoft_Azure_ManagedHSM=true&feature.canmodifyextensions=true}), go to "Policy", filter on the "Key Vault" category, and find these four preview key governance policy definitions. Select one, then select the "Assign" button at the top. Fill in each field. If the policy assignment is for request denials, use a clear name for the assignment because, when a request is denied, the assignment's name will appear in the error. Select "Next", clear "Only show parameters that need input or review", and enter values for the parameters of the policy definition. Skip "Remediation", and create the assignment. The service needs up to 30 minutes to enforce "Deny" assignments.
+
+- [Preview]: Azure Key Vault Managed HSM keys should have an expiration date
+- [Preview]: Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size
+- [Preview]: Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration
+- [Preview]: Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names
+
+You can also do this operation using the Azure CLI. See [Create a policy assignment to identify non-compliant resources with Azure CLI](../../governance/policy/assign-policy-azurecli.md).
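+For reference, a hedged sketch of such a CLI assignment; the assignment name, scope, and parameter values here are placeholders, and you'd look up the definition ID of the preview definition you want to assign:
+
+```azurecli-interactive
+# Assign a key governance policy definition at resource group scope (placeholder values)
+az policy assignment create \
+  --name "mhsm-keys-should-expire" \
+  --policy "<policy-definition-name-or-id>" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
+  --params '{ "effect": { "value": "Audit" } }'
+```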
+
+### Test your setup
+
+Try to update or create a key that violates the rule. If you have a policy assignment with the effect "Deny", the request returns a 403 error.
+Review the scan results of inventory keys for auditing policy assignments. After 12 hours, check the Policy Compliance menu, filter on the "Key Vault" category, and find your assignments. Select each of them to check the compliance result report.
+
+## Troubleshooting
+
+If there are no compliance results for a pool after one day, check whether the role assignment in step 2 was completed successfully. Without step 2, the key governance service won't be able to access key metadata. The Azure CLI `az keyvault role assignment list` command can verify whether the role has been assigned.
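+A minimal sketch of that check, assuming the same HSM name placeholder used in the commands earlier in this article:
+
+```azurecli-interactive
+# The key governance service principal should appear with the "Managed HSM Crypto Auditor" role at scope /
+az keyvault role assignment list --hsm-name <hsm name> --scope / --role "Managed HSM Crypto Auditor"
+```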
+
+## Next Steps
+
+- [Logging and frequently asked questions for Azure policy for key vault](../general/troubleshoot-azure-policy-for-key-vault.md)
+- Learn more about the [Azure Policy service](../../governance/policy/overview.md)
+- See Key Vault samples: [Key Vault built-in policy definitions](../../governance/policy/samples/built-in-policies.md#key-vault)
+- Learn about [Microsoft cloud security benchmark on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
tags: azure-key-vault
Previously updated : 09/15/2020 Last updated : 01/04/2023 # Customer intent: As a developer using Key Vault I want to know the best practices so I can implement them.
You must provide the following information to execute a full restore:
- Storage account name - Storage account blob container - Storage container SAS token with permissions `rl`-- Storage container folder name where the source backup is store
+- Storage container folder name where the source backup is stored
Restore is a long-running operation but will immediately return a Job ID. You can check the status of the restore process using this Job ID. When the restore process is in progress, the HSM enters a restore mode and all data plane commands (except check restore status) are disabled.
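Put together from those four pieces, a restore invocation might look like the following sketch; the storage names and backup folder are placeholders, `$sas` holds the container SAS token with `rl` permissions from the list above, and the same command appears with concrete values in the disaster recovery guidance later in this document:

```azurecli-interactive
# Start a full restore from a backup folder created by a previous 'az keyvault backup start'
az keyvault restore start \
  --hsm-name <your-hsm-name> \
  --storage-account-name <storage-account> \
  --blob-container-name <blob-container> \
  --storage-container-SAS-token $sas \
  --backup-folder <backup-folder-name>
```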
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
tags: azure-key-vault
Previously updated : 06/21/2021 Last updated : 01/04/2023 # Customer intent: As a developer using Managed HSM I want to know the best practices so I can implement them.
## Control Access to your managed HSM Managed HSM is a cloud service that safeguards encryption keys. As these keys are sensitive and business critical, make sure to secure access to your managed HSMs by allowing only authorized applications and users. This [article](access-control.md) provides an overview of the access model. It explains authentication and authorization, and role-based access control.-- Create an [Azure Active Directory Security Group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning Administrator role to individuals). This will prevent "administration lock-out" in case of individual account deletion.
+- Create an [Azure Active Directory Security Group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning the Administrator role to individuals), to prevent "administration lock-out" if an individual account is deleted.
- Lock down access to your management groups, subscriptions, resource groups and Managed HSMs - Use Azure RBAC to control access to your management groups, subscriptions, and resource groups - Create per key role assignments using [Managed HSM local RBAC](access-control.md#data-plane-and-managed-hsm-local-rbac).-- To maintain separation of duties avoid assigning multiple roles to same principals.
+- To maintain separation of duties, avoid assigning multiple roles to the same principals.
- Use least privilege access principal to assign roles. - Create custom role definition with precise set of permissions.
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Previously updated : 06/01/2021 Last updated : 01/04/2023
Managed HSM local RBAC has several built-in roles. You can assign these roles to
## Permitted operations > [!NOTE] > - An 'X' indicates that a role is allowed to perform the data action. An empty cell indicates the role does not have permission to perform that data action.
-> - All the data action names have a 'Microsoft.KeyVault/managedHsm' prefix, which is omitted in the tables below for brevity.
+> - All the data action names have a 'Microsoft.KeyVault/managedHsm' prefix, which is omitted in the tables for brevity.
> - All role names have a prefix "Managed HSM" which is omitted in the below table for brevity. |Data Action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption | Backup | Crypto Auditor|
key-vault Disaster Recovery Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/disaster-recovery-guide.md
Title: What to do if there if an Azure service disruption that affects Managed HSM - Azure Key Vault | Microsoft Docs
-description: Learn what to do f there is an Azure service disruption that affects Managed HSM.
+ Title: What to do if there's an Azure service disruption that affects Managed HSM - Azure Key Vault | Microsoft Docs
+description: Learn what to do if there's an Azure service disruption that affects Managed HSM.
Previously updated : 09/15/2020 Last updated : 01/04/2023
Here are the steps of the disaster recovery procedure:
1. Create a new HSM Instance. 2. Activate "Security Domain recovery". A new RSA key pair (Security Domain Exchange Key) will be generated for Security Domain transfer and sent in response, which will be downloaded as a SecurityDomainExchangeKey (public key).
-3. Create and then upload the "Security Domain Transfer File". You will need the private keys that encrypt the security domain. The private keys are used locally, and never transferred anywhere in this process.
+3. Create and then upload the "Security Domain Transfer File". You'll need the private keys that encrypt the security domain. The private keys are used locally, and never transferred anywhere in this process.
4. Take a backup of the new HSM. A backup is required before any restore, even when the HSM is empty. Backups allow for easy roll-back. 5. Restore the recent HSM backup from the source HSM.
Your Azure account is now authorized to perform any operations on this Managed H
## Activate the Security Domain recovery mode
-At this point in the normal creation process, we initialize and download the new HSM's Security Domain. However, since we are executing a disaster recovery procedure, we request the HSM to enter Security Domain Recovery Mode and download a Security Domain Exchange Key instead. The Security Domain Exchange Key is an RSA public key that will be used to encrypt the security domain before uploading it to the HSM. The corresponding private key is protected inside the HSM, to keep your Security Domain contents safe during the transfer.
+At this point in the normal creation process, we initialize and download the new HSM's Security Domain. However, since we're executing a disaster recovery procedure, we request the HSM to enter Security Domain Recovery Mode and download a Security Domain Exchange Key instead. The Security Domain Exchange Key is an RSA public key that will be used to encrypt the security domain before uploading it to the HSM. The corresponding private key is protected inside the HSM, to keep your Security Domain contents safe during the transfer.
```azurecli-interactive az keyvault security-domain init-recovery --hsm-name ContosoMHSM2 --sd-exchange-key ContosoMHSM2-SDE.cer
az keyvault security-domain init-recovery --hsm-name ContosoMHSM2 --sd-exchange-
## Upload Security Domain to destination HSM
-For this step you will need:
+For this step, you'll need the following items (a sketch of the upload command follows this list):
- The Security Domain Exchange Key you downloaded in previous step. - The Security Domain of the source HSM. - At least quorum number of private keys that were used to encrypt the security domain.
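With those items in hand, the upload is a single CLI call; a sketch, assuming the exchange key file downloaded in the previous step, a placeholder security domain file, and a quorum of the original wrapping private keys passed to `--sd-wrapping-keys` (parameter names assume the current `az keyvault security-domain upload` syntax):

```azurecli-interactive
# Upload the source HSM's security domain to the new HSM (placeholder file names)
az keyvault security-domain upload \
  --hsm-name ContosoMHSM2 \
  --sd-exchange-key ContosoMHSM2-SDE.cer \
  --sd-file <security-domain-file>.json \
  --sd-wrapping-keys <private-key-1>.pem <private-key-2>.pem
```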
Now both the source HSM (ContosoMHSM) and the destination HSM (ContosoMHSM2) hav
## Create a backup (as a restore point) of your new HSM
-It is always a good idea to take a full backup before you execute a full HSM restore, so that you have a restore point in case something goes wrong with the restore.
+It's always a good idea to take a full backup before you execute a full HSM restore, so that you have a restore point in case something goes wrong with the restore.
-To create an HSM backup, you will need:
+To create an HSM backup, you'll need:
- A storage account where the backup will be stored - A blob storage container in this storage account where the backup process will create a new folder to store encrypted backup
az keyvault backup start --hsm-name ContosoMHSM2 --storage-account-name ContosoB
For this step you need: -- The storage account and the blob container where the source HSM's backups are stored.
+- The storage account and the blob container in which the source HSM's backups are stored.
- The folder name from where you want to restore the backup. If you create regular backups, there will be many folders inside this container.
sas=$(az storage container generate-sas -n mhsmdemobackupcontainer --account-nam
az keyvault restore start --hsm-name ContosoMHSM2 --storage-account-name ContosoBackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --backup-folder mhsm-ContosoMHSM-2020083120161860 ```
-Now you have completed a full disaster recovery process. The contents of the source HSM when the backup was taken are copied to the destination HSM, including all the keys, versions, attributes, tags, and role assignments.
+Now you've completed a full disaster recovery process. The contents of the source HSM when the backup was taken are copied to the destination HSM, including all the keys, versions, attributes, tags, and role assignments.
## Next steps
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
tags: azure-resource-manager
Previously updated : 02/04/2021 Last updated : 01/04/2023
Here's an overview of the process. Specific steps to complete are described late
* Download the KEK public key as a .pem file. * Transfer the KEK public key to an offline computer that is connected to an on-premises HSM. * In the offline computer, use the BYOK tool provided by your HSM vendor to create a BYOK file.
-* The target key is encrypted with a KEK, which stays encrypted until it is transferred to the Managed HSM. Only the encrypted version of your key leaves the on-premises HSM.
-* A KEK that's generated inside a Managed HSM is not exportable. HSMs enforce the rule that no clear version of a KEK exists outside a Managed HSM.
+* The target key is encrypted with a KEK, which stays encrypted until it's transferred to the Managed HSM. Only the encrypted version of your key leaves the on-premises HSM.
+* A KEK that's generated inside a Managed HSM isn't exportable. HSMs enforce the rule that no clear version of a KEK exists outside a Managed HSM.
* The KEK must be in the same managed HSM where the target key will be imported. * When the BYOK file is uploaded to Managed HSM, a Managed HSM uses the KEK private key to decrypt the target key material and import it as an HSM key. This operation happens entirely inside the HSM. The target key always remains in the HSM protection boundary.
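As a rough sketch, the final transfer of the BYOK file into the Managed HSM is a single Azure CLI call; the key name and file name below are placeholders for your own values.

```azurecli-interactive
# Sketch only: KeyTransferPackage-ContosoFirstHSMkey.byok is a placeholder for the
# BYOK file produced by your HSM vendor's tool.
az keyvault key import --hsm-name ContosoKeyVaultHSM \
    --name ContosoFirstHSMkey \
    --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
```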
To use the Azure CLI commands in this article, you must have the following items
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-To sign in to Azure using the CLI you can type:
+To sign in to Azure using the CLI, type:
```azurecli az login ```
-For more information on login options via the CLI take a look at [sign in with Azure CLI](/cli/azure/authenticate-azure-cli)
+For more information on login options via the CLI, take a look at [sign in with Azure CLI](/cli/azure/authenticate-azure-cli)
## Supported HSMs
The KEK must be:
> [!NOTE] > The KEK must have 'import' as the only allowed key operation. 'import' is mutually exclusive with all other key operations.
-Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command to create a KEK that has key operations set to `import`. Record the key identifier (`kid`) that's returned from the following command. (You will use the `kid` value in [Step 3](#step-3-generate-and-prepare-your-key-for-transfer).)
+Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) command to create a KEK that has key operations set to `import`. Record the key identifier (`kid`) that's returned from the following command. (You'll use the `kid` value in [Step 3](#step-3-generate-and-prepare-your-key-for-transfer).)
```azurecli-interactive az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --hsm-name ContosoKeyVaultHSM
az keyvault key download --name KEKforBYOK --hsm-name ContosoKeyVaultHSM --file
```
-Transfer the KEKforBYOK.publickey.pem file to your offline computer. You will need this file in the next step.
+Transfer the KEKforBYOK.publickey.pem file to your offline computer. You'll need this file in the next step.
### Step 3: Generate and prepare your key for transfer
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
lab-services How To Configure Canvas For Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-canvas-for-lab-plans.md
Title: Configure Canvas to access lab plans-
-description: Learn how to configure Canvas to access Azure Lab Services lab plans.
+ Title: Configure Canvas to use Azure Lab Services
+description: Learn how to configure Canvas to use Azure Lab Services.
Previously updated : 11/29/2022 Last updated : 12/16/2022
-# Configure Canvas to access Azure Lab Services lab plans
+# Configure Canvas to use Azure Lab Services
-In this article, you learn how to configure [Canvas](https://www.instructure.com/canvas) to access Azure Lab Services lab plans. Add the Azure Lab Services app to let educators and students access to their labs directly without navigating to the Azure Lab Services portal. Learn more about the [benefits of using Azure Lab Services within Canvas](./lab-services-within-canvas-overview.md).
+[Canvas](https://canvaslms.com/) is a cloud-based learning management system (LMS) that provides one place for course content, quizzes, and grades for both educators and students. In this article, you learn how to add the Azure Lab Services app to [Canvas](https://www.instructure.com/canvas). Educators can create labs from within Canvas, and students see their lab VMs alongside the other materials for their course.
-To use Azure Lab Services in Canvas, two tasks must be completed:
+Learn more about the [benefits of using Azure Lab Services within Canvas](./lab-services-within-canvas-overview.md).
-1. Enable the Azure Lab Services app in your school's Canvas instance. The Azure Lab Services app will be an inherited app in Canvas.
-1. Connect the Canvas instance to a lab plan resource in Azure.
+To configure Canvas to use Azure Lab Services, go through the one-time step to [enable the Azure Lab Services app in Canvas](#enable-the-azure-lab-services-app-in-canvas). Then, [add the Azure Lab Services app to your course](#add-azure-lab-services-to-a-course).
-For information about creating and managing labs in Canvas, see [Create and manage labs in Canvas](./how-to-manage-labs-within-canvas.md).
+If you've already configured your course to use Azure Lab Services, learn how you can [Create and manage labs in Canvas](./how-to-manage-labs-within-canvas.md).
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] ## Prerequisites -- Canvas administrator permissions.-- Write access to [lab plan](tutorial-setup-lab-plan.md) to be linked to Canvas.
+- An Azure Lab Services lab plan. Follow these steps to [Create a lab plan in the Azure portal](./tutorial-setup-lab-plan.md), if you don't have one yet.
-## Enable Azure Lab Services app in Canvas
+- Your Canvas account needs [Admin permissions](https://community.canvaslms.com/t5/Canvas-Basics-Guide/What-is-the-Admin-role/ta-p/78) to add the Azure Lab Services app to Canvas.
-To use the Azure Lab Services Canvas app, first enable the corresponding developer key:
+- To link lab plans, your Azure account needs the following permissions. Learn how to [assign Azure Active Directory roles to users](/azure/active-directory/roles/manage-roles-portal).
+ - Reader role on the Azure subscription.
+ - Contributor role on the resource group that contains your lab plan.
+ - Write access to the lab plan.
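If you prefer to grant these Azure roles from the command line instead of the portal, a minimal Azure CLI sketch with placeholder values looks like the following.

```azurecli
# Sketch only: replace the assignee, subscription, and resource group placeholders.
az role assignment create --assignee "<user-principal-name>" \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>"

az role assignment create --assignee "<user-principal-name>" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```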
-1. Select the **Admin** page in Canvas.
-1. Select **Developer Keys** in the menu bar, and then select the **Inherited** view of the developer keys.
-1. Change the **Azure Lab Services** entry to **On**. The Azure Lab Services developer key is **170000000000711**.
+## Enable the Azure Lab Services app in Canvas
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-enable-lab-services-app.png" alt-text="Screenshot that shows how to turn on the inherited Azure Lab Services app in the Canvas Admin settings.":::
+The first step to let users access their labs and lab plans through Canvas is to enable the Azure Lab Services app. To use a third-party application, such as Azure Lab Services, in Canvas, you have to enable its corresponding developer key.
-### Link lab plans to Canvas
+The Canvas developer key for the Azure Lab Services app is an *inherited key*, also referred to as a *global developer key*. Learn more about [developer keys in the Canvas Community Hub](https://community.canvaslms.com/t5/Canvas-Admin-Blog/Administrative-guidelines-for-managing-Inherited-Developer-Keys/ba-p/269029).
-After enabling the Azure Lab Services app in Canvas, you can link lab plans to Canvas. Only linked lab plans will be available for Canvas educators to use when creating labs.
+To enable the developer key for the Azure Lab Services app:
-To link lab plans to Canvas, your account must be a Canvas administrator. The Canvas administrator must have the following permissions on the lab plan.
+1. In Canvas, select the **Admin** page.
-- Reader role on the subscription.-- Contributor role on the resource group that contains your lab plan.
+1. Select **Developer Keys** in the left navigation.
-Perform the following steps to link lab plans to Canvas:
+1. Select the **Inherited** tab of the developer keys.
-1. [Add Azure Lab Services to a course in Canvas](#add-azure-lab-services-app-to-a-course). A Canvas administrator will need to add Azure Lab Services to the course *only* if there are no other courses with Azure Lab Services. If there's already a course with the Azure Lab Services app, navigate to that course in Canvas and skip this step.
-1. [Create a lab plan in Azure](./tutorial-setup-lab-plan.md) if you haven't already.
-1. Open the Azure Lab Services app in the course.
-1. Select the tool icon in the upper right to see the list all the lab plans.
-1. Choose which lab plans to link.
+1. In the list, change the state of the **Azure Lab Services** entry to **On**.
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-select-lab-plans.png" alt-text="Screenshot that shows list of lab plans that can be linked to Canvas.":::
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-enable-lab-services-app.png" alt-text="Screenshot that shows how to turn on the inherited Azure Lab Services app in the Canvas Admin settings." lightbox="./media/how-to-configure-canvas-for-lab-plans/canvas-enable-lab-services-app.png":::
-1. Select **Save**.
+## Add Azure Lab Services app to an account (optional)
+
+You can enable the Azure Lab Services app for a Canvas course in either of two ways:
- In the [Azure portal](https://portal.azure.com), the **LMS settings** page for the lab plan shows that the lab plan is successfully linked.
+- Add the Azure Lab Services app at the Canvas account level.
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/lab-plan-linked-canvas.png" alt-text="Screenshot of the L M S settings page for a lab plan.":::
+- [Add the Azure Lab Services app for a specific course](#add-the-azure-lab-services-app-to-a-course) in Canvas.
-### Add Azure Lab Services app to an account
+When you add the app at the Canvas account level, you don't have to add the app for every individual course. If you have multiple courses that use Azure Lab Services, adding the app at the account level might be quicker. After adding the app for the Canvas account, you only have to [enable the Azure Lab Services app in the course navigation](#enable-azure-lab-services-in-course-navigation).
-Canvas administrators may choose to enable the Azure Lab Services app for an account. Enabling an app at the account level allows educators to enable or disable navigation to the Azure Lab Services app per course. Educators can avoid adding the app for each individual course.
+To add the app at the Canvas account level:
1. In Canvas, select the **Admin** menu.
-1. Select the account that you want to add the Azure Lab Services app to. Alternatively, select **All Accounts** to add the Azure Lab Services app to all accounts for the Canvas LMS instance.
+
+1. Select the account that you want to add the Azure Lab Services app to. Alternatively, select **All Accounts** to add the Azure Lab Services app to all accounts for the Canvas Learning Management System (LMS) instance.
:::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-admin-choose-account.png" alt-text="Screenshot that shows the Admin menu and accounts list in Canvas.":::
-1. Choose **Settings**, then select the **Apps** tab.
+1. Choose **Settings**, and then select the **Apps** tab.
+ 1. Select **View App Configurations** button at the top right of the page.
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-admin-settings.png" alt-text="Screenshot that shows the App tab of the admin settings page in Canvas.":::
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-admin-settings.png" alt-text="Screenshot that shows the App tab of the admin settings page in Canvas." lightbox="./media/how-to-configure-canvas-for-lab-plans/canvas-admin-settings.png":::
-1. Select the blue **+ App** button at the top right of the page.
+1. Select the **+ App** button at the top right of the page.
:::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-add-app.png" alt-text="Screenshot that shows Add app button in the admin settings page.":::
Canvas administrators may choose to enable the Azure Lab Services app for an acc
:::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/enable-lab-services.png" alt-text="Screenshot that shows Add by Client ID dialog in Canvas admin settings page.":::
-1. When the **Add App** dialog asks *Tool "Azure Lab Services" found for client ID 170000000000711. Would you like to install it?* select **Install**.
+1. When the **Add App** dialog asks *Tool "Azure Lab Services" found for client ID 170000000000711. Would you like to install it?*, select **Install**.
-The Azure Lab Services app will now be available for all courses in that account. The app won't show in course navigation by default. Educators must first enable the app in course navigation before it can be used.
+The Azure Lab Services app is now available for all courses in that account.
-### Add Azure Lab Services app to a course
+## Add Azure Lab Services to a course
-If you already [added the Azure Lab Services app at the account level](#add-azure-lab-services-app-to-an-account), the educator must enable the app in the course navigation.
+Next, you associate the Azure Lab Services app with a course in Canvas. You have two options to configure a course in Canvas to use Azure Lab Services:
-To enable the Azure Lab Services app in the course navigation:
+- If you added the Azure Lab Services app at the Canvas account level, [enable the app in the course navigation](#enable-azure-lab-services-in-course-navigation).
-1. In Canvas, go to the course that will use Azure Lab Services.
-1. Choose **Settings**, then select the **Navigation** tab.
-1. Find the **Azure Lab Services** entry, select the three vertical dots, then select **Enable**.
+- Otherwise, [add the Azure Lab Services app to a course](#add-the-azure-lab-services-app-to-a-course).
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-enable-lab-services-app-in-course-navigation.png" alt-text="Screenshot of enabling Lab Services app in course navigation.":::
-
-1. Select **Save**.
+### Add the Azure Lab Services app to a course
-If you didn't add the Azure Lab Services app at the account level, use the following instructions to add the app at the course level:
+You now add the Azure Lab Services app to a specific course in Canvas.
1. In Canvas, go to the course that will use Azure Lab Services.+ 1. Choose **Settings**, and then select the **Apps** tab.+ 1. Select **View App Configurations** button at the top right of the page.
- :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-settings-apps.png" alt-text="Screenshot that shows the App tab of the settings page for a course in Canvas.":::
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-settings-apps.png" alt-text="Screenshot that shows the App tab of the settings page for a course in Canvas." lightbox="./media/how-to-configure-canvas-for-lab-plans/canvas-settings-apps.png":::
-1. Select the blue **+ App** button at the top right of the page.
+1. Select the **+ App** button at the top right of the page.
:::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-add-app.png" alt-text="Screenshot that shows Add app button in Canvas.":::
If you didn't add the Azure Lab Services app at the account level, use the follo
1. When the **Add App** dialog asks *Tool "Azure Lab Services" found for client ID 170000000000711. Would you like to install it?*, select **Install**.
- The Azure Lab Services app will take a few moments to show in the course navigation list.
+ The Azure Lab Services app takes a few moments to show in the course navigation list.
+
+You can skip to [Link lab plans to Canvas](#link-lab-plans-to-canvas) to finalize the configuration of Canvas.
+
+### Enable Azure Lab Services in course navigation
+
+If you previously added the app at the Canvas account level, you don't have to add the app for a specific course. Instead, you enable the app in the Canvas course navigation:
+
+1. In Canvas, go to the course that uses Azure Lab Services.
+
+1. Choose **Settings**, then select the **Navigation** tab.
+
+1. Find the **Azure Lab Services** entry, select the three vertical dots, and then select **Enable**.
+
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-enable-lab-services-app-in-course-navigation.png" alt-text="Screenshot of enabling Lab Services app in course navigation.":::
+
+1. Select **Save**.
+
+## Link lab plans to Canvas
+
+After you enable the Azure Lab Services app in Canvas and associate it with a course, you link specific lab plans to Canvas. You can only use linked lab plans for creating labs in Canvas.
+
+To link lab plans to Canvas, you must be a Canvas administrator. In addition, your Azure account must have the following permissions on the lab plan:
+
+- Reader role on the subscription.
+- Contributor role on the resource group that contains your lab plan.
+
+Perform the following steps to link lab plans to Canvas:
+
+1. In Canvas, go to a course for which you previously added the Azure Lab Services app.
+
+1. Open the Azure Lab Services app in the course.
+
+1. Select the tool icon in the upper right to see the list of all the lab plans.
+
+1. Choose the lab plans you want to link to Canvas from the list.
+
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/canvas-select-lab-plans.png" alt-text="Screenshot that shows the list of lab plans that can be linked to Canvas." lightbox="./media/how-to-configure-canvas-for-lab-plans/canvas-select-lab-plans.png":::
+
+1. Select **Save**.
+
+ In the [Azure portal](https://portal.azure.com), the **LMS settings** page for the lab plan shows that you linked the lab plan successfully to Canvas.
+
+ :::image type="content" source="./media/how-to-configure-canvas-for-lab-plans/lab-plan-linked-canvas.png" alt-text="Screenshot of the L M S settings page for a lab plan." lightbox="./media/how-to-configure-canvas-for-lab-plans/lab-plan-linked-canvas.png":::
## Next steps
+You've successfully configured Canvas to access Azure Lab Services. You can now continue to create and manage labs for your courses in Canvas.
+ See the following articles: - As an admin, [add educators as lab creators to the lab plan](./add-lab-creator.md) in the Azure portal. - As an educator, [create and manage labs in Canvas](./how-to-manage-labs-within-canvas.md).-- As an eductor, [manage user lists in Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas).
+- As an educator, [manage user lists in Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas).
- As a student, [access a lab VM within Canvas](./how-to-access-vm-for-students-within-canvas.md).
lab-services How To Configure Teams For Lab Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-teams-for-lab-plans.md
Title: Configure Teams to access lab plans-
-description: Learn how to configure Microsoft Teams to access Azure Lab Services lab plans.
+ Title: Configure Teams to use Azure Lab Services
+description: Learn how to configure Microsoft Teams to use Azure Lab Services.
Last updated 11/15/2022
-# Configure Microsoft Teams to access Azure Lab Services lab plans
+# Configure Microsoft Teams to use Azure Lab Services
-In this article, you learn how to configure Microsoft Teams to access Azure Lab Services lab plans. Add the Azure Lab Services Teams app to a team channel to let educators and students access to their labs directly without navigating to the Azure Lab Services portal. Learn more about the [benefits of using Azure Lab Services within Teams](./lab-services-within-teams-overview.md).
+In this article, you learn how to configure Microsoft Teams to use Azure Lab Services. Add the Azure Lab Services Teams app to a team channel to let educators and students access their labs directly without navigating to the Azure Lab Services portal. Learn more about the [benefits of using Azure Lab Services within Teams](./lab-services-within-teams-overview.md).
For information about creating and managing labs in Microsoft Teams, see [Create and manage labs in Microsoft Teams](./how-to-manage-labs-within-teams.md).
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Key scenarios that you can accomplish using Azure Standard Load Balancer include
- Load balance TCP and UDP flow on all ports simultaneously using **[HA ports](./load-balancer-ha-ports-overview.md)**. -- Chain Standard Load Balancer and [Gateway Loadbalancer](./tutorial-gateway-portal.md).
+- Chain Standard Load Balancer and [Gateway Load Balancer](./tutorial-gateway-portal.md).
### <a name="securebydefault"></a>Secure by default
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 12/05/2022 Last updated : 01/05/2023
App settings in Azure Logic Apps work similarly to app settings in Azure Functio
| `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. | | `ServiceProviders.Sftp.OperationTimeout` | `00:02:00` <br>(2 min) | Sets the time to wait before timing out on any operation. | | `ServiceProviders.Sftp.ServerAliveInterval` | `00:30:00` <br>(30 min) | Send a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. For more information, see the [ServerAliveInterval setting](https://man.openbsd.org/ssh_config.5#ServerAliveInterval). |
-| `ServiceProviders.Sftp.SftpConnectionPoolSize` | `2` connections | Sets the number of connections that each processor can cache. The total connections that you can cache is *ProcessorCount* multiplied by the setting value. |
+| `ServiceProviders.Sftp.SftpConnectionPoolSize` | `2` connections | Sets the number of connections that each processor can cache. The total number of connections that you can cache is *ProcessorCount* multiplied by the setting value. |
| `ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `10` KB, which is ~1,000 files | Sets the trigger state entity size in kilobytes, which is proportional to the number of files in the monitored folder and is used to detect files. If the number of files exceeds 1,000, increase this value. | | `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. | | `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-|
+| `Microsoft.Azure.Workflows.TemplateLimits.InputParametersLimit` | `50` | Change the default limit on [cross-environment workflow parameters](create-parameters-workflows.md) up to 500 for Standard logic apps created by [exporting Consumption logic apps](export-from-consumption-to-standard-logic-app.md). |
| `Runtime.ContentLink.MaximumContentSizeInBytes` | `104857600` bytes | Sets the maximum size in bytes that an input or output can have in a trigger or action. | | `Runtime.FlowRunActionJob.MaximumActionResultSize` | `209715200` bytes | Sets the maximum size in bytes that the combined inputs and outputs can have in an action. |
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-|
+| `Microsoft.Azure.Workflows.ContentStorage.RequestOptionsThreadCount` | None | Sets the thread count for blob upload and download operations. You can use this setting to force the Azure Logic Apps runtime to use multiple threads when uploading and downloading content from action inputs and outputs. |
| `Runtime.ContentStorage.RequestOptionsDeltaBackoff` | `00:00:02` <br>(2 sec) | Sets the backoff interval between retries sent to blob storage. | | `Runtime.ContentStorage.RequestOptionsMaximumAttempts` | `4` retries | Sets the maximum number of retries sent to table and queue storage. | | `Runtime.ContentStorage.RequestOptionsMaximumExecutionTime` | `00:02:00` <br>(2 min) | Sets the operation timeout value, including retries, for blob requests from the Azure Logic Apps runtime. |
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.DataStorage.RequestOptionsMaximumExecutionTime` | `00:00:45` <br>(45 sec) | Sets the operation timeout value, including retries, for table and queue storage requests from the Azure Logic Apps runtime. | | `Runtime.DataStorage.RequestOptionsServerTimeout` | `00:00:16` <br>(16 sec) | Sets the timeout value for table and queue storage requests from the Azure Logic Apps runtime. |
+<a name="built-in-file-share"></a>
+
+#### File share
+
+| Setting | Default value | Description |
+|||-|
+| `ServiceProviders.AzureFile.MaxFileSizeInBytes` | `150000000` bytes | Sets the maximum file size in bytes for an Azure file share. |
+ <a name="built-in-azure-functions"></a> ### Built-in Azure Functions operations
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
For more information about security, authorization, and encryption for inbound c
**A**: Yes, HTTPS endpoints support more advanced configuration through [Azure API Management](../api-management/api-management-key-concepts.md). This service also offers the capability for you to consistently manage all your APIs, including logic apps, set up custom domain names, use more authentication methods, and more, for example:
-* [Change the request method](../api-management/api-management-advanced-policies.md#SetRequestMethod)
-* [Change the URL segments of the request](../api-management/api-management-transformation-policies.md#RewriteURL)
+* [Change the request method](../api-management/set-method-policy.md)
+* [Change the URL segments of the request](../api-management/rewrite-uri-policy.md)
* Set up your API Management domains in the [Azure portal](https://portal.azure.com/) * Set up policy to check for Basic authentication
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023 ms.suite: integration
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
+ Last updated 9/30/2022
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning distinguishes two types of URIs:
Data type | Description | Examples || `uri_file` | Refers to a specific **file** location | `https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>`<br> `azureml://datastores/<datastore_name>/paths/<folder>/<file>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>`
-`uri_folder`| Refers to a specific **folder** location | `https://<account_name>.blob.core.windows.net/<container_name>/<folder>`<br> `azureml://datastores/<datastore_name>/paths/<folder>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/`
+`uri_folder`| Refers to a specific **folder** location | `azureml://datastores/<datastore_name>/paths/<folder>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/`
URIs are mapped to the filesystem on the compute target, hence using URIs is like using files or folders in the command that consumes/produces them. URIs leverage **identity-based authentication** to connect to storage services with either your Azure Active Directory ID (default) or Managed Identity.
command: |
inputs: sampledata: type: uri_folder
- path: https://<account_name>.blob.core.windows.net/<container_name>/<folder>
+ path: azureml://datastores/<datastore_name>/paths/<folder>
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest compute: azureml:cpu-cluster ```
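The same input can be expressed with the Python SDK v2. The following is a sketch that mirrors the YAML above and assumes the same placeholder datastore, folder, compute, and environment names.

```python
from azure.ai.ml import MLClient, Input, command
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Sketch only: <datastore_name> and <folder> are placeholders, as in the YAML above.
job = command(
    command="ls ${{inputs.sampledata}}",
    inputs={
        "sampledata": Input(
            type=AssetTypes.URI_FOLDER,
            path="azureml://datastores/<datastore_name>/paths/<folder>",
        )
    },
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
)

ml_client = MLClient.from_config(credential=DefaultAzureCredential())
ml_client.jobs.create_or_update(job)
```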
For more information about the MLTable YAML schema, see [CLI (v2) mltable YAML s
- [Create datastores](how-to-datastore.md#create-datastores) - [Create data assets](how-to-create-data-assets.md#create-data-assets) - [Access data in a job](how-to-read-write-data-v2.md)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
description: Learn best practices for mitigating potential harm to peopleΓÇöespe
+ Last updated 11/04/2022
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Azure Machine Learning uses Conda for package installations. By default, package
RUN conda config --set offline false \ && conda config --remove channels defaults || true \ && conda config --add channels https://my.private.conda.feed/conda/feed
+&& conda config --add repodata_fns <repodata_file_on_your_server>.json
``` See [use your own dockerfile](how-to-use-environments.md#use-your-own-dockerfile) to learn how to specify your own base images in Azure Machine Learning. For more details on configuring Conda environments, see [Conda - Creating an environment file manually](https://docs.conda.io/projects/conda/en/4.6.1/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually).
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Title: "Input data for batch endpoints jobs"
+ Title: "Create jobs and input data for batch endpoints"
description: Learn how to access data from different sources in batch endpoints jobs.
-# Input data for batch endpoints jobs
+# Create jobs and input data for batch endpoints
Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can reside in various locations. In this tutorial, we'll cover the different locations from which batch endpoints can read data and how to reference it.
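For example, invoking a batch endpoint against a folder in a registered datastore with the Azure CLI might look like the following sketch, where the endpoint, workspace, datastore, and folder names are placeholders.

```azurecli
# Sketch only: all names in angle brackets are placeholders.
az ml batch-endpoint invoke --name <endpoint-name> \
    --resource-group <resource-group> --workspace-name <workspace-name> \
    --input azureml://datastores/<datastore-name>/paths/<folder> \
    --input-type uri_folder
```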
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
-++ Last updated 11/17/2022 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
-+ Last updated 08/01/2022
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
-+ Last updated 09/22/2022
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
For bounding boxes, important questions include:
* How to label the object if there is no clear boundary of the object? * How to label the object which is not object class of interest but visually similar to an interested object type?
->[!NOTE]
+> [!NOTE]
> Be sure to note that the labelers will be able to select the first 9 labels by using number keys 1-9.
+## Quality control (preview)
++
+> [!NOTE]
+> **Instance Segmentation** projects cannot use consensus labeling.
+ ## Use ML-assisted data labeling The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. Medical images (".dcm") are not included in assisted labeling.
At the beginning of your labeling project, the items are shuffled into a random
Select *Enable ML assisted labeling* and specify a GPU to enable assisted labeling. If you don't have one in your workspace, a GPU cluster will be created for you and added to your workspace. The cluster is created with a minimum of 0 nodes, which means it doesn't cost anything when it's not in use. - ML-assisted labeling consists of two phases: * Clustering
ML-assisted labeling consists of two phases:
The exact number of labeled data items necessary to start assisted labeling isn't a fixed number. This can vary significantly from one labeling project to another. For some projects, it's sometimes possible to see prelabel or cluster tasks after 300 items have been manually labeled. ML Assisted Labeling uses a technique called *Transfer Learning*, which uses a pre-trained model to jump-start the training process. If your dataset's classes are similar to those in the pre-trained model, pre-labels may be available after only a few hundred manually labeled items. If your dataset is significantly different from the data used to pre-train the model, it may take much longer.
+When you're using consensus labeling, the consensus label is used for training.
+ Since the final labels still rely on input from the labeler, this technology is sometimes called *human in the loop* labeling. > [!NOTE]
On the right side is a distribution of the labels for those tasks that are compl
On the **Data** tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see incorrectly labeled data, select it and choose **Reject**, which will remove the labels and put the data back into the unlabeled queue.
+If your project uses consensus labeling, you'll also want to review those images without a consensus. To do so:
+
+1. Select the **Data** tab.
+1. On the left, select **Review labels**.
+1. On the top right, select **All filters**.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-filters.png" alt-text="Screenshot: select filters to review consensus label problems." lightbox="media/how-to-create-labeling-projects/select-filters.png":::
+
+1. Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those images where a consensus was not achieved among the labelers.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot: Select labels in need of review.":::
+
+1. For each image in need of review, select the **Consensus label** dropdown to view the conflicting labels.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/consensus-dropdown.png" alt-text="Screenshot: Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-labeling-projects/consensus-dropdown.png":::
+
+1. While you can select an individual to see just their label(s), you can only update or reject the labels from the top choice, **Consensus label (preview)**.
+ ### Details tab View and change details of your project. In this tab you can:
View and change details of your project. In this tab you can:
## Export the labels
-Use the **Export** button on the **Project details** page of your labeling project. You can export the label data for Machine Learning experimentation at any time.
+Use the **Export** button on the **Project details** page of your labeling project. You can export the label data for Machine Learning experimentation at any time.
* Image labels can be exported as: * [COCO format](http://cocodataset.org/#format-data).The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/coco*.
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
To directly upload your data:
>[!NOTE] > Be sure to note that the labelers will be able to select the first 9 labels by using number keys 1-9.
+## Quality control (preview)
++ ## Use ML-assisted data labeling The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. ML-assisted labeling is available for both file (.txt) and tabular (.csv) text data inputs.
At the beginning of your labeling project, the items are shuffled into a random
For training the text DNN model used by ML-assist, the input text per training example will be limited to approximately the first 128 words in the document. For tabular input, all text columns are first concatenated before applying this limit. This is a practical limit imposed to allow for the model training to complete in a timely manner. The actual text in a document (for file input) or set of text columns (for tabular input) can exceed 128 words. The limit only pertains to what is internally leveraged by the model during the training process.
-The exact number of labeled items necessary to start assisted labeling isn't a fixed number. This can vary significantly from one labeling project to another, depending on many factors, including the number of labels classes and label distribution.
+The exact number of labeled items necessary to start assisted labeling isn't a fixed number. This can vary significantly from one labeling project to another, depending on many factors, including the number of label classes and the label distribution.
+
+When you're using consensus labeling, the consensus label is used for training.
Since the final labels still rely on input from the labeler, this technology is sometimes called *human in the loop* labeling.
On the right side is a distribution of the labels for those tasks that are compl
On the **Data** tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see incorrectly labeled data, select it and choose **Reject**, which will remove the labels and put the data back into the unlabeled queue.
+If your project uses consensus labeling, you'll also want to review those items without a consensus. To do so:
+
+1. Select the **Data** tab.
+1. On the left, select **Review labels**.
+1. On the top right, select **All filters**.
+
+ :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png" alt-text="Screenshot: select filters to review consensus label problems." lightbox="media/how-to-create-text-labeling-projects/text-labeling-select-filter.png":::
+
+1. Under **Labeled datapoints**, select **Consensus labels in need of review**. This shows only those items where a consensus was not achieved among the labelers.
+
+ :::image type="content" source="media/how-to-create-labeling-projects/select-need-review.png" alt-text="Screenshot: Select labels in need of review.":::
+
+1. For each item in need of review, select the **Consensus label** dropdown to view the conflicting labels.
+
+ :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png" alt-text="Screenshot: Select Consensus label dropdown to review conflicting labels." lightbox="media/how-to-create-text-labeling-projects/text-labeling-consensus-dropdown.png":::
+
+1. While you can select an individual to see just their label(s), you can only update or reject the labels from the top choice, **Consensus label (preview)**.
+ ### Details tab View and change details of your project. In this tab you can:
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
-++ Last updated 10/21/2021
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
We are going to exploit this functionality by deploying multiple versions of the
# [Python (MLflow SDK)](#tab/mlflow)
- This functionality is not available in the MLflow SDK. Go to [Azure ML studio](https://ml.azure.com), navigate to the endpoint and retrieve the secret key from there. Once you have it, set the value here:
-
- ```python
- endpoint_secret_key = "<ACCESS_KEY>"
- ```
+ This functionality is not available in the MLflow SDK. Go to [Azure ML studio](https://ml.azure.com), navigate to the endpoint and retrieve the secret key from there.
### Create a blue deployment
So far, the endpoint is empty. There are no deployments on it. Let's create the
# [Python (MLflow SDK)](#tab/mlflow) ```python
- deployment_client.predict(endpoint=endpoint_name, df=samples)
+ deployment_client.predict(
+ endpoint=endpoint_name,
+ df=samples
+ )
``` ### Create a green deployment under the endpoint
Let's imagine that there is a new version of the model created by the developmen
# [Python (MLflow SDK)](#tab/mlflow) ```python
- deployment_client.predict(endpoint=endpoint_name, deployment_name=green_deployment_name, df=samples)
+ deployment_client.predict(
+ endpoint=endpoint_name,
+ deployment_name=green_deployment_name,
+ df=samples
+ )
```
deployment_client.delete_endpoint(endpoint_name)
## Next steps - [Deploy MLflow models to Batch Endpoints](how-to-mlflow-batch.md)-- [Using MLflow models for no-code deployment](how-to-log-mlflow-models.md)
+- [Using MLflow models for no-code deployment](how-to-log-mlflow-models.md)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
# Deploy and score a machine learning model by using an online endpoint - Learn how to use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
env = Environment(
) ```
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
Every ONNX model has a predefined set of input and output formats.
# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-task-fridge-items).
### Input format
The output is an array of logits for all the classes/labels.
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multilabel-task-fridge-items).
### Input format
The output is an array of logits for all the classes/labels.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
### Input format
The following table describes boxes, labels and scores returned for each sample
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
### Input format
Each cell in the list indicates box detections of a sample with shape `(n_boxes,
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-instance-segmentation-task-fridge-items).
>[!IMPORTANT] > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
Perform the following preprocessing steps for the ONNX model inference:
5. Convert to float type. 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
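A minimal sketch of that preprocessing with PIL and NumPy, assuming a square resize followed by a center crop, could look like the following.

```python
import numpy as np
from PIL import Image

def preprocess(image_path, resize_size, crop_size):
    # Resize, center-crop, scale to [0, 1], normalize with the ImageNet statistics
    # above, and reorder to NCHW as the ONNX classification models expect.
    image = Image.open(image_path).convert("RGB")
    image = image.resize((resize_size, resize_size))
    left = (resize_size - crop_size) // 2
    image = image.crop((left, left, left + crop_size, left + crop_size))
    pixels = np.asarray(image).astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    pixels = (pixels - mean) / std
    return np.transpose(pixels, (2, 0, 1))[np.newaxis, ...]
```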
Get the input shape needed for the ONNX model.
Perform the following preprocessing steps for the ONNX model inference. These st
5. Convert to float type. 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
Get the input shape needed for the ONNX model.
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
batch, channel, height_onnx, width_onnx ```
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
```python import glob
Perform the following preprocessing steps for the ONNX model inference:
4. Convert to float type. 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters.md) for Mask R-CNN.
```python import glob
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
It might take a few minutes to start the job and the training applications speci
- To connect via SSH to the container where the job is running, run the command `az ml job connect-ssh --name <job-name> --node-index <compute node index> --private-key-file-path <path to private key>`. To set up the Azure Machine Learning CLIv2, follow this [guide](./how-to-configure-cli.md).
-You can find the reference documentation for the SDKv2 [here](/sdk/azure/ml).
+You can find the reference documentation for the SDKv2 [here](/azure/machine-learning/).
You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with by passing in the node index.
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
client = mlflow.tracking.MlflowClient()
The following sample prints the names of all registered models: ```python
-for model in client.list_registered_models():
+for model in client.search_registered_models():
print(f"{model.name}") ```
+> [!NOTE]
+> __MLflow 2.0 advisory:__ In older versions of MLflow (<2.0), use the method `MlflowClient.list_registered_models()` instead.
+ ### Getting specific versions of the model The command above will retrieve the model object which contains all the model versions. However, if you want to get the last registered model version of a given model, you can use `get_registered_model`:
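A minimal sketch of that call, assuming `my-model` is a placeholder for an existing registered model name:

```python
import mlflow

client = mlflow.tracking.MlflowClient()

registered_model = client.get_registered_model("my-model")  # placeholder model name
for version in registered_model.latest_versions:
    print(f"{version.name} version {version.version} ({version.current_stage})")
```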
model = mlflow.pyfunc.load_model(f"models:/{model_name}/Staging")
## Editing and deleting models
-Editing registered models is supported in both Mlflow and Azure ML, however, there are some differences between them that are important to notice:
+Editing registered models is supported in both MLflow and Azure ML. However, there are some important differences to note:
> [!WARNING] > Renaming models is not supported in Azure Machine Learning as model objects are immutable.
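As a sketch of the kind of edits that are supported, the following updates a version's description and sets a tag with the MLflow client; the model name, version number, and tag are placeholders:

```python
import mlflow

client = mlflow.tracking.MlflowClient()

# Edit the description of a specific model version (placeholder name and version).
client.update_model_version(
    name="my-model", version="1", description="Model trained with the January dataset."
)

# Add or update a tag on that same version.
client.set_model_version_tag(name="my-model", version="1", key="stage", value="candidate")
```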
The MLflow client exposes several methods to retrieve and manage models. The fol
| Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** | | Registering models not in MLflow format | | | **&check;** | **&check;** | | Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** |
-| Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | | |
+| Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> |
| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** | | Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** | | Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** |
The MLflow client exposes several methods to retrieve and manage models. The fol
> - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>` (see the example after these notes). > - <sup>3</sup> Registered models are immutable objects in Azure ML. > - <sup>4</sup> Use the search box in Azure ML Studio. Partial match supported.
+> - <sup>5</sup> Use [registries](how-to-manage-registries.md).
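As a sketch of footnote 2, assuming the MLflow tracking URI already points to the workspace and that `<job-id>` and the artifact path are replaced with real values, a model stored in a job's outputs can be registered directly from that URI:

```python
import mlflow

# Placeholders: the job name and the artifact path of the MLflow model inside the job's outputs.
model_uri = "azureml://jobs/<job-id>/outputs/artifacts/model"
registered = mlflow.register_model(model_uri, "my-registered-model")
print(registered.name, registered.version)
```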
## Next steps - [Logging MLflow models](how-to-log-mlflow-models.md) - [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md)-- [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md)
+- [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md)
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
description: Learn how to register and work with different model types in Azure
+ Last updated 04/15/2022
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
Title: Manage and optimize costs description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning-++
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
+ Last updated 11/28/2022
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-tensorboard.md
+ Last updated 10/21/2021
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
+ Last updated 11/09/2022
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-select-algorithms.md
-+ Last updated 10/21/2021 # How to select algorithms for Azure Machine Learning
machine-learning How To Set Up Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-vs-code-remote.md
-++ Last updated 10/21/2021 # As a data scientist, I want to connect to an Azure Machine Learning compute instance in Visual Studio Code to access my resources and run my code.
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
Title: Set up Visual Studio Code extension (preview)
description: Learn how to set up the Azure Machine Learning Visual Studio Code extension. -++
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
# Query & compare experiments and runs with MLflow
-Experiments and runs in Azure Machine Learning can be queried using MLflow. This removes the need of any Azure Machine Learning specific SDKs to manage anything that happens inside of a training job, allowing dependencies removal and creating a more seamless transition between local runs and cloud.
+Tracking information for experiments and runs in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, which creates a more seamless transition between local runs and the cloud by removing cloud-specific dependencies.
> [!NOTE] > The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just to logging but also to querying logged metrics. Instead, we recommend using MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure ML.
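If you're working outside Azure Machine Learning compute, the following is a minimal sketch of pointing MLflow at the workspace with the SDK v2 before running the queries below; the subscription, resource group, and workspace names are placeholders, and the `azureml-mlflow` plugin is assumed to be installed:

```python
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder identifiers for the workspace you want to query.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Point MLflow at the workspace tracking URI (requires the azureml-mlflow package).
mlflow.set_tracking_uri(ml_client.workspaces.get("<workspace-name>").mlflow_tracking_uri)
```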
Use MLflow to query and manage all the experiments in Azure Machine Learning. Th
You can get all the active experiments in the workspace using MLFlow:
- ```python
- experiments = mlflow.list_experiments()
- for exp in experiments:
- print(exp.name)
- ```
+```python
+experiments = mlflow.search_experiments()
+for exp in experiments:
+ print(exp.name)
+```
+
+> [!NOTE]
+> __MLflow 2.0 advisory:__ In legacy versions of MLflow (<2.0), use the method `list_experiments` instead.
If you want to retrieve archived experiments too, then include the option `ViewType.ALL` in the `view_type` argument. The following sample shows how:
- ```python
- from mlflow.entities import ViewType
+```python
+from mlflow.entities import ViewType
- experiments = mlflow.list_experiments(view_type=ViewType.ALL)
- for exp in experiments:
- print(exp.name)
- ```
+experiments = mlflow.search_experiments(view_type=ViewType.ALL)
+for exp in experiments:
+ print(exp.name)
+```
## Getting a specific experiment Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
- ```python
- exp = mlflow.get_experiment_by_name(experiment_name)
- print(exp)
- ```
+```python
+exp = mlflow.get_experiment_by_name(experiment_name)
+print(exp)
+```
## Getting runs inside an experiment
MLflow allows searching runs inside of any experiment, including multiple experi
By experiment name:
- ```python
- mlflow.search_runs(experiment_names=[ "my_experiment" ])
- ```
+```python
+mlflow.search_runs(experiment_names=[ "my_experiment" ])
+```
+ By experiment ID:
- ```python
- mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
- ```
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
+```
> [!TIP] > Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if required. This may be useful in case you want to compare runs of the same model when it is being logged in different experiments (by different people, different project iterations, etc). You can also use `search_all_experiments=True` if you want to search across all the experiments in the workspace.
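For example, a sketch of searching across every experiment in the workspace; the metric name and threshold in the filter are illustrative:

```python
import mlflow

all_runs = mlflow.search_runs(
    filter_string="metrics.accuracy > 0.8",  # illustrative filter
    search_all_experiments=True,
)
```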
Another important point to notice is that when returning runs, all metrics are pa
By default, experiments are ordered descending by `start_time`, which is the time the experiment was queued in Azure ML. However, you can change this default by using the parameter `order_by`.
- ```python
- mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], order_by=["start_time DESC"])
- ```
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], order_by=["start_time DESC"])
+```
Use the argument `max_results` from `search_runs` to limit the number of runs returned. For instance, the following example returns the last run of the experiment:
- ```python
- mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], max_results=1, order_by=["start_time DESC"])
- ```
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], max_results=1, order_by=["start_time DESC"])
+```
> [!WARNING] > Using expressions containing `metrics.*` in the parameter `order_by` is not supported at the moment. Please use the `sort_values` method from Pandas as shown in the next example. You can also order by metrics to know which run generated the best results:
- ```python
- mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ]).sort_values("metrics.accuracy", ascending=False)
- ```
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ]).sort_values("metrics.accuracy", ascending=False)
+```
### Filtering runs You can also look for a run with a specific combination of hyperparameters using the parameter `filter_string`. Use `params` to access a run's parameters and `metrics` to access metrics logged in the run. MLflow supports expressions joined by the AND keyword (the syntax does not support OR):
- ```python
- mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
- filter_string="params.num_boost_round='100'")
- ```
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="params.num_boost_round='100'")
+```
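For instance, a sketch of combining a parameter and a metric with the AND keyword; the metric name and threshold are illustrative:

```python
import mlflow

mlflow.search_runs(
    experiment_ids=[ "1234-5678-90AB-CDEFG" ],
    filter_string="params.num_boost_round='100' and metrics.accuracy > 0.8",
)
```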
### Filter runs by status
You can also filter runs by status. It's useful to find runs that ar
> [!WARNING] > Expressions containing `attributes.status` in the parameter `filter_string` aren't supported at the moment. Please use Pandas filtering expressions as shown in the next example.
-The following example shows all the runs that have been completed:
+The following example shows all the completed runs:
- ```python
- runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
- runs[runs.status == "FINISHED"]
- ```
+```python
+runs = mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ])
+runs[runs.status == "FINISHED"]
+```
## Getting metrics, parameters, artifacts and models
-By default, MLflow returns runs as a Pandas `Dataframe` containing a limited amount of information. You can get Python objects if needed, which may be useful to get details about them. Use the `output_format` parameter to control how output is returned:
+The method `search_runs` returns a Pandas `DataFrame` containing a limited amount of information by default. If you need more detail, you can return the runs as Python objects instead. Use the `output_format` parameter to control how the output is returned:
+
+```python
+runs = mlflow.search_runs(
+ experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="params.num_boost_round='100'",
+ output_format="list",
+)
+```
- ```python
- runs = mlflow.search_runs(
- experiment_ids=[ "1234-5678-90AB-CDEFG" ],
- filter_string="params.num_boost_round='100'",
- output_format="list",
- )
- ```
Details can then be accessed from the `info` member. The following sample shows how to get the `run_id`:
- ```python
- last_run = runs[-1]
- print("Last run ID:", last_run.info.run_id)
- ```
+```python
+last_run = runs[-1]
+print("Last run ID:", last_run.info.run_id)
+```
### Getting params and metrics from a run When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
- ```python
- last_run.data.params
- ```
+```python
+last_run.data.params
+```
In the same way, you can query metrics:
- ```python
- last_run.data.metrics
- ```
+```python
+last_run.data.metrics
+```
+ For metrics that contain multiple values (for instance, a loss curve or a PR curve), only the last logged value of the metric is returned. If you want to retrieve all the values of a given metric, use the `get_metric_history` method. This method requires you to use the `MlflowClient`:
- ```python
- client = mlflow.tracking.MlflowClient()
- client.get_metric_history("1234-5678-90AB-CDEFG", "log_loss")
- ```
+```python
+client = mlflow.tracking.MlflowClient()
+client.get_metric_history("1234-5678-90AB-CDEFG", "log_loss")
+```
### Getting artifacts from a run Any artifact logged by a run can be queried by MLflow. Artifacts can't be accessed using the run object itself; the MLflow client should be used instead:
- ```python
- client = mlflow.tracking.MlflowClient()
- client.list_artifacts("1234-5678-90AB-CDEFG")
- ```
+```python
+client = mlflow.tracking.MlflowClient()
+client.list_artifacts("1234-5678-90AB-CDEFG")
+```
The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure ML storage). To download any of them, use the method `download_artifacts`:
- ```python
- file_path = client.download_artifacts("1234-5678-90AB-CDEFG", path="feature_importance_weight.png")
- ```
+```python
+file_path = mlflow.artifacts.download_artifacts(
+ run_id="1234-5678-90AB-CDEFG", artifact_path="feature_importance_weight.png"
+)
+```
+
+> [!NOTE]
+> __MLflow 2.0 advisory:__ In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
### Getting models from a run Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model, since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifacts` method:
- ```python
- artifact_path="classifier"
- model_local_path = client.download_artifacts("1234-5678-90AB-CDEFG", path=artifact_path)
- ```
+```python
+artifact_path="classifier"
+model_local_path = mlflow.artifacts.download_artifacts(
+ run_id="1234-5678-90AB-CDEFG", artifact_path=artifact_path
+)
+```
You can then load the model back from the downloaded artifacts using the typical function `load_model`:
- ```python
- model = mlflow.xgboost.load_model(model_local_path)
- ```
+```python
+model = mlflow.xgboost.load_model(model_local_path)
+```
+ > [!NOTE]
-> In the example above, we are assuming the model was created using `xgboost`. Change it to the flavor applies to your case.
+> The previous example assumes the model was created using `xgboost`. Change it to the flavor that applies to your case.
-MLflow also allows you to both operations at once and download and load the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. This can be done using the `load_model` method which uses an URI format to indicate from where the model has to be retrieved. In the case of loading a model from a run, the URI structure is as follows:
+MLflow also allows you to perform both operations at once, downloading and loading the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. The method `load_model` uses a URI format to indicate where the model has to be retrieved from. In the case of loading a model from a run, the URI structure is as follows:
- ```python
- model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
- ```
+```python
+model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}")
+```
> [!TIP] > You can also load models from the registry using MLflow. View [loading MLflow models with MLflow](how-to-manage-models-mlflow.md#loading-models-from-registry) for details. ## Getting child (nested) runs
-MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines requiring being tracked independently from the main training process. This is the typical case of hyper-parameter tuning for instance. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
```python
hyperopt_run = mlflow.last_active_run()
child_runs = mlflow.search_runs(
    filter_string=f"tags.mlflow.parentRunId='{hyperopt_run.info.run_id}'"
)
```
-## Compare jobs and models in AzureML Studio (preview)
+## Compare jobs and models in AzureML studio (preview)
To compare and evaluate the quality of your jobs and models in AzureML Studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you selected.
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
-# Train ML models with MLflow Projects and Azure Machine Learning (Preview)
+# Train with MLflow Projects in Azure Machine Learning (Preview)
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+In this article, learn how to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) that use Azure Machine Learning workspaces for tracking. You can submit jobs and only track them with Azure Machine Learning, or migrate your runs to the cloud to run completely on [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
[MLflow Projects](https://mlflow.org/docs/latest/projects.html) allow you to organize and describe your code so that other data scientists (or automated tools) can run it. MLflow Projects with Azure Machine Learning enable you to track and manage your training runs in your workspace.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
-
-[Learn more about the MLflow and Azure Machine Learning integration.](how-to-use-mlflow.md).
-
-> [!TIP]
-> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+[Learn more about the MLflow and Azure Machine Learning integration.](concept-mlflow.md)
## Prerequisites [!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)]
-### Connect to your workspace
-
-First, let's connect MLflow to your Azure Machine Learning workspace.
-
-# [Azure Machine Learning compute](#tab/aml)
-
-Tracking is already configured for you. Your default credentials will also be used when working with MLflow.
-
-# [Remote compute](#tab/remote)
-
-**Configure tracking URI**
--
-**Configure authentication**
-
-Once the tracking is configured, you'll also need to configure how the authentication needs to happen to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) to additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
----
-## Train MLflow Projects on local compute
-
-This example shows how to submit MLflow projects locally with Azure Machine Learning.
-
-Create the backend configuration object to store necessary information for the integration such as, the compute target and which type of managed environment to use.
+* Using Azure Machine Learning as the backend for MLflow projects requires the package `azureml-core`:
-```python
-backend_config = {"USE_CONDA": False}
-```
+ ```bash
+ pip install azureml-core
+ ```
-Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.
-
-``` shell
-name: mlflow-example
-channels:
- - defaults
- - anaconda
- - conda-forge
-dependencies:
- - python=3.6
- - scikit-learn=0.19.1
- - pip
- - pip:
- - mlflow
- - azureml-mlflow
-```
-
-Submit the local run and ensure you set the parameter `backend = "azureml" `. With this setting, you can submit runs locally and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
-
-View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
-
-```python
-local_env_run = mlflow.projects.run(uri=".",
- parameters={"alpha":0.3},
- backend = "azureml",
- use_conda=False,
- backend_config = backend_config,
- )
-
-```
-
-## Train MLflow projects with remote compute
-
-This example shows how to submit MLflow projects on a remote compute with Azure Machine Learning tracking.
-
-Create the backend configuration object to store necessary information for the integration such as, the compute target and which type of managed environment to use.
-
-The integration accepts "COMPUTE" and "USE_CONDA" as parameters where "COMPUTE" is set to the name of your remote compute cluster and "USE_CONDA" which creates a new environment for the project from the environment configuration file. If "COMPUTE" is present in the object, the project will be automatically submitted to the remote compute and ignore "USE_CONDA". MLflow accepts a dictionary object or a JSON file.
-
-```python
-# dictionary
-backend_config = {"COMPUTE": "cpu-cluster", "USE_CONDA": False}
-```
-
-Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.
-
-``` shell
-name: mlflow-example
-channels:
- - defaults
- - anaconda
- - conda-forge
-dependencies:
- - python=3.6
- - scikit-learn=0.19.1
- - pip
- - pip:
- - mlflow
- - azureml-mlflow
-```
-
-Submit the mlflow project run and ensure you set the parameter `backend = "azureml" `. With this setting, you can submit your run to your remote compute and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
-
-View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
-
-```python
-remote_mlflow_run = mlflow.projects.run(uri=".",
- parameters={"alpha":0.3},
- backend = "azureml",
- backend_config = backend_config,
- )
+### Connect to your workspace
-```
+If you're working outside Azure Machine Learning, you need to configure MLflow to point to your Azure Machine Learning workspace's tracking URI. You can find the instructions at [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
++
+## Track MLflow Projects in Azure Machine Learning workspaces
+
+This example shows how to submit MLflow projects and track them in Azure Machine Learning.
+
+1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.
+
+ __conda.yaml__
+
+ ```yaml
+ name: mlflow-example
+ channels:
+ - defaults
+ dependencies:
+ - numpy>=1.14.3
+ - pandas>=1.0.0
+ - scikit-learn
+ - pip:
+ - mlflow
+ - azureml-mlflow
+ ```
+
+1. Submit the local run and ensure you set the parameter `backend = "azureml"`, which adds support for automatic tracking, model capture, log files, snapshots, and printed errors in your workspace. In this example, we assume the MLflow project you're trying to run is in the folder you're currently in, `uri="."`.
+
+ # [MLflow CLI](#tab/cli)
+
+ ```bash
+ mlflow run . --experiment-name <experiment-name> --backend azureml --env-manager=local -P alpha=0.3
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ local_env_run = mlflow.projects.run(
+ uri=".",
+ parameters={"alpha":0.3},
+ backend = "azureml",
+ env_manager="local",
+ )
+ ```
+
+
+
+ View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
+
+## Train MLflow projects in Azure Machine Learning jobs
+
+This example shows how to submit MLflow projects as a job running on Azure Machine Learning compute.
+
+1. Create the backend configuration object; in this case, we'll indicate `COMPUTE`. This parameter references the name of the remote compute cluster you want to use for running your project. If `COMPUTE` is present, the project is automatically submitted as an Azure Machine Learning job to the indicated compute.
+
+ # [MLflow CLI](#tab/cli)
+
+ __backend_config.json__
+
+ ```json
+ {
+ "COMPUTE": "cpu-cluster"
+ }
+
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ backend_config = {"COMPUTE": "cpu-cluster"}
+ ```
+
+1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.
+
+ __conda.yaml__
+
+ ```yaml
+ name: mlflow-example
+ channels:
+ - defaults
+ dependencies:
+ - numpy>=1.14.3
+ - pandas>=1.0.0
+ - scikit-learn
+ - pip:
+ - mlflow
+ - azureml-mlflow
+ ```
+
+1. Submit the run and ensure you set the parameter `backend = "azureml"`, which adds support for automatic tracking, model capture, log files, snapshots, and printed errors in your workspace. In this example, we assume the MLflow project you're trying to run is in the folder you're currently in, `uri="."`.
+
+ # [MLflow CLI](#tab/cli)
+
+ ```bash
+ mlflow run . --backend azureml --backend-config backend_config.json -P alpha=0.3
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ local_env_run = mlflow.projects.run(
+ uri=".",
+ parameters={"alpha":0.3},
+ backend = "azureml",
+ backend_config = backend_config,
+ )
+ ```
+
+
+
+ > [!NOTE]
+ > Since Azure Machine Learning jobs always run in the context of environments, the parameter `env_manager` is ignored.
+
+ View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
## Clean up resources
The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
## Next steps
-* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
-* Monitor your production models for [data drift](v1/how-to-enable-data-collection.md).
* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
-* [Manage your models](concept-model-management-and-deployment.md).
+* [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+* [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
+* [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
+
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Last updated 11/04/2022-+ # Create a training job with the job creation UI (preview)
machine-learning How To Troubleshoot Validation For Schema Failed Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-validation-for-schema-failed-error.md
+
+ Title: Troubleshoot Validation For Schema Failed Error
+
+description: Troubleshooting steps when you get the "Validation for schema failed" error message in AzureML v2 CLI
+++++++ Last updated : 01/06/2023++
+# Troubleshoot Validation For Schema Failed Error
+
+This article helps you fix all categories of Validation for Schema Failed errors that a user may encounter after submitting a **create** or **update** command for a YAML file while using AzureML v2 CLI. The list of commands that can generate this error includes:
+
+Create
+* `az ml job create`
+* `az ml data create`
+* `az ml datastore create`
+* `az ml compute create`
+* `az ml batch-endpoint create`
+* `az ml batch-deployment create`
+* `az ml online-endpoint create`
+* `az ml online-deployment create`
+* `az ml component create`
+* `az ml environment create`
+* `az ml model create`
+* `az ml connection create`
+* `az ml schedule create`
+* `az ml registry create`
+* `az ml workspace create`
+
+Update
+* `az ml online-endpoint update`
+* `az ml online-deployment update`
+* `az ml batch-deployment update`
+* `az ml datastore update`
+* `az ml compute update`
+* `az ml data update`
+
+## Symptoms
+
+When the user submits a YAML file via a **create** or **update** command using AzureML v2 CLI to complete a particular task (for example, create a data asset, submit a training job, or update an online deployment), they can encounter a "Validation for Schema Failed" error.
+
+## Cause
+
+"Validation for Schema Failed" errors occur because the submitted YAML file didn't match the prescribed schema for the asset type (workspace, data, datastore, component, compute, environment, model, job, batch-endpoint, batch-deployment, online-endpoint, online-deployment, schedule, connection, or registry) that the user was trying to create or update. This might happen due to several causes.
+
+*The general procedure for fixing this error is to first go to the location where the YAML file is stored, open it and make the necessary edits, save the YAML file, then go back to the terminal and resubmit the command. The sections below will detail the changes necessary based on the cause.*
+
+## Error - Invalid Value
+
+The submitted YAML file contains one or more parameters whose value is of the incorrect type. For example, for `ml data create` (that is, the data schema), the "path" parameter expects a URL value. Providing a number or a string that's not a file path would be considered invalid. The parameter might also have a range of acceptable values, and the value provided isn't in that range. For example, for `ml data create`, the "type" parameter only accepts `uri_file`, `uri_folder`, or `mltable`. Any other value would be considered invalid.
+
+### Solution - Invalid Value
+
+If the type of value provided for a parameter is invalid, check the prescribed schema and change the value to the correct type (note: this refers to the data type of the value provided for the parameter, not to be confused with the "type" parameter in many schemas). If the value itself is invalid, select a value from the expected range of values (you'll find that in the error message). Save the YAML file and resubmit the command. [Here's a list of schemas](reference-yaml-overview.md) for all different asset types in AzureML v2.
+
+## Error - Unknown Field
+
+The submitted YAML file contains one or more parameters that aren't part of the prescribed schema for that asset type. For example, for `ml job create` (that is, the `commandjob` schema), if a parameter called "name" is provided, this error will be encountered because the `commandjob` schema has no such parameter.
+
+### Solution - Unknown Field
+
+In the submitted YAML file, delete the field that is invalid. Save the YAML file and resubmit the command.
+
+## Error - File or Folder Not Found
+
+The submitted YAML file contains a "path" parameter. The file or folder path provided as a value for that parameter is either incorrect (spelled wrong, missing extension, etc.), or the file or folder doesn't exist.
+
+### Solution - File or Folder Not Found
+
+In the submitted YAML file, go to the "path" parameter and double-check whether the file or folder path provided is written correctly (that is, the path is complete, with no spelling mistakes, missing file extension, or special characters). Save the YAML file and resubmit the command. If the error still persists, the file or folder doesn't exist in the location provided.
+
+## Error - Missing Field
+
+The submitted YAML file is missing a required parameter. For example, for `ml job create` (that is, the `commandjob` schema), if the "compute" parameter isn't provided, this error will be encountered because compute is required to run a command job.
+
+### Solution - Missing Field
+
+Check the prescribed schema for the asset type you're trying to create or update: check which parameters are required and what their correct value types are. [Here's a list of schemas](reference-yaml-overview.md) for different asset types in AzureML v2. Ensure that the submitted YAML file has all the required parameters. Also ensure that the values provided for those parameters are of the correct type, or in the accepted range of values. Save the YAML file and resubmit the command.
+
+## Error - Cannot Parse
+
+The submitted YAML file can't be read because the syntax is wrong, the formatting is wrong, or there are unwanted characters somewhere in the file. For example, a special character (like a colon or a semicolon) that has been entered by mistake somewhere in the YAML file.
+
+### Solution - Cannot Parse
+
+Double check the contents of the submitted YAML file for correct syntax, unwanted characters, and wrong formatting. Fix all of these, save the YAML file and resubmit the command.
+
+## Error - Resource Not Found
+
+One or more of the resources (for example, file / folder) in the submitted YAML file doesn't exist, or you don't have access to it.
+
+### Solution - Resource Not Found
+
+Double check whether the name of the resource has been specified correctly, and that you have access to it. Make changes if needed, save the YAML file and resubmit the command.
+
+## Error - Cannot Serialize
+
+One or more fields in the YAML can't be serialized (converted) into objects.
+
+### Solution - Cannot Serialize
+
+Double-check that your YAML file isn't corrupted and that the file's contents are properly formatted.
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning. +
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
Title: Make predictions with AutoML ONNX Model in .NET description: Learn how to make predictions using an AutoML ONNX model in .NET with ML.NET -+ + Last updated 10/21/2021
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
Last updated 09/28/2022-+
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-happened-to-workbench.md
+ Last updated 11/04/2022 # What happened to Azure Machine Learning Workbench?
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023 -++
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
In this quickstart guide, you'll learn how to submit a Spark job using Azure Mac
## Prerequisites
-# [Studio UI](#tab/studio-ui)
-- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.-- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).-- An Azure Data Lake Storage (ADLS) Gen 2 storage account. See [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](../storage/blobs/create-data-lake-storage-account.md).-- To enable this feature:
- 1. Navigate to Azure Machine Learning studio UI.
- 2. Select **Manage preview features** (megaphone icon) among the icons on the top right side of the screen.
- 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
- :::image type="content" source="media/quickstart-spark-jobs/how-to-enable-managed-spark-preview.png" lightbox="media/quickstart-spark-jobs/how-to-enable-managed-spark-preview.png" alt-text="Expandable screenshot showing option for enabling Managed Spark preview.":::
- # [CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] - An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
In this quickstart guide, you'll learn how to submit a Spark job using Azure Mac
> - [Visual Studio Code connected to an Azure Machine Learning compute instance](./how-to-set-up-vs-code-remote.md?tabs=studio). > - your local computer that has [the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/installv2) installed.
+# [Studio UI](#tab/studio-ui)
+- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
+- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
+- An Azure Data Lake Storage (ADLS) Gen 2 storage account. See [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](../storage/blobs/create-data-lake-storage-account.md).
+- To enable this feature:
+ 1. Navigate to Azure Machine Learning studio UI.
+ 2. Select **Manage preview features** (megaphone icon) among the icons on the top right side of the screen.
+ 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
+ :::image type="content" source="media/quickstart-spark-jobs/how-to-enable-managed-spark-preview.png" lightbox="media/quickstart-spark-jobs/how-to-enable-managed-spark-preview.png" alt-text="Expandable screenshot showing option for enabling Managed Spark preview.":::
+ ## Add role assignments in Azure storage accounts
The above script takes two arguments `--titanic_data` and `--wrangled_data`, whi
## Submit a standalone Spark job
-# [Studio UI](#tab/studio-ui)
-First, upload the parameterized Python code `titanic.py` to the Azure Blob storage container for workspace default datastore `workspaceblobstore`. To submit a standalone Spark job using the Azure Machine Learning studio UI:
--
-1. In the left pane, select **+ New**.
-2. Select **Spark job (preview)**.
-3. On the **Compute** screen:
-
- :::image type="content" source="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" lightbox="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" alt-text="Expandable screenshot showing compute selection screen for a new Spark job in Azure Machine Learning studio UI.":::
-
- 1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute.
- 2. Select **Virtual machine size**. The following instance types are currently supported:
- - `Standard_E4s_v3`
- - `Standard_E8s_v3`
- - `Standard_E16s_v3`
- - `Standard_E32s_v3`
- - `Standard_E64s_v3`
- 3. Select **Spark runtime version** as **Spark 3.2**.
- 4. Select **Next**.
-4. On the **Environment** screen, select **Next**.
-5. On **Job settings** screen:
- 1. Provide a job **Name**, or use the job **Name**, which is generated by default.
- 2. Select an **Experiment name** from the dropdown menu.
- 3. Under **Add tags**, provide **Name** and **Value**, then select **Add**. Adding tags is optional.
- 4. Under the **Code** section:
- 1. Select **Azure Machine Learning workspace default blob storage** from **Choose code location** dropdown.
- 2. Under **Path to code file to upload**, select **Browse**.
- 3. In the pop-up screen titled **Path selection**, select the path of code file `titanic.py` on the workspace default datastore `workspaceblobstore`.
- 4. Select **Save**.
- 5. Input `titanic.py` as the name of **Entry file** for the standalone job.
- 6. To add an input, select **+ Add input** under **Inputs** and
- 1. Enter **Input name** as `titanic_data`. The input should refer to this name later in the **Arguments**.
- 2. Select **Input type** as **Data**.
- 3. Select **Data type** as **File**.
- 4. Select **Data source** as **URI**.
- 5. Enter an Azure Data Lake Storage (ADLS) Gen 2 data URI for `titanic.csv` file in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
- 7. To add an input, select **+ Add output** under **Outputs** and
- 1. Enter **Output name** as `wrangled_data`. The output should refer to this name later in the **Arguments**.
- 2. Select **Output type** as **Folder**.
- 3. For **Output URI destination**, enter an Azure Data Lake Storage (ADLS) Gen 2 folder URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
- 8. Enter **Arguments** as `--titanic_data ${{inputs.titanic_data}} --wrangled_data ${{outputs.wrangled_data}}`.
- 5. Under the **Spark configurations** section:
- 1. For **Executor size**:
- 1. Enter the number of executor **Cores** as 2 and executor **Memory (GB)** as 2.
- 2. For **Dynamically allocated executors**, select **Disabled**.
- 3. Enter the number of **Executor instances** as 2.
- 2. For **Driver size**, enter number of driver **Cores** as 1 and driver **Memory (GB)** as 2.
- 6. Select **Next**.
-6. On the **Review** screen:
- 1. Review the job specification before submitting it.
- 2. Select **Create** to submit the standalone Spark job.
-
-> [!NOTE]
-> A standalone job submitted from the Studio UI using an Azure Machine Learning Managed (Automatic) Spark compute defaults to user identity passthrough for data access.
- # [CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] This example YAML specification shows a standalone Spark job. It uses an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and input/output data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`:
In the above code sample:
- `Standard_E32S_V3` - `Standard_E64S_V3`
+# [Studio UI](#tab/studio-ui)
+First, upload the parameterized Python code `titanic.py` to the Azure Blob storage container for workspace default datastore `workspaceblobstore`. To submit a standalone Spark job using the Azure Machine Learning studio UI:
++
+1. In the left pane, select **+ New**.
+2. Select **Spark job (preview)**.
+3. On the **Compute** screen:
+
+ :::image type="content" source="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" lightbox="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" alt-text="Expandable screenshot showing compute selection screen for a new Spark job in Azure Machine Learning studio UI.":::
+
+ 1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute.
+ 2. Select **Virtual machine size**. The following instance types are currently supported:
+ - `Standard_E4s_v3`
+ - `Standard_E8s_v3`
+ - `Standard_E16s_v3`
+ - `Standard_E32s_v3`
+ - `Standard_E64s_v3`
+ 3. Select **Spark runtime version** as **Spark 3.2**.
+ 4. Select **Next**.
+4. On the **Environment** screen, select **Next**.
+5. On **Job settings** screen:
+ 1. Provide a job **Name**, or use the job **Name**, which is generated by default.
+ 2. Select an **Experiment name** from the dropdown menu.
+ 3. Under **Add tags**, provide **Name** and **Value**, then select **Add**. Adding tags is optional.
+ 4. Under the **Code** section:
+ 1. Select **Azure Machine Learning workspace default blob storage** from **Choose code location** dropdown.
+ 2. Under **Path to code file to upload**, select **Browse**.
+ 3. In the pop-up screen titled **Path selection**, select the path of code file `titanic.py` on the workspace default datastore `workspaceblobstore`.
+ 4. Select **Save**.
+ 5. Input `titanic.py` as the name of **Entry file** for the standalone job.
+ 6. To add an input, select **+ Add input** under **Inputs** and
+ 1. Enter **Input name** as `titanic_data`. The input should refer to this name later in the **Arguments**.
+ 2. Select **Input type** as **Data**.
+ 3. Select **Data type** as **File**.
+ 4. Select **Data source** as **URI**.
+ 5. Enter an Azure Data Lake Storage (ADLS) Gen 2 data URI for `titanic.csv` file in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
+ 7. To add an output, select **+ Add output** under **Outputs** and
+ 1. Enter **Output name** as `wrangled_data`. The output should refer to this name later in the **Arguments**.
+ 2. Select **Output type** as **Folder**.
+ 3. For **Output URI destination**, enter an Azure Data Lake Storage (ADLS) Gen 2 folder URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
+ 8. Enter **Arguments** as `--titanic_data ${{inputs.titanic_data}} --wrangled_data ${{outputs.wrangled_data}}`.
+ 5. Under the **Spark configurations** section:
+ 1. For **Executor size**:
+ 1. Enter the number of executor **Cores** as 2 and executor **Memory (GB)** as 2.
+ 2. For **Dynamically allocated executors**, select **Disabled**.
+ 3. Enter the number of **Executor instances** as 2.
+ 2. For **Driver size**, enter number of driver **Cores** as 1 and driver **Memory (GB)** as 2.
+ 6. Select **Next**.
+6. On the **Review** screen:
+ 1. Review the job specification before submitting it.
+ 2. Select **Create** to submit the standalone Spark job.
+
+> [!NOTE]
+> A standalone job submitted from the Studio UI using an Azure Machine Learning Managed (Automatic) Spark compute defaults to user identity passthrough for data access.
++ > [!TIP]
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md
Last updated 10/21/2021-+ # CLI (v2) Azure Blob datastore YAML schema
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
Last updated 10/21/2021-+ # CLI (v2) Azure Data Lake Gen1 YAML schema
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
Last updated 10/21/2021-+ # CLI (v2) Azure Data Lake Gen2 YAML schema
machine-learning Reference Yaml Datastore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-files.md
Last updated 10/21/2021-+ # CLI (v2) Azure Files datastore YAML schema
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
The ideal scenarios to use mltable are:
- The schema of your data is complex and/or changes frequently. - You only need a subset of data. (for example: a sample of rows or files, specific columns, etc.) - AutoML jobs requiring tabular data.
-If your scenario does not fit the above, then it is likely that [URIs](reference-yaml-data.md) are a more suitable type.
+If your scenario doesn't fit the above, then it's likely that [URIs](reference-yaml-data.md) are a more suitable type.
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/MLTable.schema.json.
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
-| `type` | const | `mltable` to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
+| `type` | const | `mltable` to abstract the schema definition for tabular data so that it's easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
| `paths` | array | Paths can be a `file` path, `folder` path or `pattern` for paths. `pattern` specifies a search pattern to allow globbing(* and **) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. |`file`, `folder`, `pattern` | | | `transformations`| array | Defined sequence of transformations that are applied to data loaded from defined paths. |`read_delimited`, `read_parquet` , `read_json_lines` , `read_delta_lake`, `take` to take the first N rows from dataset, `take_random_sample` to take a random sample of records in the dataset approximately by the probability specified, `drop_columns`, `keep_columns`,... ||
These transformations apply to all mltable-artifact files:
- `convert_column_types` - `columns`: The column name you want to convert type of. - `column_type`: The type you want to convert the column to. For example: string, float, int, or datetime with specified formats.
+- `extract_partition_format_into_columns`: Specify the partition format of path. Defaults to None. The partition information of each path will be extracted into columns based on the specified format. Format part '{column_name}' creates string column, and '{column_name:yyyy/MM/dd/HH/mm/ss}' creates datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute and second for the datetime type.
+
+ The format should start from the position of first partition key until the end of file path. For example, given the path '../Accounts/2022/01/01/data.csv' where the partition is by department name and time, partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.csv' creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value '2022-01-01'. Our principle here is to support transforms specific to data delivery and not to get into wider feature engineering transforms.
## MLTable transformations: read_delimited
The following transformations are specific to delimited files.
- header: user can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`. - delimiter: The separator used to split columns. - empty_as_string: Specify if empty field values should be loaded as empty strings. The default (`False`) will read empty field values as nulls. Passing this setting as `True` will read empty field values as empty strings. If the values are converted to numeric or datetime, then this setting has no effect, as empty values will be converted to nulls.-- include_path_column: Boolean to keep path information as column in the table. Defaults to `False`. This setting is useful when you are reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.-- support_multi_line: By default (support_multi_line=`False`), all line breaks, including those in quoted field values, will be interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may result in silently producing more records with misaligned field values. This setting should be set to `True` when the delimited files are known to contain quoted line breaks.
+- include_path_column: Boolean to keep path information as column in the table. Defaults to `False`. This setting is useful when you're reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.
+- support_multi_line: By default (support_multi_line=`False`), all line breaks, including those line breaks in quoted field values, will be interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may result in silently producing more records with misaligned field values. This setting should be set to `True` when the delimited files are known to contain quoted line breaks.
## MLTable transformations: read_json_lines ```yaml
transformations:
Only flat Json files are supported. Below are the supported transformations that are specific for json lines: -- `include_path_column` Boolean to keep path information as column in the MLTable. Defaults to False. This setting is useful when you are reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.
+- `include_path_column` Boolean to keep path information as column in the MLTable. Defaults to False. This setting is useful when you're reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.
- `invalid_lines` How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`. - `encoding` Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default is `utf8`.
transformations:
### Parquet files transformations If the user doesn't define options for `read_parquet` transformation, default options will be selected (see below). -- `include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when you are reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.
+- `include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when you're reading multiple files, and want to know which file a particular record originated from. And you can also keep useful information in file path.
## MLTable transformations: read_delta_lake ```yaml
type: mltable
paths: - folder: abfss://my_delta_files
-transforms:
+transformations:
- read_delta_lake: timestamp_as_of: '2022-08-26T00:00:00Z' ``` ### Delta lake transformations -- `timestamp_as_of`: Timestamp to be specified for time-travel on the specific Delta Lake data.
+- `timestamp_as_of`: Datetime string in RFC-3339/ISO-8601 format to be specified for time-travel on the specific Delta Lake data.
- `version_as_of`: Version to be specified for time-travel on the specific Delta Lake data. ## Next steps
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
Last updated 08/15/2022-+ # CLI (v2) schedule YAML schema
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
description: Service limits used for capacity planning and maximum limits on req
-+
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023 -+ +
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
To create a virtual network, use the following steps:
:::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-basics.png" alt-text="Image of the basic virtual network config":::
-1. Select __IP Addresses__ tab. The default settings should be similar to the following image:
+1. Select __Security__. Select __Enable Azure Bastion__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields:
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-default.png" alt-text="Default IP Address screen":::
+ * __Bastion name__: A unique name for this Bastion instance
+ * __Public IP address__: Create a new public IP address.
+
+ Leave the other fields at the default values.
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-bastion.png" alt-text="Screenshot of Bastion config.":::
+
+1. Select __IP Addresses__. The default settings should be similar to the following image:
+
+ :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-default.png" alt-text="Default IP Address screen.":::
Use the following steps to configure the IP address and configure a subnet for training and scoring resources:
To create a virtual network, use the following steps:
1. Select the __Default__ subnet and then select __Remove subnet__.
- :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet.":::
- 1. To create a subnet to contain the workspace, dependency services, and resources used for training, select __+ Add subnet__ and set the subnet name and address range. The following are the values used in this tutorial:
- * __Subnet name__: Training
- * __Subnet address range__: 172.16.0.0/24
+ 1. To create a subnet to contain the workspace, dependency services, and resources used for _training_, select __+ Add subnet__ and set the subnet name, starting address, and subnet size. The following are the values used in this tutorial:
+ * __Name__: Training
+ * __Starting address__: 172.16.0.0
+ * __Subnet size__: /24 (256 addresses)
- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet.":::
- > [!TIP]
- > If you plan on using a _service endpoint_ to add your Azure Storage Account, Azure Key Vault, and Azure Container Registry to the VNet, select the following under __Services__:
- > * __Microsoft.Storage__
- > * __Microsoft.KeyVault__
- > * __Microsoft.ContainerRegistry__
- >
- > If you plan on using a _private endpoint_ to add these services to the VNet, you do not need to select these entries. The steps in this article use a private endpoint for these services, so you do not need to select them when following these steps.
-
- 1. To create a subnet for compute resources used to score your models, select __+ Add subnet__ again, and set the name and address range:
+ 1. To create a subnet for compute resources used to _score_ your models, select __+ Add subnet__ again, and set the name and address range:
* __Subnet name__: Scoring
- * __Subnet address range__: 172.16.1.0/24
+ * __Starting address__: 172.16.1.0
+ * __Subnet size__: /24 (256 addresses)
- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet.":::
- > [!TIP]
- > If you plan on using a _service endpoint_ to add your Azure Storage Account, Azure Key Vault, and Azure Container Registry to the VNet, select the following under __Services__:
- > * __Microsoft.Storage__
- > * __Microsoft.KeyVault__
- > * __Microsoft.ContainerRegistry__
- >
- > If you plan on using a _private endpoint_ to add these services to the VNet, you do not need to select these entries. The steps in this article use a private endpoint for these services, so you do not need to select them when following these steps.
-
-1. Select __Security__. For __BastionHost__, select __Enable__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields:
-
- * __Bastion name__: A unique name for this Bastion instance
- * __AzureBastionSubnetAddress space__: 172.16.2.0/27
- * __Public IP address__: Create a new public IP address.
-
- Leave the other fields at the default values.
+ 1. To create a subnet for _Azure Bastion_, select __+ Add subnet__ and set the template, starting address, and subnet size:
+ * __Subnet template__: Azure Bastion
+ * __Starting address__: 172.16.2.0
+ * __Subnet size__: /26 (64 addresses)
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-bastion.png" alt-text="Screenshot of Bastion config":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-azure-bastion-subnet.png" alt-text="Screenshot of Azure Bastion subnet.":::
1. Select __Review + create__.
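If you'd rather script the same network layout than click through the portal, a rough equivalent is sketched below. This is an assumption-laden illustration only: it uses the `azure-identity` and `azure-mgmt-network` packages, assumes a 172.16.0.0/16 address space, uses placeholder names in angle brackets, and still requires the Bastion host itself to be created separately.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Virtual network with the Training, Scoring, and AzureBastionSubnet subnets
# described in the steps above. Values in angle brackets are placeholders.
poller = network_client.virtual_networks.begin_create_or_update(
    "<resource-group>",
    "<vnet-name>",
    {
        "location": "<region>",
        "address_space": {"address_prefixes": ["172.16.0.0/16"]},
        "subnets": [
            {"name": "Training", "address_prefix": "172.16.0.0/24"},
            {"name": "Scoring", "address_prefix": "172.16.1.0/24"},
            {"name": "AzureBastionSubnet", "address_prefix": "172.16.2.0/26"},
        ],
    },
)
vnet = poller.result()
print(vnet.name, [subnet.name for subnet in vnet.subnets])
```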
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-parallel-run-step.md
--++ Last updated 11/16/2022 #Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it.
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
- Previously updated : 01/25/2022+ Last updated : 01/05/2023 # Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models.
# Connect to storage by using identity-based data access with SDK v1
-In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+In this article, you'll learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+
+Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-
To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-datastores). To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
There are two scenarios in which you can apply identity-based data access in Azu
### Accessing storage services
-You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
-Your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](../how-to-assign-roles.md#default-roles)
+Your authentication credentials are kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](../how-to-assign-roles.md#default-roles)
-When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
+When you use identity-based data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication, instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
The same behavior applies when you:
Certain machine learning scenarios involve training models with private data. In
- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). -- An Azure storage account with a supported storage type. These storage types are supported:
+- An Azure storage account with a supported storage type. These storage types are supported:
- [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) - [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) - [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)
Certain machine learning scenarios involve training models with private data. In
- An Azure Machine Learning workspace.
- Either [create an Azure Machine Learning workspace](../how-to-manage-workspace.md) or use an [existing one via the Python SDK](../how-to-manage-workspace.md#connect-to-a-workspace).
+ Either [create an Azure Machine Learning workspace](../how-to-manage-workspace.md) or use an [existing one via the Python SDK](../how-to-manage-workspace.md#connect-to-a-workspace).
## Create and register datastores
-When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You also have the option to manually create the storage you want to connect to without any special permissions, and you just need the name.
+When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You can also manually create the storage you want to connect to without any special permissions; you only need the storage name.
See [Work with virtual networks](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks. In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning will use identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio, so your Azure Active Directory token is used for data access authentication. > [!NOTE]
-> Datastore names should consist only of lowercase letters, numbers, and underscores.
+> Datastore names should consist only of lowercase letters, numbers, and underscores.
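As a minimal sketch of the credential-free pattern just described (the workspace, datastore, container, and account names below are placeholders, not values from this article):

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

# No sas_token, account_key, subscription_id, or service principal client_id is
# passed, so Azure Machine Learning falls back to identity-based data access.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",   # placeholder datastore name
    container_name="mycontainer",           # placeholder container name
    account_name="mystorageaccount",        # placeholder storage account name
)
```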
### Azure blob container
sqldb_dstore = Datastore.register_azure_sql_database(workspace=ws,
``` - ## Storage access permissions To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
Identity-based data access supports connections to **only** the following storag
* Azure Data Lake Storage Gen2 * Azure SQL Database
-To access these storage services, you must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
+To access these storage services, you must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
-If you prefer to not use your user identity (Azure Active Directory), you also have the option to grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access= True` parameter to your data register method.
+If you prefer not to use your user identity (Azure Active Directory), you can also grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access=True` parameter to your data register method.
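As a small illustration of that option (a sketch only, again with placeholder names, and reusing the `ws` workspace object from the earlier example):

```python
# Sketch only: requires Owner permissions on the storage account.
msi_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob_msi",  # placeholder datastore name
    container_name="mycontainer",
    account_name="mystorageaccount",
    grant_workspace_access=True,  # let the workspace managed identity access the storage
)
```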
If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
If you're training a model on a remote compute target and want to access the dat
By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
-You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires additional steps to ensure data isn't leaked outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
+You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires more steps, to ensure that data doesn't leak outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
+
+If your storage account has virtual network settings, they dictate the identity type and access permissions needed. For example, for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
-If your storage account has virtual network settings, that dictates what identity type and permissions access is needed. For example for data preview and data profile, the virtual network settings determine what type of identity is used to authenticate data access.
-
* In scenarios where only certain IPs and subnets are allowed to access the storage, Azure Machine Learning uses the workspace MSI to accomplish data previews and profiles.
-* If your storage is ADLS Gen 2 or Blob and has virtual network settings, customers can use either user identity or workspace MSI depending on the datastore settings defined during creation.
+* If your storage is ADLS Gen 2 or Blob and has virtual network settings, customers can use either user identity or workspace MSI depending on the datastore settings defined during creation.
-* If the virtual network setting is "Allow Azure services on the trusted services list to access this storage account", then Workspace MSI is used.
+* If the virtual network setting is "Allow Azure services on the trusted services list to access this storage account", then Workspace MSI is used.
## Use data in storage
We recommend that you use [Azure Machine Learning datasets](how-to-create-regist
Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
-To create a dataset, you can reference paths from datastores that also use identity-based data access .
+To create a dataset, you can reference paths from datastores that also use identity-based data access.
-* If you're underlying storage account type is Blob or ADLS Gen 2, your user identity needs Blob Reader role.
-* If your underlying storage is ADLS Gen 1, permissions need can be set via the storage's Access Control List (ACL).
+* If your underlying storage account type is Blob or ADLS Gen 2, your user identity needs the Blob Reader role.
+* If your underlying storage is ADLS Gen 1, permissions can be set via the storage's Access Control List (ACL).
-In the following example, `blob_datastore` already exists and uses identity-based data access.
+In the following example, `blob_datastore` already exists and uses identity-based data access.
```python blob_dataset = Dataset.Tabular.from_delimited_files(blob_datastore,'test.csv')
Another option is to skip datastore creation and create datasets directly from s
blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/') ```
-When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
-
-## Access data for training jobs on compute clusters (preview)
--
-When training on [Azure Machine Learning compute clusters](how-to-create-attach-compute-cluster.md#what-is-a-compute-cluster), you can authenticate to storage with your Azure Active Directory token.
-
-This authentication mode allows you to:
-* Set up fine-grained permissions, where different workspace users can have access to different storage accounts or folders within storage accounts.
-* Audit storage access because the storage logs show which identities were used to access data.
-
-> [!IMPORTANT]
-> This functionality has the following limitations
-> * Feature is only supported for experiments submitted via the [Azure Machine Learning CLI](../how-to-configure-cli.md)
-> * Only CommandJobs, and PipelineJobs with CommandSteps and AutoMLSteps are supported
-> * User identity and compute managed identity cannot be used for authentication within same job.
-
-> [!WARNING]
-> This feature is __public preview__ and is __not secure for production workloads__. Ensure that only trusted users have permissions to access your workspace and storage accounts.
->
-> Preview features are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-The following steps outline how to set up identity-based data access for training jobs on compute clusters.
-
-1. Grant the user identity access to storage resources. For example, grant StorageBlobReader access to the specific storage account you want to use or grant ACL-based permission to specific folders or files in Azure Data Lake Gen 2 storage.
-
-1. Create an Azure Machine Learning datastore without cached credentials for the storage account. If a datastore has cached credentials, such as storage account key, those credentials are used instead of user identity.
-
-1. Submit a training job with property **identity** set to **type: user_identity**, as shown in following job specification. During the training job, the authentication to storage happens via the identity of the user that submits the job.
-
-> [!NOTE]
-> If the **identity** property is left unspecified and datastore does not have cached credentials, then compute managed identity becomes the fallback option.
-
-```yaml
-command: |
- echo "--census-csv: ${{inputs.census_csv}}"
- python hello-census.py --census-csv ${{inputs.census_csv}}
-code: src
-inputs:
- census_csv:
- type: uri_file
- path: azureml://datastores/mydata/paths/census.csv
-environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
-compute: azureml:cpu-cluster
-identity:
- type: user_identity
-```
+When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Azure Active Directory token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Next steps
machine-learning How To Inference Onnx Automl Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models-v1.md
In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://on
## Prerequisites
-* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](../how-to-auto-train-image-models.md).
+* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](how-to-auto-train-image-models-v1.md).
* Install the [onnxruntime](https://onnxruntime.ai/docs/get-started/with-python.html) package. The methods in this article have been tested with versions 1.3.0 to 1.8.0.
Within the best child run, go to **Outputs+logs** > **train_artifacts**. Use the
- *labels.json*: File that contains all the classes or labels in the training dataset. - *model.onnx*: Model in ONNX format.
-![Screenshot that shows selections for downloading O N N X model files.](.././media/how-to-inference-onnx-automl-image-models/onnx-files-manual-download.png)
+![Screenshot that shows selections for downloading ONNX model files.](.././media/how-to-inference-onnx-automl-image-models/onnx-files-manual-download.png)
Save the downloaded model files in a directory. The example in this article uses the *./automl_models* directory.
automl_image_run = AutoMLRun(experiment=experiment, run_id=run_id)
best_child_run = automl_image_run.get_best_child() ```
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](../how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters-v1.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models-v1.md#supported-model-algorithms).
To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
onnx_model_path = 'automl_models/model.onnx' # local path to save the model
remote_run.download_file(name='outputs/model_'+str(batch_size)+'.onnx', output_file_path=onnx_model_path) ```
-After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](../how-to-prepare-datasets-for-automl-images.md) for each vision task.
+After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images-v1.md) for each vision task.
We've trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
Perform the following preprocessing steps for the ONNX model inference:
5. Convert to float type. 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
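As a rough illustration only, the sketch below implements this preprocessing with PIL and NumPy. It assumes the usual resize, center-crop, and HWC-to-CHW steps, and uses 256 and 224 as stand-ins for `valid_resize_size` and `valid_crop_size`; substitute the values you trained with.

```python
import numpy as np
from PIL import Image

def preprocess(image_path, resize_size=256, crop_size=224):
    """Sketch of classification preprocessing for the exported ONNX model."""
    img = Image.open(image_path).convert("RGB")

    # Resize the shorter edge to resize_size, preserving the aspect ratio.
    width, height = img.size
    scale = resize_size / min(width, height)
    img = img.resize((round(width * scale), round(height * scale)))

    # Center-crop to crop_size x crop_size.
    width, height = img.size
    left = (width - crop_size) // 2
    top = (height - crop_size) // 2
    img = img.crop((left, top, left + crop_size, top + crop_size))

    # HWC -> CHW, convert to float, and scale pixel values to [0, 1].
    array = np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 255.0

    # Normalize with the ImageNet mean and std listed above (per channel).
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(3, 1, 1)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(3, 1, 1)
    array = (array - mean) / std

    # Add the batch dimension the ONNX model expects.
    return np.expand_dims(array, axis=0)
```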
Get the input shape needed for the ONNX model.
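One way to inspect that shape (a minimal sketch, assuming `onnxruntime` is installed and the model was downloaded to the local path used earlier in this article):

```python
import onnxruntime

session = onnxruntime.InferenceSession("automl_models/model.onnx")
model_input = session.get_inputs()[0]
print(model_input.name, model_input.shape)  # typically a [batch, channel, height, width] shape
```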
Perform the following preprocessing steps for the ONNX model inference. These st
5. Convert to float type. 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
Get the input shape needed for the ONNX model.
Perform the following preprocessing steps for the ONNX model inference:
4. Convert to float type. 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters-v1.md) for Mask R-CNN.
```python import glob
display_detections(img, boxes.copy(), labels, scores, masks.copy(),
## Next steps
-* [Learn more about computer vision tasks in AutoML](../how-to-auto-train-image-models.md)
+* [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models-v1.md)
* [Troubleshoot AutoML experiments](../how-to-troubleshoot-auto-ml.md)
managed-grafana How To Create Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-dashboard.md
+
+ Title: Create a Grafana dashboard with Azure Managed Grafana
+description: Learn how to create and configure Azure Managed Grafana dashboards.
++++ Last updated : 01/02/2023++
+# Create a dashboard in Azure Managed Grafana
+
+In this guide, learn how to create a dashboard in Azure Managed Grafana to visualize data from your Azure services.
+
+A Grafana dashboard contains panels and rows. You can import a Grafana dashboard and adapt it to your own scenario, create a new Grafana dashboard, or duplicate an existing dashboard.
+
+> [!NOTE]
+> The Grafana UI may change periodically. This article shows the Grafana interface and user flow at a given point in time. Your experience may differ slightly from the examples below when you read this document. If that's the case, refer to the [Grafana Labs documentation](https://grafana.com/docs/grafana/latest/dashboards/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+- Another existing Azure service instance with monitoring data.
+
+## Import a Grafana dashboard
+
+To quickly create a dashboard, import a dashboard template from the Grafana Labs website and add it to your Managed Grafana workspace.
+
+1. From the Grafana Labs website, browse through [Grafana dashboards templates](https://grafana.com/grafana/dashboards/?category=azure) and select a dashboard to import.
+1. Select **Copy ID to clipboard**.
+1. For the next steps, use the Azure portal or the Azure CLI.
+
+ ### [Portal](#tab/azure-portal)
+
+ 1. In the Azure portal, open your Azure Managed Grafana workspace and select the **Endpoint** URL.
+ 1. In the Grafana portal, go to **Dashboards > Import**.
+ 1. On the **Import** page, under **Import via grafana.com**, paste the Grafana dashboard ID copied earlier, and select **Load**.
+
+ :::image type="content" source="media/create-dashboard/import-load.png" alt-text="Screenshot of the Grafana instance. Load dashboard to import.":::
+
+ 1. Optionally update the dashboard name, folder and UID.
+ 1. Select a datasource and select **Import**.
+ 1. A new dashboard is displayed.
+ 1. Review the visualizations displayed and edit the dashboard if necessary.
+
+ ### [Azure CLI](#tab/azure-cli)
+
+ 1. Open a CLI and run the `az login` command.
+ 1. Run the [az grafana dashboard import](/cli/azure/grafana/dashboard#az-grafana-dashboard-import) command and replace the placeholders `<AMG-name>`, `<AMG-resource-group>`, and `<dashboard-id>` with the name of the Azure Managed Grafana instance, its resource group, and the dashboard ID you copied earlier.
+
+ ```azurecli
+ az grafana dashboard import --name <AMG-name> --resource-group <AMG-resource-group> --definition <dashboard-id>
+ ```
+
+
+
+## Create a new Grafana dashboard
+
+If none of the pre-configured dashboards listed on the Grafana Labs website fit your needs, create a new dashboard.
+
+### [Portal](#tab/azure-portal)
+
+1. In the Azure portal, open your Azure Managed Grafana workspace and select the **Endpoint** URL.
+1. In the Grafana portal, go to **Dashboards > New Dashboard**.
+1. Select one of the following options:
+ - **Add a new panel**: instantly creates a dashboard from scratch with a first default panel.
+ - **Add a new row**: instantly creates a dashboard with a new empty row.
+ - **Add a panel from the panel library**: instantly creates a dashboard with an existing reusable panel from another instance you have access to.
+
+ :::image type="content" source="media/create-dashboard/from-scratch.png" alt-text="Screenshot of the Grafana instance. Create a new dashboard.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana dashboard create](/cli/azure/grafana/dashboard#az-grafana-dashboard-create) command and replace the placeholders `<AMG-name>`, `<AMG-resource-group>`, `<title>`, and `<definition>` with the name of the Azure Managed Grafana instance, its resource group, a title, and a definition for the new dashboard. The definition is a dashboard model as a JSON string, or a path or URL to a file with such content.
+
+```azurecli
+az grafana dashboard create --name <AMG-name> --resource-group <AMG-resource-group> --title <title> --definition <definition>
+```
+
+For example:
+
+```azurecli
+az grafana dashboard create --name myGrafana --resource-group myResourceGroup --title "My dashboard" --folder folder1 --definition '{
+ "dashboard": {
+ "annotations": {
+ ...
+ },
+ "panels": {
+ ...
+ }
+ },
+ "message": "Create a new test dashboard"
+}'
+```
+++
+## Duplicate a Grafana dashboard
+
+Duplicate a Grafana dashboard using your preferred method.
+
+### [Portal](#tab/azure-portal)
+
+To copy a Grafana dashboard:
+
+1. Open an existing dashboard in your Grafana instance
+1. Select **Dashboard settings**
+1. Select **Save as**
+1. Enter a new name and/or a new folder and select **Save**
+
+ :::image type="content" source="media\create-dashboard\copy-dashboard.png" alt-text="Screenshot of the Grafana instance. Duplicate a dashboard.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Run the [az grafana dashboard show](/cli/azure/grafana/dashboard#az-grafana-dashboard-show) command to show the definition of the dashboard you want to duplicate, and copy the output.
+
+ ```azurecli
+ az grafana dashboard show --name <AMG-name> --resource-group <AMG-resource-group> --dashboard <dashboard-UID>
+ ```
+
+1. Run the [az grafana dashboard create](/cli/azure/grafana/dashboard#az-grafana-dashboard-create) command and replace the placeholders `<AMG-name>`, `<AMG-resource-group>`, `<title>`, and `<dashboard-id>` with your own information. Replace `<definition>` with the output you copied in the previous step, and remove the `uid` and `id`.
+
+ ```azurecli
+ az grafana dashboard create --name <AMG-name> --resource-group <AMG-resource-group> --title <title> --definition <definition>
+ ```
+
+ For example:
+
+ ```azurecli
+ az grafana dashboard create --name myGrafana --resource-group myResourceGroup --title "My dashboard" --folder folder1 --definition '{
+ "dashboard": {
+ "annotations": {
+ ...
+ },
+ "panels": {
+ ...
+ }
+ },
+ "message": "Create a new test dashboard"
+ }'
+ ```
+++
+## Edit a dashboard panel
+
+Edit a Grafana dashboard panel using your preferred method.
+
+### [Portal](#tab/azure-portal)
+
+To update a Grafana panel, follow the steps below.
+
+1. Review the panel to check if you're satisfied with it or want to make some edits.
+
+ :::image type="content" source="media/create-dashboard/visualization.png" alt-text="Screenshot of the Grafana instance. Example of visualization.":::
+
+1. In the lower part of the page:
+ 1. **Query** tab:
+ 1. Review the selected data source. If necessary, select the drop-down list to use another data source.
+ 1. Update the query. Each data source has a specific query editor that provides different features and capabilities for that type of [data source](https://grafana.com/docs/grafana/v9.1/datasources/#querying).
+ 1. Select **+ Query** or **+ Expression** to add a new query or expression.
+
+ :::image type="content" source="media/create-dashboard/edit-query.png" alt-text="Screenshot of the Grafana instance. Queries.":::
+
+ 1. **Transform** tab: filter data or queries, and organize or combine data before the data is visualized.
+ 1. **Alert** tab: set alert rules and notifications.
+
+1. At the top of the page:
+ 1. Toggle **Table view** to display data as a table.
+ 1. Switch between **Fill** and **Actual** to edit the panel size.
+ 1. Select the time icon to update the time range.
+ 1. Select the visualization drop-down menu to choose a visualization type that best supports your use case. Go to [visualization](https://grafana.com/docs/grafana/latest/panels-visualizations/visualizations/) for more information.
+
+ :::image type="content" source="media/create-dashboard/panel-time-visualization-options.png" alt-text="Screenshot of the Grafana instance. Time, visualization and more options.":::
+
+1. On the right hand side, select the **Panel options** icon to review and update various panel options.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana dashboard update](/cli/azure/grafana/dashboard#az-grafana-dashboard-update) command and update the Grafana dashboard definition.
+
+```azurecli
+az grafana dashboard update --name <AMG-name> --resource-group <AMG-resource-group> --definition <definition>
+```
+++
+## Next steps
+
+In this how-to guide, you learned how to create a Grafana dashboard. To learn how to manage your data sources, go to:
+
+> [!div class="nextstepaction"]
+> [Configure data sources](how-to-data-source-plugins-managed-identity.md)
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for MariaDB description: Describes the connectivity architecture for your Azure Database for MariaDB server.--++ Last updated 06/24/2022
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
To set custom prices in an individual market, export, modify, and then import th
1. In the dialog box that appears, click **Yes**. 1. Select the exportedPrice.xlsx file you updated, and then click **Open**.
+> [!NOTE]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+ ## Choose who can see your plan You can configure each plan to be visible to everyone or to only a specific audience. You grant access to a private audience using Azure subscription IDs with the option to include a description of each subscription ID you assign. You can add a maximum of 10 subscription IDs manually or up to 10,000 subscription IDs using a .CSV file. Azure subscription IDs are represented as GUIDs and letters must be lowercase.
The actions that are available in the **Action** column of the **Plan overview**
## Next steps - [Test and publish Azure application offer](azure-app-test-publish.md).-- [Sell an Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs.
+- [Sell an Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs.
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md
When it comes to defining the offer along with its pricing models, it is importa
* Pricing model has a monthly recurring fee, which can be set to $0. * In addition to the recurring fee, the plan can also include optional dimensions used to charge customers for usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [Marketplace metering service API](marketplace-metering-service-apis.md).
- > [!IMPORTANT]
- > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
+* > [!IMPORTANT]
+ > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
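As a rough, non-authoritative sketch of that overage-only pattern in Python: the endpoint, field names, and token handling below are assumptions drawn from the Marketplace metering service API reference linked above, so verify them there before relying on this.

```python
import datetime
import requests

# Assumed metering endpoint; confirm against the Marketplace metering service API docs.
METERING_URL = "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"

def report_overage(resource_id, plan_id, dimension, used, included, token):
    """Send a usage event only for consumption above the included (base fee) quantity."""
    overage = used - included
    if overage <= 0:
        return None  # usage is within the base fee; nothing to report

    event = {
        "resourceId": resource_id,           # managed application resource identifier
        "planId": plan_id,
        "dimension": dimension,              # billing dimension identifier
        "quantity": overage,
        "effectiveStartTime": datetime.datetime.utcnow().isoformat() + "Z",
    }
    response = requests.post(
        METERING_URL,
        json=event,
        headers={"Authorization": f"Bearer {token}"},  # Azure AD token for the metering service
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```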
+ > [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
## Sample offer As an example, Contoso is a publisher with a managed application service called Contoso Analytics (CoA). CoA allows customers to analyze large amount of data for reporting and data warehousing. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CoA, outlined below:
Billing dimensions are shared across all plans for an offer. Some attributes app
The attributes, which define the dimension itself, are shared across all plans for an offer. Before you publish the offer, a change made to these attributes from the context of any plan will affect the dimension definition across all plans. Once you publish the offer, these attributes will no longer be editable. The attributes are: * Identifier
-* Name
-* Unit of measure
- The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, the following attributes will no longer be editable: * Included quantity for monthly customers
A dimension used with the Marketplace metering service represents an understandi
Once an offer is published with a dimension, the offer-level details for that dimension can no longer be changed: * Identifier
-* Name
-* Unit of measure
- Once a plan is published, the plan-level details can no longer be changed: * Included quantity for monthly term
Follow the instruction in [Support for the commercial marketplace program in Par
**Video tutorial** - [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)++
marketplace Azure Container Plan Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-availability.md
When you remove a market, customers from that market who are using active deploy
Select *Save* to continue.
+> [!NOTE]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+ ## Pricing For the License model, select *Custom price* to configure pricing for this plan, or *Bring your own license* (BYOL) to let customers use this plan with their existing license.
marketplace Azure Container Technical Assets Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-technical-assets-kubernetes.md
Previously updated : 09/27/2022 Last updated : 11/30/2022 # Prepare Azure container technical assets for a Kubernetes app
In addition to your solution domain, your engineering team should have knowledge
## Publishing overview
-The first step to publish your Kubernetes app-based Container offer on the Azure Marketplace is to package your application as a [Cloud Native Application Bundle (CNAB)][cnab]. This CNAB, comprised of your application's artifacts, will be first published to your private Azure Container Registry (ACR) and later pushed to an Azure Marketplace-specific public ACR and will be used as the single artifact you reference in Partner Center.
+The first step to publish your Kubernetes app-based Container offer on the Azure Marketplace is to package your application as a [Cloud Native Application Bundle (CNAB)][cnab]. This CNAB, comprised of your application's artifacts, will be first published to your private Azure Container Registry (ACR) and later pushed to a Microsoft-owned ACR and will be used as the single artifact you reference in Partner Center.
+
+From there, vulnerability scanning is performed to ensure images are secure. Finally, the Kubernetes application is registered as an extension type for an Azure Kubernetes Service (AKS) cluster.
Once your offer is published, your application will leverage the [cluster extensions for AKS][cluster-extensions] feature to manage your application lifecycle inside an AKS cluster. ++ ## Grant access to your Azure Container Registry As part of the publishing process, Microsoft will deep copy your CNAB from your ACR to a Microsoft-owned, Azure Marketplace-specific ACR. This step requires you to grant Microsoft access to your registry.
The fields used in the manifest are as follows:
|applicationName|String|Name of the application| |publisher|String|Name of the Publisher| |description|String|Short description of the package|
-|version|SemVer string|SemVer string that describes the application package version, may or may not match the version of the binaries inside. Mapped to Porter's version field|
+|version|String in `#.#.#` format|Version string that describes the application package version, may or may not match the version of the binaries inside. Mapped to Porter's version field|
|helmChart|String|Local directory where the Helm chart can be found relative to this `manifest.yaml`| |clusterARMTemplate|String|Local path where an ARM template that describes an AKS cluster that meets the requirements in restrictions field can be found| |uiDefinition|String|Local path where a JSON file that describes an Azure portal Create experience can be found|
The fields used in the manifest are as follows:
For a sample configured for the voting app, see the following [manifest file example][manifest-sample].
+### User parameter flow
+
+It's important to understand how user parameters flow throughout the artifacts you're creating and packaging. Parameters are initially defined when creating the UI through a *createUiDefinition.json* file:
++
+> [!NOTE]
+> In this example, `extensionResourceName` is also parameterized and passed to the cluster extension resource. Similarly, other extension properties can be parameterized, such as enabling auto upgrade for minor versions. For more on cluster extension properties, see [optional parameters][extension-parameters].
+
+and are exported via the `outputs` section:
++
+From there, the values are passed to the Azure Resource Manager template and will be propagated to the Helm chart during deployment:
++
+Finally, the values are consumed by the Helm chart:
++ ### Structure your application Place the createUiDefinition, ARM template, and manifest file beside your application's Helm chart.
The following Docker command pulls the latest packaging tool image and also moun
AssumingΓÇ»`~\<path-to-content>` is a directory containing the contents to be packaged, the following docker command will mount `~/<path-to-content>` to `/data` in the container. Be sure to replace `~/<path-to-content>` with your own app's location. ```bash
+docker pull mcr.microsoft.com/container-package-app:latest
+ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v ~/<path-to-content>:/data --entrypoint "/bin/bash" mcr.microsoft.com/container-package-app:latest ```
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v ~/<path-to-conten
Assuming `D:\<path-to-content>` is a directory containing the contents to be packaged, the following docker command will mount `d:/<path-to-content>` to `/data` in the container. Be sure to replace `d:/<path-to-content>` with your own app's location. ```bash
+docker pull mcr.microsoft.com/container-package-app:latest
+ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v d:/<path-to-content>:/data --entrypoint "/bin/bash" mcr.microsoft.com/container-package-app:latest ```
For an example of how to integrate `container-package-app` into an Azure Pipelin
[ui-sample]: https://github.com/Azure-Samples/kubernetes-offer-samples/blob/main/samples/k8s-offer-azure-vote/createUIDefinition.json [pipeline-sample]: https://github.com/Azure-Samples/kubernetes-offer-samples/tree/main/samples/.pipelines/AzurePipelines/azure-pipelines.yml [arm-template-sample]: https://github.com/Azure-Samples/kubernetes-offer-samples/blob/main/samples/k8s-offer-azure-vote/mainTemplate.json
-[manifest-sample]: https://github.com/Azure-Samples/kubernetes-offer-samples/blob/main/samples/k8s-offer-azure-vote/manifest.yaml
+[manifest-sample]: https://github.com/Azure-Samples/kubernetes-offer-samples/blob/main/samples/k8s-offer-azure-vote/manifest.yaml
+[extension-parameters]: ../aks/cluster-extensions.md#optional-parameters
marketplace Azure Container Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-troubleshoot.md
+
+ Title: Troubleshoot publishing issues for a Kubernetes application based Container offer in Microsoft AppSource.
+description: Learn about potential issue and solutions when publishing a Kubernetes application based Container offer in Microsoft AppSource.
+++++ Last updated : 11/14/2022++
+# Troubleshoot issues while publishing a Kubernetes application-based Container offer
+
+Once published, a Kubernetes application-based Container offer goes through the following high-level flow for bundle processing.
++
+First, the contents of the Cloud Native Application Bundle (CNAB) are copied from your own registry to a Microsoft-owned Azure Container Registry (ACR). From there, vulnerability scanning is performed to ensure images are secure. Finally, the Kubernetes application is registered as an [extension][cluster-extension] type for an Azure Kubernetes Service (AKS) cluster. If the publish fails, it may be an issue with one of these components. See below for common errors and related mitigation steps.
+
+## Common issues
+
+### Publishing fails with missing artifacts in the CNAB
+
+|Error|Description|Action|
+|--|:--|--|
+|"extensionRegistrationParameters cannot be null or empty in manifest.yaml of your package. For more details, please refer to https://aka.ms/K8sOfferAssets#create-the-manifest-file"|Kubernetes applications are packaged as AKS cluster extensions. The manifest file provides input for the Extension Type creation.|Read the description for each property and provide the information.|
+|"namespace cannot be null or empty for defaultScope as cluster in extensionRegistrationParameters in manifest.yaml of your package. For more details, please refer to https://aka.ms/K8sOfferAssets#create-the-manifest-file"|Kubernetes applications that are installed at Cluster scope will use the default scope provided as the namespace.|Be sure to provide a namespace in the `extensionRegistrationParameters` section in your manifest file|
+
+### Publishing fails while copying the artifacts from your ACR to a Microsoft-owned ACR
+
+|Error|Description|Action|
+|--|--|--|
+|"Access to registry {sourceACRName} was denied. Please provide MarketPlace access to registry. please refer: https://aka.ms/K8sOfferAssets#grant-access-to-your-azure-container-registry"|During the publishing process, Microsoft moves your Kubernetes application, which is packaged as a CNAB and uploaded to an ACR, to a Microsoft-owned registry. <br><br/> To do so, Microsoft's first party app responsible for this process must be provided with permissions. This error appears if the Marketplace publishing was done without providing the permissions.|[Provide Microsoft's first party app with the proper permissions][grant-access].|
+|"CNAB repository {cnabBundle} cannot be found in registry {sourceACRName}. Please provide MarketPlace access to registry. please refer: https://aka.ms/K8sOfferAssets#grant-access-to-your-azure-container-registry"|The Kubernetes application that has been packaged using the CPA tool can't be found in your ACR.|Ensure the bundle has been successfully uploaded to your registry, and [provide Microsoft's first party app with the proper permissions][grant-access].|
+|"The CNAB repository name {cnabBundle} with digest {targetDigest} already exists and is different than your provided CNAB digest {sourcedigest}."|A plan with the same version is already published using a different CNAB.|If your CNAB contents have changed, increment the plan version and try publishing again.|
+
+### Publishing fails with Platform errors
+
+|Error|Description|Action|
+|--|--|--|
+|Internal server error|May be a transient error.|Try publishing again.|
+
+### Vulnerability scanning
+
+You may also encounter errors due to vulnerabilities in your images. For more information on vulnerability scanning and how to mitigate issues, see [Container certification troubleshooting][container-certification-troubleshooting].
+
+<!-- LINKS -->
+[container-certification-troubleshooting]: ./azure-container-certification-faq.yml
+[cluster-extension]: /azure/aks/integrations#extensions/
+[grant-access]: ./azure-container-technical-assets-kubernetes.md#grant-access-to-your-azure-container-registry
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
When you remove a market, customers from that market who are using active deploy
Select **Save** to continue.
+> [!NOTE]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more in ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+ ## Pricing For the **License model**, select **Usage-based monthly billed plan** to configure pricing for this plan, or **Bring your own license** to let customers use this plan with their existing license.
marketplace Create Consulting Service Pricing Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-pricing-availability.md
To validate the conversion or to set custom prices in an individual market, you
1. Open the exportedPrice.xlsx file in Microsoft Excel. 1. In the spreadsheet, you can adjust prices and currencies for each market. See [Geographic availability and currency support for the commercial marketplace](./marketplace-geo-availability-currencies.md) for the list of supported currencies. When you're done, save the file. 1. In Partner Center, under **Pricing**, select the **Import pricing data** link. Importing the file will overwrite previous pricing information.-
+> [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more in ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
> [!IMPORTANT] > The prices you define in Partner Center are static and don't follow variations in the exchange rates. To change the price in one or more markets after publication, update and resubmit your offer in Partner Center.
Select **Save draft** before continuing.
## Next steps * [Review and publish](review-publish-offer.md)++
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
Every plan must be available in at least one market. On the **Pricing and availa
> This dialog box includes a search box and an option to filter on only "Tax Remitted" countries, in which Microsoft remits sales and use tax on your behalf. 1. Select **Save**, to close the dialog box.-
+> [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more in ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
## Define a pricing model You must associate a pricing model with each plan: either _flat rate_ or _per user_. All plans in the same offer must use the same pricing model. For example, an offer cannot have one plan that's flat rate and another plan thatΓÇÖs per user. For more information, see [SaaS pricing models](plan-saas-offer.md#saas-pricing-models).
If you haven't already done so, create a development and test (DEV) offer to tes
- [Publishing a Private SaaS plan](https://go.microsoft.com/fwlink/?linkid=2196256) - [Configuring SaaS Pricing in Partner Center: Publisher Overview](https://go.microsoft.com/fwlink/?linkid=2201523) - [Configuring SaaS Pricing in Partner Center: Publisher Demo](https://go.microsoft.com/fwlink/?linkid=2201524)++
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-geo-availability-currencies.md
To change the price of an offer that has already been published, see [Changing p
Customers see the offer price in their tenant currency, or in their billing account currency if the customers have selected a specific subscription for their purchase.
-Microsoft receives payments from customers in the customer account billing currency and pays you in the currency you selected in Partner Center. Microsoft converts the customer currency using the exchange rate of the month of the transaction.
+Microsoft receives payments from customers in the customer account billing currency and pays you in the currency you selected in the Partner Center. Microsoft converts the customer currency using the exchange rate of the month of the transaction.
Microsoft converts offer prices using exchange rates sourced directly from the WMR exchange rates (4pm London WM/Refinitiv). Microsoft sources WMR rates on both a daily and monthly basis.
-The following illustration shows the currency conversion flow:
+The following illustration shows the currency conversion flow, with up to three different foreign exchange conversions, depending on the offer currency, customer agreement currency, and ISV currency:
![The screenshot shows the updated currency exchange flow.](media/marketplace-geo-availability-currencies/currency-exchange-flow-updated-13.png)
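As a purely illustrative walk-through of the flow above, the sketch below uses hypothetical prices and exchange rates (not actual WMR values) to show an offer priced in USD, a customer billed in EUR, and an ISV paid out in GBP:

```bash
# Illustrative only: hypothetical offer price and exchange rates.
OFFER_PRICE_USD=100
USD_TO_EUR=0.92   # hypothetical rate used when the customer is billed in EUR
EUR_TO_GBP=0.86   # hypothetical rate used when the ISV is paid out in GBP

CUSTOMER_BILLED_EUR=$(echo "$OFFER_PRICE_USD * $USD_TO_EUR" | bc)
ISV_PAYOUT_GBP=$(echo "$CUSTOMER_BILLED_EUR * $EUR_TO_GBP" | bc)

echo "Offer price:     USD $OFFER_PRICE_USD"
echo "Customer billed: EUR $CUSTOMER_BILLED_EUR"
echo "ISV paid out:    GBP $ISV_PAYOUT_GBP"
```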
As an ISV, you have several options available to minimize impact of foreign exch
+
marketplace Marketplace Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-rewards.md
Previously updated : 05/28/2021 Last updated : 01/06/2023 # ISV Success program and Marketplace Rewards
-Microsoft continues its strong commitment to the growth and success of ISVs and supporting them throughout the entire journey of building, publishing, and selling apps through the Microsoft commercial marketplace. To further this mission, Marketplace Rewards is now included in the ISV Success program, available—at no cost—to all participants of the program. As you grow through the Microsoft commercial marketplace, you unlock new benefits designed to help you convert customers and close deals. For details on the program and benefits, see [Marketplace Rewards](https://aka.ms/marketplacerewards) (PPT). To see what other Microsoft partners are saying about their experiences with Marketplace Rewards, visit Marketplace [Rewards testimonials](https://aka.ms/MarketplaceRewardsTestimonials).
-the benefits at each stage of growth help you progress to the next stage, helping you to grow your business to Microsoft customers, with Microsoft's field, and through Microsoft's channel by applying the commercial marketplace as your platform.
+
+Microsoft continues its strong commitment to the growth and success of ISVs, supporting them throughout the entire journey of building, publishing, and selling apps through the Microsoft commercial marketplace. To further this mission, Marketplace Rewards is now included in the ISV Success program, available at no cost to all participants of the program.
+
+## Your commercial marketplace benefits
+
+As you grow through the Microsoft commercial marketplace, you unlock new benefits designed to help you convert customers and close deals. For details on the program and benefits, see [Marketplace Rewards](https://aka.ms/marketplacerewards) (PPT). To see what other Microsoft partners are saying about their experiences with Marketplace Rewards, visit [Marketplace Rewards testimonials](https://aka.ms/MarketplaceRewardsTestimonials).
+
+The program creates a positive feedback loop: the benefits at each stage of growth help you progress to the next stage, helping you grow your business to Microsoft customers, with Microsoft's field, and through Microsoft's channel by leveraging the commercial marketplace as your platform.
Your benefits are differentiated based on whether your offer is [List, Trial, Consulting or Transact](/azure/marketplace/determine-your-listing-type).
Your steps to get started are easy:
> If your offer has been live for more than three weeks and you have not received a message, check in Partner Center to find who in your organization owns the offer. They should have the communication and next steps. If you cannot determine the owner, or if the owner has left your company, open a [support ticket](https://go.microsoft.com/fwlink/?linkid=2165533). The scope of the activities available to you expands as you grow your offerings in the marketplace. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.----
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/orders-dashboard.md
This table displays a numbered list of the 500 top orders sorted by date of acqu
| Term End Date | TermEndDate | Indicates the end date of a term for an order | TermEndDate | | Not available | purchaseRecordId | The identifier of the purchase record for an order purchase | purchaseRecordId | | Not available | purchaseRecordLineItemId | The identifier of the purchase record line item related to this order. | purchaseRecordLineItemId |
-| Billed Revenue USD | EstimatedCharges | The price the customer will be charged for all order units before taxation. This is calculated in customer transaction currency. In tax-inclusive countries, this price includes the tax, otherwise it doesn't. | EstimatedCharges |
+| Billed Revenue USD | EstimatedCharges | The price the customer will be charged for all order units before taxation. This is calculated in customer transaction currency. In tax-inclusive countries/regions, this price includes the tax, otherwise it doesn't. | EstimatedCharges |
| Not available | Currency | Billing currency for the order purchase | Currency | | Not available | HasTrial | Represents whether an offer has trial period enabled | HasTrial | | Is Trial | IsTrial | Represents whether an offer SKU is in trial period | IsTrial |
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/saas-metered-billing.md
For a SaaS offer to use metered billing, it must first:
- Be configured for the **flat rate** pricing model when charging customers for your service. Dimensions are an optional extension to the flat rate pricing model. Then the SaaS offer can integrate with the [commercial marketplace metering service APIs](../marketplace-metering-service-apis.md) to inform Microsoft of billable events.-
->[!Note]
->Marketplace metering service is available only to the flat rate billing model, and does not apply to the per user billing model.
+> [!Note]
+>Marketplace metering service is available only to the flat rate billing model and does not apply to the per user billing model.
## How metered billing fits in with pricing
-Understanding the offer hierarchy is important, when it comes to defining the offer along with its pricing models.
+Understanding the offer hierarchy is important when it comes to defining the offer along with its pricing models.
- Each SaaS offer is configured to sell either through Microsoft or not. Once an offer is published, this option cannot be changed. - Each SaaS offer, configured to sell through Microsoft, can have one or more plans. A user subscribes to the SaaS offer, but it is purchased through Microsoft within the context of a plan.
Understanding the offer hierarchy is important, when it comes to defining the of
> [!IMPORTANT] > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.-
+> [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more in ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
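To illustrate the requirement above to report only the usage that exceeds the flat base fee, here is a minimal sketch that emits a single usage event, assuming the usage event endpoint and fields described in the metering service APIs article; the subscription ID, plan ID, dimension name, access token, and unit counts are placeholders for values from your own offer:

```bash
# Minimal sketch: report only the overage, i.e. usage above the flat base fee.
INCLUDED_UNITS=10000     # example: units covered by the flat monthly fee
USED_UNITS=12500         # example: units your service actually metered
OVERAGE=$(( USED_UNITS - INCLUDED_UNITS ))

if [ "$OVERAGE" -gt 0 ]; then
  curl -X POST "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" \
    -H "Authorization: Bearer $MARKETPLACE_ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{
          \"resourceId\": \"$SAAS_SUBSCRIPTION_ID\",
          \"quantity\": $OVERAGE,
          \"dimension\": \"notifications\",
          \"effectiveStartTime\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",
          \"planId\": \"$PLAN_ID\"
        }"
fi
```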
## Sample offer As an example, Contoso is a publisher with a SaaS service called Contoso Notification Services (CNS). CNS lets its customers send notifications either via email or text. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish SaaS offers to Azure customers. There are three plans associated with CNS, outlined below:
To understand publisher support options and open a support ticket with Microsoft
- [SaaS Metered Billing Overview](https://go.microsoft.com/fwlink/?linkid=2196314) - [The SaaS Metered Billing API with REST](https://go.microsoft.com/fwlink/?linkid=2196418)++
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
When planning a price change, consider the following:
For a price decrease to a Software as a service offer to take effect on the first of the next month, publish the price change at least four days before the end of the current month. For a price increase to a Software as a service offer to take effect on the first of a future month, 90 days out, publish the price change at least four days before the end of the current month.-
+> [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more in ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
## Changing the flat fee of a SaaS or Azure app offer To update the monthly or yearly price of a SaaS or Azure app offer:
After the price change is canceled, follow the steps in the appropriate part of
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
Calculations are in the preceding order. A server moves to a later stage only if
Here's what's included in an Azure VM assessment:
-**Property** | **Details**
+**Setting** | **Details**
| **Target location** | The location to which you want to migrate. The assessment currently supports these target Azure regions:<br><br> Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, China East, China East 2, China North, China North 2, East Asia, East US, East US 2, France Central, France South, Germany North, Germany West Central, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, Norway East, Norway West, South Africa North, South Africa West, South Central US, Southeast Asia, South India, Switzerland North, Switzerland West, UAE Central, UAE North, UK South, UK West, West Central US, West Europe, West India, West US, West US 2, JioIndiaCentral, JioIndiaWest, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia. **Target storage disk (as-is sizing)** | The type of disk to use for storage in Azure. <br><br> Specify the target storage disk as Premium-managed, Standard SSD-managed, Standard HDD-managed, or Ultra disk. **Target storage disk (performance-based sizing)** | Specifies the type of target storage disk as automatic, Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk.<br><br> **Automatic**: The disk recommendation is based on the performance data of the disks, meaning the IOPS and throughput.<br><br>**Premium or Standard or Ultra disk**: The assessment recommends a disk SKU within the storage type selected.<br><br> If you want a single-instance VM service-level agreement (SLA) of 99.9%, consider using Premium-managed disks. This use ensures that all disks in the assessment are recommended as Premium-managed disks.<br><br> If you're looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks.<br><br> Azure Migrate supports only managed disks for migration assessment.
-**Azure Reserved VM Instances** | Specifies [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) so that cost estimations in the assessment take them into account.<br><br> When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable.<br><br> Azure Migrate currently supports Azure Reserved VM Instances only for pay-as-you-go offers.
+**Savings options (compute)** | Specify the savings option that you want the assessment to consider to help optimize your Azure compute cost. <br><br> [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. <br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable. The monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU (see the worked example after this table).
**Sizing criteria** | Used to rightsize the Azure VM.<br><br> Use as-is sizing or performance-based sizing. **Performance history** | Used with performance-based sizing. Performance history specifies the duration used when performance data is evaluated. **Percentile utilization** | Used with performance-based sizing. Percentile utilization specifies the percentile value of the performance sample used for rightsizing.
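A quick worked example of the 744-hour estimate mentioned in the table above, using a hypothetical hourly price for the recommended SKU:

```bash
# 744 hours = 31 days x 24 hours; the hourly price below is hypothetical.
HOURLY_PRICE_USD=0.10
echo "744 * $HOURLY_PRICE_USD" | bc   # prints 74.40 -> estimated monthly compute cost in USD
```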
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
The appliance collects performance data for compute settings with these steps:
The Azure SQL assessment properties include:
-**Section** | **Property** | **Details**
+**Section** | **Setting** | **Details**
| | | Target and pricing settings | **Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify. Target and pricing settings | **Environment type** | The environment for the SQL deployments to apply pricing applicable to Production or Dev/Test. Target and pricing settings | **Offer/Licensing program** |The Azure offer if you're enrolled. Currently the field is defaulted to Pay-as-you-go, which will give you retail Azure prices. <br/><br/>You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.<br/>You can apply Azure Hybrid Benefit on top of Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying Reserved Capacity on top of Pay-as-you-go offer and Dev/Test environment. <br/>If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
-Target and pricing settings | **Reserved Capacity** | You can specify reserved capacity so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime". <br/>If the Reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
+Target and pricing settings | **Savings options - Azure SQL MI and DB (PaaS)** | Specify the reserved capacity savings option that you want the assessment to consider to help optimize your Azure compute cost. <br><br> [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances. When you select any savings option other than 'None', the 'Discount (%)' and "VM uptime" settings are not applicable. The monthly cost estimates are calculated by multiplying 744 hours with the hourly price of the recommended SKU.
+Target and pricing settings | **Savings options - SQL Server on Azure VM (IaaS)** | Specify the savings option that you want the assessment to consider to help optimize your Azure compute cost. <br><br> [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. <br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and "VM uptime" settings are not applicable. The monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
Target and pricing settings | **Currency** | The billing currency for your account. Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Target and pricing settings | **VM uptime** | You can specify the duration (days per month/hour per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you are aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
Assessment criteria | **Sizing criteria** | Defaulted to *Performance-based*, wh
Assessment criteria | **Performance history** | You can indicate the data duration on which you want to base the assessment. (Default is one day) Assessment criteria | **Percentile utilization** | You can indicate the percentile value you want to use for the performance sample. (Default is 95th percentile) Assessment criteria | **Comfort factor** | You can indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage.
+Assessment criteria | **Optimization preference** | You can specify the preference for the recommended assessment report. Selecting 'Minimize cost' would result in the Recommended assessment report recommending those deployment types that have the fewest migration issues and are most cost effective, whereas selecting 'Modernize to PaaS' would result in the Recommended assessment report recommending PaaS (Azure SQL MI or DB) deployment types over IaaS (Azure VMs), wherever the SQL Server instance is ready for migration to PaaS, irrespective of cost.
Azure SQL Managed Instance sizing | **Service Tier** | You can choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:<br/><br/>Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. Azure SQL Managed Instance sizing | **Instance type** | Defaulted to *Single instance*. Azure SQL Managed Instance sizing | **Pricing Tier** | Defaulted to *Standard*.
migrate Concepts Azure Webapps Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-webapps-assessment-calculation.md
Follow our tutorial for assessing [ASP.NET web apps](tutorial-assess-webapps.md)
Here's what's included in Azure App Service assessment properties:
-**Property** | **Details**
+**Setting** | **Details**
| **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
-**Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify ΓÇ£Discount (%)ΓÇ¥.
+**Savings options (compute)** | Specify the savings option that you want the assessment to consider to help optimize your Azure compute cost. <br><br> [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. <br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable. The monthly cost estimates are calculated by multiplying 744 hours with the hourly price of the recommended SKU.
**Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. **Currency** | The billing currency for your account. **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**. [Learn more](https://aka.ms/azurereservedinstances).
+- In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment.
Run an assessment as follows:
1. Select **Save** if you make changes.
- :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-properties.png" alt-text="Screenshot of Assessment properties.":::
- 1. In **Assess Servers**, select **Next**. 1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
migrate How To Create Azure App Service Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-app-service-assessment.md
An Azure App Service assessment provides one sizing criteria:
**Sizing criteria** | **Details** | **Data** | |
-**Configuration-based** | Assessments that make recommendations based on collected configuration data | The Azure App Service assessment takes only configuration data in to consideration for assessment calculation. Performance data for web apps is not collected.
+**Configuration-based** | Assessments that make recommendations based on collected configuration data | The Azure App Service assessment takes only configuration data into consideration for assessment calculation. Performance data for web apps isn't collected.
[Learn more](concepts-azure-webapps-assessment-calculation.md) about Azure App Service assessments.
Run an assessment as follows:
:::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Overview page for Azure Migrate"::: 2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure App Service**. :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Dropdown to choose assessment type as Azure App Service":::
-3. In **Create assessment** > you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
+3. In **Create assessment** > you'll be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
4. Click **Edit** to review the assessment properties. :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Edit button from where assessment properties can be customized":::
Run an assessment as follows:
| **Property** | **Details** | | | | | **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. |
- | **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans. |
- | **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify ΓÇ£Discount (%)ΓÇ¥. |
+ | **Isolation required** | Select *Yes* if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs. It provides faster processors, SSD storage, and double the memory to core ratio compared to Standard plans. |
+ | **Savings options (compute)** | Specify the savings option that you want the assessment to consider to help optimize your Azure compute cost. <br><br> [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide more flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. <br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting isn't applicable. The monthly cost estimates are calculated by multiplying 744 hours with the hourly price of the recommended SKU.|
| **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. | | **Currency** | The billing currency for your account. | | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. | | **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings. |
- :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="App Service assessment properties":::
1. In **Create assessment** > click Next. 1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
Run an assessment as follows:
:::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data"::: 1. Click on the number next to Azure App Service assessment. :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment":::
-1. Click on the assessment name which you wish to view.
+1. Click on the assessment name that you wish to view.
## Review an assessment **To view an assessment**: 1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure App Service assessment.
-2. Click on the assessment name which you wish to view.
+2. Click on the assessment name that you wish to view.
:::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="App Service assessment overview"::: 3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment. #### Azure App Service readiness
-This indicates the distribution of assessed web apps. You can drill-down to understand details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md)
+This card indicates the distribution of assessed web apps. You can drill down to understand details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md)
You can also review the recommended App Service SKU for migrating to Azure App Service. #### Azure App Service cost details
An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charg
:::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details"::: 1. Review Azure App Service readiness column in table, for the assessed web apps: 1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
- 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
+ 1. If there are non-critical compatibility issues, such as degraded or unsupported features that don't block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
- 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
-1. Review the recommended SKU for the web apps which is determined as per the matrix below:
+ 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment couldn't compute the readiness for that web app.
+1. Review the recommended SKU for the web apps that is determined as per the matrix below:
**Isolation required** | **Reserved instance** | **App Service plan/ SKU** | |
Unknown | No | No
### Review cost estimates The assessment summary shows the estimated monthly costs for hosting you web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan.
-To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. Number of web apps allocated to each plan instance is as per below table.
+To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. Number of web apps allocated to each plan instance is as per the table below:
**App Service plan** | **Web apps per App Service plan** |
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Run an assessment as follows:
- You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer. - You can apply Azure Hybrid Benefit on top of the Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying Reserved Capacity on top of the Pay-as-you-go offer and Dev/Test environment. - If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
- - In **Reserved Capacity**, specify whether you want to use reserved capacity for the SQL server after migration.
- - If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime".
- - If the Reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
+ - In **Savings options - Azure SQL MI and DB (PaaS)**, specify the reserved capacity savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in **offer/licensing program** setting to be able to use Reserved Instances. When you select any savings option other than 'None', the 'Discount (%)' and "VM uptime" settings are not applicable.
+ - In **Savings options - SQL Server on Azure VM (IaaS)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in **offer/licensing program** setting to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
- In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. - In **VM uptime**, specify the duration (days per month/hour per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you are aware that Azure VMs might not run continuously.
Run an assessment as follows:
| | Cores | 2 | 4 Memory | 8 GB | 16 GB-
+ - In **Optimization preference**, specify the preference for the recommended assessment report. Selecting *Minimize cost* would result in the Recommended assessment report recommending those deployment types that have the fewest migration issues and are most cost effective, whereas selecting *Modernize to PaaS* would result in the Recommended assessment report recommending PaaS (Azure SQL MI or DB) deployment types over IaaS (Azure VMs), wherever the SQL Server instance is ready for migration to PaaS, irrespective of cost.
1. In **Assessment settings** > **Azure SQL Managed Instance sizing**, - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance: - Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.
Run an assessment as follows:
- Select **Save** if you made changes.
- :::image type="content" source="./media/tutorial-assess-sql/view-all-inline.png" alt-text="Screenshot to save the assessment properties." lightbox="./media/tutorial-assess-sql/view-all-expanded.png":::
- 8. In **Assess Servers**, select **Next**. 9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment. 10. In **Select or create a group** > select **Create New** and specify a group name.
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
description: Tutorial:Containerize & migrate Java web applications to Azure Kube
ms.-+ Previously updated : 03/24/2022 Last updated : 01/04/2023 # Java web app containerization and migration to Azure Kubernetes Service
To troubleshoot any issues with the tool, you can look at the log files on the W
- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. [Learn more](./tutorial-app-containerization-java-app-service.md) - Containerizing ASP.NET web apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-app-containerization-aspnet-kubernetes.md) - Containerizing ASP.NET web apps and deploying them on Windows containers on Azure App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)
+- What are solutions for running Oracle WebLogic Server on the Azure Kubernetes Service? [Learn more](../virtual-machines/workloads/oracle/weblogic-aks.md)
+- Open Liberty and WebSphere Liberty on AKS. [Learn more](/azure/developer/java/ee/websphere-family#open-liberty-and-websphere-liberty-on-aks)
migrate Tutorial Assess Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aws.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment
Run an assessment as follows:
1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
- 1. In **Assess Servers** > click **Next**. 1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
migrate Tutorial Assess Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-gcp.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on the disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances) about VM resrved instances.
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
1. In **VM Size**: - In **Sizing criteria**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment.
Run an assessment as follows:
1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
- 1. In **Assess Servers**, click **Next**. 1. In **Select servers to assess** > **Assessment name**, specify a name for the assessment.
migrate Tutorial Assess Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-hyper-v.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment
Run an assessment as follows:
1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
1. In **Assess Servers** > click **Next**.
migrate Tutorial Assess Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-physical.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
+ - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties are not applicable.
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment
Run an assessment as follows:
1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
- 1. In **Assess Servers** > click **Next**. 1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Run an assessment as follows:
- You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer. - You can apply Azure Hybrid Benefit on top of the Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying Reserved Capacity on top of the Pay-as-you-go offer and Dev/Test environment. - If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
- - In **Reserved Capacity**, specify whether you want to use reserved capacity for the SQL server after migration.
- - If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime".
- - If the Reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
+ - In **Savings options - Azure SQL MI and DB (PaaS)**, specify the reserved capacity savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - When you select 'None', the Azure compute cost is based on the pay-as-you-go rate or on actual usage.
+ - You need to select pay-as-you-go in the **offer/licensing program** setting to be able to use Reserved Instances. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' settings aren't applicable.
+ - In **Savings options - SQL Server on Azure VM (IaaS)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provides additional flexibility and automated cost optimization. Ideally, post migration, you could use an Azure reservation and a savings plan at the same time (the reservation is consumed first), but in Azure Migrate assessments, you can see cost estimates for only one savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the pay-as-you-go rate or on actual usage.
+ - You need to select pay-as-you-go in the **offer/licensing program** setting to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties aren't applicable.
- In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. - In **VM uptime**, specify the duration (days per month/hour per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you are aware that Azure VMs might not run continuously.
Run an assessment as follows:
| | Cores | 2 | 4 Memory | 8 GB | 16 GB-
+ - In **Optimization preference**, specify the preference for the recommended assessment report. Selecting *Minimize cost* results in a recommended assessment report that favors the deployment types with the fewest migration issues and the lowest cost, whereas selecting *Modernize to PaaS* results in a report that recommends PaaS (Azure SQL MI or DB) deployment types over IaaS (Azure VMs) wherever the SQL Server instance is ready for migration to PaaS, irrespective of cost.
1. In **Assessment settings** > **Azure SQL Managed Instance sizing**,
- - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:
- Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. - Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads. - Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
Run an assessment as follows:
- Select **Save** if you made changes.
- :::image type="content" source="./media/tutorial-assess-sql/view-all-inline.png" alt-text="Screenshot to save the assessment properties." lightbox="./media/tutorial-assess-sql/view-all-expanded.png":::
-
-8. In **Assess Servers**, select **Next**.
-9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-10. In **Select or create a group** > select **Create New** and specify a group name.
+1. In **Assess Servers**, select **Next**.
+1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
+1. In **Select or create a group** > select **Create New** and specify a group name.
:::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers-inline.png" alt-text="Screenshot of Location of New group button." lightbox="./media/tutorial-assess-sql/assessment-add-servers-expanded.png":::
-11. Select the appliance and select the servers you want to add to the group and select **Next**.
-12. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
-13. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment. If you do not see the number populated, select **Refresh** to get the latest updates.
+1. Select the appliance and select the servers you want to add to the group and select **Next**.
+1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment. If you do not see the number populated, select **Refresh** to get the latest updates.
:::image type="content" source="./media/tutorial-assess-sql/assessment-sql-navigation.png" alt-text="Screenshot of Navigation to created assessment.":::
-15. Select the assessment name, which you wish to view.
+1. Select the assessment name, which you wish to view.
> [!NOTE] > As Azure SQL assessments are performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. If your discovery is still in progress, the readiness of your SQL instances will be marked as **Unknown**. Ideally, after you start discovery, **wait for the performance duration you specify (day/week/month)** to create or recalculate the assessment for a high-confidence rating.
migrate Tutorial Assess Vmware Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md
Run an assessment as follows:
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provides additional flexibility and automated cost optimization. Ideally, post migration, you could use an Azure reservation and a savings plan at the same time (the reservation is consumed first), but in Azure Migrate assessments, you can see cost estimates for only one savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the pay-as-you-go rate or on actual usage.
+ - You need to select pay-as-you-go in the offer/licensing program setting to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and 'VM uptime' properties aren't applicable.
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment
Run an assessment as follows:
1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
1. In **Assess Servers** > click **Next**.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
Run an assessment as follows:
| **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
- **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify *Discount (%)*.
+ - In **Savings options (compute)**, specify the savings option that you want the assessment to consider to help optimize your Azure compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provides additional flexibility and automated cost optimization. Ideally, post migration, you could use an Azure reservation and a savings plan at the same time (the reservation is consumed first), but in Azure Migrate assessments, you can see cost estimates for only one savings option at a time.
+ - When you select 'None', the Azure compute cost is based on the pay-as-you-go rate or on actual usage.
+ - You need to select pay-as-you-go in the offer/licensing program setting to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting isn't applicable.
**Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. **Currency** | The billing currency for your account. **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings.
- :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="Screenshot of App Service assessment properties.":::
- 1. In **Create assessment**, select **Next**. 1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment. 1. In **Select or create a group**, select **Create New** and specify a group name.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (January 2023)
+- Estimate savings with the [Azure Savings Plan for compute](https://azure.microsoft.com/pricing/offers/savings-plan-compute) (ASP) savings option in Azure Migrate assessments. ASP is now available as a savings option setting for Azure VM assessment, Azure SQL assessment, and Azure App Service assessment.
+- Azure Migrate is now supported in Sweden geography. [Learn more](migrate-support-matrix.md#public-cloud)
+ ## Update (December 2022) - General Availability: Perform software inventory and agentless dependency analysis at-scale for Hyper-V virtual machines and bare metal servers or servers running on other clouds like AWS, GCP etc. Learn more on how to perform [software inventory](how-to-discover-applications.md) and [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
Azure Migrate supports deployments in Azure Government.
A script-based installation is now available to set up the [Azure Migrate appliance](migrate-appliance.md): -- The script-based installation is an alternative to the .OVA (VMware)/VHD (Hyper-V) installation of the appliance.
+- The script-based installation is an alternative to the *.OVA* (VMware)/VHD (Hyper-V) installation of the appliance.
- It provides a PowerShell installer script that can be used to set up the appliance for VMware/Hyper-V on an existing machine running Windows Server 2016. ## Update (November 2019)
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL--++
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
Once you authenticate against the Active Directory, you retrieve a token. This t
- Azure Database for MySQL flexible server matches access tokens to the Azure Database for MySQL users using the user's unique Azure AD user ID instead of the username. This means that if an Azure AD user is deleted in Azure AD and a new user is created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name is added, the new user isn't able to connect with the existing user.
+> [!NOTE]
+> The subscription of an Azure Database for MySQL flexible server with Azure AD authentication enabled can't be transferred to another tenant or directory.
+ ## Next steps - To learn how to configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
As you configure Key Vault to use data encryption using a customer-managed key,
- Keep a copy of the customer-managed key in a secure place or escrow it to the escrow service. - If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
+> [!NOTE]
+> We recommend using a key vault in the same region as the server. If necessary, you can use a key vault in another region by specifying the **Enter key identifier** information.
+ ## Inaccessible customer-managed key condition When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons. - If you delete the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key and will move to _Inaccessible_ state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_. - If we delete the key from the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key and will move to _Inaccessible_ state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_.-- If the key stored in the Azure KeyVault expires, the key will become invalid, and the Azure Database for MySQL Flexible server will transition into _Inaccessible_ state. Extend the key expiry date using [CLI](/cli/azure/keyvault/key?view=azure-cli-latest#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the Flexible server _Available_.
+- If the key stored in the Azure KeyVault expires, the key will become invalid, and the Azure Database for MySQL Flexible server will transition into _Inaccessible_ state. Extend the key expiry date using [CLI](/cli/azure/keyvault/key#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the Flexible server _Available_.
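If the key has expired, a minimal Azure CLI sketch for extending the expiry date might look like the following; the vault and key names are hypothetical placeholders.

```bash
# Hypothetical vault and key names; extends the key expiry so the flexible
# server can regain access to its customer-managed key.
az keyvault key set-attributes \
  --vault-name contoso-mysql-kv \
  --name mysql-data-encryption-key \
  --expires "2025-12-31T23:59:59Z"
```

After the expiry is extended, revalidate the data encryption on the server as described above to return it to the _Available_ state.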
## Accidental key access revocation from Key Vault
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
It isn't supported to configure Data-in replication for servers that have high a
### Filter
-Modifying the parameter `replicate_wild_ignore_table` used to create replication filter for tables is currently not supported for Azure Database for MySQL -Flexible server.
+The `replicate_wild_ignore_table` parameter is used to create a replication filter for tables on the replica server. To modify this parameter from the Azure portal, navigate to the Azure Database for MySQL - Flexible Server instance that's used as the replica and select **Server parameters** to view/edit the `replicate_wild_ignore_table` parameter.
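As a sketch of the non-portal route, and assuming the parameter can also be changed from the Azure CLI, the server parameter could be set as shown below; the resource names and filter pattern are hypothetical.

```bash
# Hypothetical resource names and filter pattern; sets the replication filter
# server parameter on the Azure Database for MySQL - Flexible Server replica.
az mysql flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name myreplicaserver \
  --name replicate_wild_ignore_table \
  --value "salesdb.staging\_%"
```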
### Requirements
mysql Concepts Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-out-replication.md
Data-out replication isn't supported on Azure Database for MySQL - Flexible Serv
You must use the replication filter to filter out Azure custom tables on the replica server. This can be achieved by setting Replicate_Wild_Ignore_Table = "mysql.\_\_%" to filter the Azure MySQL internal tables on the replica. To modify this parameter from the Azure portal, navigate to Azure Database for MySQL Flexible server and select "Server parameters" to view/edit the Replicate_Wild_Ignore_Table parameter.
-Refer to the following general guidance on the replication filter:
-- MySQL 5.7 Reference Manual - 13.4.2.2 CHANGE REPLICATION FILTER Statement-- MySQL 5.7 Reference Manual - 16.1.6.3 Replica Server Options and Variables-- MySQL 8.0 Reference Manual - 17.2.5.4 Replication Channel Based Filters.
+Refer to the following general guidance on the replication filter in the MySQL Reference Manual:
+- MySQL 5.7 Reference Manual - [13.4.2.2 CHANGE REPLICATION FILTER Statement](https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html)
+- MySQL 5.7 Reference Manual - [16.1.6.3 Replica Server Options and Variables](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table)
+- MySQL 8.0 Reference Manual - [17.2.5.4 Replication Channel Based Filters](https://dev.mysql.com/doc/refman/8.0/en/replication-rules-channel-based-filters.html)
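For an external replica used with data-out replication, the equivalent filter is applied with the `CHANGE REPLICATION FILTER` statement described in the references above. A minimal sketch using the mysql command-line client, with a hypothetical host and user:

```bash
# Hypothetical host and user; run against the external (non-Azure) replica.
# On MySQL 5.7, use STOP SLAVE / START SLAVE instead of STOP REPLICA / START REPLICA.
mysql -h replica.example.com -u repl_admin -p -e "
  STOP REPLICA;
  CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('mysql.\\_\\_%');
  START REPLICA;
"
```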
++++ ## Next steps
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
description: This article describes the compute and storage options in Azure Dat
--++ Last updated 05/24/2022
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-supported-versions.md
description: Learn which versions of the MySQL server are supported in the Azure
--++ Last updated 05/24/2022
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQLFlexible Server'
+ Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL Flexible Server'
description: Learn how to use Java and JDBC with an Azure Database for MySQL Flexible Server database.
This file is an [Apache Maven](https://maven.apache.org/) file that configures y
### Prepare a configuration file to connect to Azure Database for MySQL
-Run the following script in the project root directory to create a *src/main/resources/application.properties* file and add configuration details:
+Run the following script in the project root directory to create a *src/main/resources/database.properties* file and add configuration details:
#### [Passwordless connection (Recommended)](#tab/passwordless) ```bash
-mkdir -p src/main/resources && touch src/main/resources/application.properties
+mkdir -p src/main/resources && touch src/main/resources/database.properties
-cat << EOF > src/main/resources/application.properties
+cat << EOF > src/main/resources/database.properties
url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME} EOF ```
+> [!NOTE]
+> If you're using the `MysqlConnectionPoolDataSource` class as the data source in your application, remove `defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin` from the URL, as shown in the following script.
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/database.properties
+
+cat << EOF > src/main/resources/database.properties
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}
+EOF
+```
+ #### [Password](#tab/password) ```bash
-mkdir -p src/main/resources && touch src/main/resources/application.properties
+mkdir -p src/main/resources && touch src/main/resources/database.properties
-cat << EOF > src/main/resources/application.properties
+cat << EOF > src/main/resources/database.properties
url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC user=${AZ_MYSQL_NON_ADMIN_USERNAME} password=${AZ_MYSQL_NON_ADMIN_PASSWORD}
public class DemoApplication {
public static void main(String[] args) throws Exception { log.info("Loading application properties"); Properties properties = new Properties();
- properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+ properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("database.properties"));
log.info("Connecting to the database"); Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
public class DemoApplication {
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
-This Java code will use the *application.properties* and the *schema.sql* files that you created earlier, in order to connect to the MySQL server and create a schema that will store your data.
+This Java code will use the *database.properties* and the *schema.sql* files that you created earlier, in order to connect to the MySQL server and create a schema that will store your data.
In this file, you can see that we commented methods to insert, read, update and delete data: you'll code those methods in the rest of this article, and you'll be able to uncomment them one after each other. > [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+> The database credentials are stored in the *user* and *password* properties of the *database.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
> [!NOTE] > The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver specific command to destroy an internal thread when shutting down the application.
az group delete \
## Next steps > [!div class="nextstepaction"]
-> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](../concepts-migrate-dump-restore.md)
+> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](../concepts-migrate-dump-restore.md)
mysql How To Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-out-replication.md
Restore the dump file to the server created in the Azure Database for MySQL Flex
1. Filtering
- Suppose data-out replication is being set up between Azure MySQL and an external MySQL on other cloud providers or on-premises. In that case, you must use the replication filter to filter out Azure custom tables. This can be achieved by setting Replicate_Wild_Ignore_Table = "mysql.\_\_%" to filter the Azure mysql internal tables. To modify this parameter from the Azure portal, navigate to Azure Database for MySQL Flexible server used as source and select "Server parameters" to view/edit the "Replicate_Wild_Ignore_Table" parameter. Refer to [MySQL :: MySQL 5.7 Reference Manual :: 13.4.2.2 CHANGE REPLICATION FILTER Statement](https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html) for more details on modifying this server parameter.
+ Suppose data-out replication is being set up between Azure MySQL and an external MySQL on other cloud providers or on-premises. In that case, you must use the replication filter to filter out Azure custom tables on the replica server. This can be achieved by setting Replicate_Wild_Ignore_Table = "mysql.\_\_%" to filter the Azure mysql internal tables. Refer to [MySQL :: MySQL 5.7 Reference Manual :: 13.4.2.2 CHANGE REPLICATION FILTER Statement](https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html) for more details on modifying this server parameter.
1. Set the replica server by connecting to it and opening the MySQL shell on the replica server. From the prompt, run the following operation, which configures several MySQL replication settings at the same time:
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-server-portal.md
description: This article describes how you can restart an Azure Database for My
--++ Last updated 10/26/2020
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
description: This article describes how to restart/stop/start operations in Azur
--++ Last updated 03/30/2021
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md
description: This article describes how to perform restore operations in Azure D
--++ Last updated 07/26/2022
mysql Tutorial Logic Apps With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-logic-apps-with-mysql.md
+
+ Title: Create a Logic app with Azure Database for MySQL Flexible Server
+description: Create a Logic app with Azure Database for MySQL Flexible Server
+++++ Last updated : 12/15/2022++
+# Tutorial: Create a Logic app with Azure Database for MySQL Flexible Server
++
+This tutorial shows how to create an automated workflow by using Azure Logic Apps with Azure Database for MySQL Flexible Server.
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free).
+
+- Create an Azure Database for MySQL Flexible server using the [Azure portal](./quickstart-create-server-portal.md) or [Azure CLI](./quickstart-create-server-cli.md) if you don't have one.
+- Get the [inbound](../../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service in the Azure region where you create your logic app workflow.
+- Configure the networking settings of the Azure Database for MySQL flexible server to make sure the Logic Apps IP addresses have access to it (see the firewall rule sketch after the SQL script below). If you're using Azure App Service or Azure Kubernetes Service, enable the **Allow public access from any Azure service within Azure to this server** setting in the Azure portal.
+- Populate the database server with a new database `orderdb` and a table `orders` using the following SQL script:
+
+```sql
+CREATE DATABASE `orderdb`;
+USE `orderdb`;
+CREATE TABLE `orders` (
+ `orderNumber` int(11) NOT NULL,
+ `orderDate` date NOT NULL,
+ `status` varchar(15) NOT NULL,
+ PRIMARY KEY (`orderNumber`)
+);
+```
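The networking prerequisite above can also be satisfied with a firewall rule created from the Azure CLI. A minimal sketch with hypothetical resource names; the documentation-only range 203.0.113.0/24 stands in for the Logic Apps outbound IP addresses of your region.

```bash
# Hypothetical server name; replace the IP range with the Logic Apps outbound
# addresses for the region where the logic app runs.
az mysql flexible-server firewall-rule create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --rule-name allow-logic-apps-outbound \
  --start-ip-address 203.0.113.0 \
  --end-ip-address 203.0.113.255
```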
+
+[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+ ## Create a Consumption logic app resource
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+2. In the Azure search box, enter `logic apps`, and select **Logic apps**.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/find-select-logic-apps.png" alt-text="Screenshot that shows Azure portal search box with logic apps":::
+
+3. On the **Logic apps** page, select **Add**.
+
+4. On the **Create Logic App** pane, on the **Basics** tab, provide the following basic information about your logic app:
+ - **Subscription**: Your Azure subscription name.
+ - **Resource Group**: The Azure resource group where you create your logic app and related resources.
+ - **Logic App name**: Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`).
+
+5. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** so that you view only the settings that apply to the Consumption plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
+
+6. Now continue making the following selections:
+
+ - **Region**: The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure.
+ - **Enable log analytics**: This option appears and applies only when you select the **Consumption** logic app type. Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection.
+
+7. When you're ready, select **Review + Create**.
+
+8. On the validation page that appears, confirm all the information that you provided, and select **Create**.
+
+## Select HTTP request trigger template
+Follow this section to create a new logic app starting with a **When an HTTP Request is received** trigger to perform a data operation on a MySQL database.
+
+1. After Azure successfully deploys your app, select **Go to resource**. Or, find and select your logic app resource by typing the name in the Azure search box.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/go-to-new-logic-app-resource.png" alt-text="Screenshot showing the resource deployment page and selected button" :::
+
+2. Scroll down past the video and the section named **Start with a common trigger**.
+
+3. Select **When an HTTP Request is received**.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/add-http-request-trigger.png" alt-text="Screenshot showing the template gallery and selected template":::
+
+4. Add a sample payload in JSON:
+
+ ```json
+ {
+ "orderNumber":"100",
+ "orderDate":"2023-01-01",
+ "orderStatus":"Shipped"
+ }
+ ```
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/add-http-sample-payload.png" alt-text="Screenshot showing sample payload":::
+
+5. An HTTP request body JSON schema is generated from the sample payload.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/https-request-body-payload-generated.png" alt-text="Screenshot showing sample payload is generated":::
+
+## Add a MySQL database action
+You can add an action as the next step after the HTTP request trigger to run subsequent operations in your workflow. You can add an action to get, insert, update, or delete data in the MySQL database. For this tutorial, we'll insert a new row into the `orders` table.
+
+1. Add a **New Step** in the workflow.
+
+2. Search for the **Azure Database for MySQL** connector.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/search-for-azure-db-for-mysql.png" alt-text="Screenshot searching for azure database for mysql":::
+
+3. View all the actions for Azure database for MySQL connector.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/azure-db-for-mysql-connector-actions.png" alt-text="Screenshot Azure database for mysql action listed":::
+
+4. Select the **Insert Row** action. Select **Change connection** to add a new connection.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/insert-row-action-mysql-database.png" alt-text="Screenshot Insert row action for Azure database for MySQL":::
+
+5. Add a new connection to the existing Azure Database for MySQL database.
+
+ :::image type="content" source="./media/tutorial-logic-apps-with-mysql/azure-mysql-database-add-connection.png" alt-text="Screenshot add new connection for Azure database for MySQL":::
+
+## Run your workflow
+Select **Run Trigger** to execute the workflow and test whether it actually inserts the row into the table. You can use any MySQL client to check that the row was inserted; a minimal sketch follows.
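The following check uses the mysql command-line client with a hypothetical server name and admin user.

```bash
# Hypothetical server and admin user; verifies that the workflow inserted the row.
mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED \
  -e "SELECT orderNumber, orderDate, status FROM orderdb.orders WHERE orderNumber = 100;"
```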
+
+## Next steps
+- [Create Schedule based workflows](../../logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md)
+- [Create approval based workflows](../../logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md)
+
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## December 2022
+- **New Replication Metrics**
+
+ You can now have better visibility into replication performance and health through newly exposed replication status metrics, based on the different replication types offered by Azure Database for MySQL - Flexible Server. [Learn More](./concepts-monitoring.md#replication-metrics)
++
+- **Support for Data-out Replication**
+
+ Azure Database for MySQL - Flexible Server now supports Data-out replication. This capability allows customers to synchronize data out of Azure Database for MySQL - Flexible Server (source) to another MySQL server (replica), which can be on-premises, in virtual machines, or a database service hosted outside of Azure. Learn more about [How to configure Data-out Replication](how-to-data-out-replication.md).
+++ ## November 2022 - **Azure Active Directory authentication for Azure Database for MySQL – Flexible Server (General Availability)**
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-performance-best-practices.md
description: This article describes some recommendations to monitor and tune per
--++ Last updated 07/22/2022
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
Please note that management operations, such as adding new users, are only suppo
- If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins. - Azure Database for MySQL matches access tokens to the Azure Database for MySQL user using the user's unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name added, the new user will not be able to connect with the existing user.
+> [!NOTE]
+> The subscription of an Azure Database for MySQL server with Azure AD authentication enabled can't be transferred to another tenant or directory.
+ ## Next steps - To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Configure and sign in with Azure AD for Azure Database for MySQL](how-to-configure-sign-in-azure-ad-authentication.md).
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md
The primary means of controlling the backup storage cost is by setting the appro
## Restore
-In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server.
+In Azure Database for MySQL, performing a restore creates a new server from the original server's backups and restores all databases contained in the server. Restore is currently not supported if the original server is in a stopped state.
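For example, a point-in-time restore with the Azure CLI might look like the following sketch; the resource names and timestamp are hypothetical.

```bash
# Hypothetical server names; creates a new server from the source server's
# backups as of the given UTC point in time.
az mysql server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2023-01-05T13:10:00Z"
```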
There are two types of restore available:
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for MySQL
description: Describes the connectivity architecture for your Azure Database for MySQL server. --++ Last updated 06/20/2022
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md
Azure Database for MySQL ΓÇô Single Server supports the following the backend st
> Basic storage does not provide an IOPS guarantee. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. ### Basic storage
-Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage leverages Azure standard storage in the backend where iops provisioned are not guaranteed and latency is variable. Basic tier is best suited for workloads that require light compute, low cost and I/O performance for development or small-scale infrequently used applications.
+Basic storage is the backend storage supporting Basic pricing tier servers. Basic storage uses Azure standard storage in the backend, where provisioned IOPS aren't guaranteed and latency is variable. The Basic tier is best suited for workloads that require light compute and I/O performance at low cost, such as development or small-scale, infrequently used applications.
### General purpose storage General purpose storage is the backend storage supporting General Purpose and Memory Optimized tier server. In General Purpose storage, the IOPS scale with the provisioned storage size in a 3:1 ratio. There are two generations of general purpose storage as described below:
General purpose storage v2 is supported in the following Azure regions:
| West US | :heavy_check_mark: | | West US 2 | :heavy_check_mark: | | West Europe | :heavy_check_mark: |
-| Central India* | :heavy_check_mark: |
+| Central India | :heavy_check_mark: |
| France Central* | :heavy_check_mark: | | UAE North* | :heavy_check_mark: | | South Africa North* | :heavy_check_mark: |
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for MySQL
description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL. --++ Last updated 06/20/2022
mysql Concepts Troubleshooting Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-troubleshooting-best-practices.md
description: This article describes some recommendations for troubleshooting you
--++ Last updated 07/22/2022
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
Replace the placeholders with the following values, which are used throughout th
- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure. - `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
### [Password](#tab/password)
This file is an [Apache Maven](https://maven.apache.org/) file that configures y
### Prepare a configuration file to connect to Azure Database for MySQL
-Run the following script in the project root directory to create a *src/main/resources/application.properties* file and add configuration details:
+Run the following script in the project root directory to create a *src/main/resources/database.properties* file and add configuration details:
#### [Passwordless connection (Recommended)](#tab/passwordless) ```bash
-mkdir -p src/main/resources && touch src/main/resources/application.properties
+mkdir -p src/main/resources && touch src/main/resources/database.properties
-cat << EOF > src/main/resources/application.properties
+cat << EOF > src/main/resources/database.properties
url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} EOF ``` +
+> [!NOTE]
+> If you're using the `MysqlConnectionPoolDataSource` class as the data source in your application, remove `defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin` from the URL, as shown in the following script.
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/database.properties
+
+cat << EOF > src/main/resources/database.properties
+url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME}
+EOF
+```
#### [Password](#tab/password) ```bash
-mkdir -p src/main/resources && touch src/main/resources/application.properties
+mkdir -p src/main/resources && touch src/main/resources/database.properties
-cat << EOF > src/main/resources/application.properties
+cat << EOF > src/main/resources/database.properties
url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} password=${AZ_MYSQL_NON_ADMIN_PASSWORD}
public class DemoApplication {
public static void main(String[] args) throws Exception { log.info("Loading application properties"); Properties properties = new Properties();
- properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+ properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("database.properties"));
log.info("Connecting to the database"); Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
public class DemoApplication {
} ```
-This Java code will use the *application.properties* and the *schema.sql* files that you created earlier. After connecting to the MySQL server, you can create a schema to store your data.
+This Java code will use the *database.properties* and the *schema.sql* files that you created earlier. After connecting to the MySQL server, you can create a schema to store your data.
In this file, you can see that we commented methods to insert, read, update and delete data. You'll implement those methods in the rest of this article, and you'll be able to uncomment them one after each other. > [!NOTE]
-> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+> The database credentials are stored in the *user* and *password* properties of the *database.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
> [!NOTE] > The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver command to destroy an internal thread when shutting down the application. You can safely ignore this line.
mysql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md
Title: Access slow query logs - Azure portal - Azure Database for MySQL
description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal. --++ Last updated 06/20/2022
mysql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-manage-server-portal.md
Title: Manage server - Azure portal - Azure Database for MySQL
description: Learn how to manage an Azure Database for MySQL server from the Azure portal. --++ Last updated 06/20/2022
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md
Title: Major version upgrade in Azure Database for MySQL - Single Server
description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server --++ Last updated 06/20/2022
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Database for MySQL
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 11/04/2022 Last updated : 01/05/2023 # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md
The main differences between these options are listed in the following table:
| Cross-region support (Geo-replication) | Yes | Not supported | User Managed | | Hybrid scenarios | Supported with [Data-in Replication](./concepts-data-in-replication.md)| Supported with [Data-in Replication](../flexible-server/concepts-data-in-replication.md) | User Managed | | Gtid support for data-in replication | Supported | Not Supported | User Managed |
-| Data-out replication | Not Supported | In preview | Supported |
+| Data-out replication | Not Supported | Supported | Supported |
| [**Backup and Recovery**](../flexible-server/concepts-backup-restore.md) | | | | | Automated backups | Yes | Yes | No | | Backup retention | 7-35 days | 1-35 days | User Managed |
network-watcher Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor.md
Title: 'Tutorial: Monitor network communication between two virtual machines using the Azure portal' description: In this tutorial, you learn how to monitor network communication between two virtual machines with Azure Network Watcher's connection monitor capability. -+ tags: azure-resource-manager Last updated 10/28/2022-+ # Customer intent: I need to monitor communication between a VM and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/data-residency.md
Title: Data residency for Azure Network Watcher | Microsoft Docs
description: This article will help you understand data residency for the Azure Network Watcher service. documentationcenter: na-+ editor:
na Last updated 06/16/2021-+
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
description: In this tutorial, learn how to diagnose a communication problem between an Azure virtual network connected to an on-premises, or other virtual network, through an Azure virtual network gateway, using Network Watcher's VPN diagnostics capability. documentationcenter: na-+ # Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different network.
na Last updated 01/07/2021-+
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
description: In this article, you learn how to use Azure CLI to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. ms.assetid:
network-watcher Last updated 03/18/2022-+
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
description: In this article, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ editor: '' tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
network-watcher Last updated 01/07/2021-+
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
description: In this tutorial, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ editor: '' tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
network-watcher Last updated 01/07/2021-+
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
Title: Analyze Azure network security group flow logs - Graylog | Microsoft Docs
description: Learn how to manage and analyze network security group flow logs in Azure using Network Watcher and Graylog. documentationcenter: na-+ editor: '' tags: azure-resource-manager
na Last updated 07/03/2021-+
network-watcher Network Watcher Connectivity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-cli.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure CLI. documentationcenter: na---+ na Last updated 01/07/2021-+ # Troubleshoot connections with Azure Network Watcher using the Azure CLI
network-watcher Network Watcher Connectivity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-powershell.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using PowerShell. documentationcenter: na--+ na Last updated 01/07/2021-+
network-watcher Network Watcher Connectivity Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-rest.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure REST API. documentationcenter: na-+ na
network-watcher Network Watcher Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-create.md
Title: Create an Azure Network Watcher instance description: Learn how to create or delete an Azure Network Watcher using the Azure portal, PowerShell, the Azure CLI or the REST API. -+ ms.assetid: b1314119-0b87-4f4d-b44c-2c4d0547fb76 Last updated 12/30/2022-+ ms.devlang: azurecli
network-watcher Network Watcher Deep Packet Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-deep-packet-inspection.md
Title: Packet inspection with Azure Network Watcher | Microsoft Docs
description: This article describes how to use Network Watcher to perform deep packet inspection collected from a VM documentationcenter: na-+ ms.assetid: 7b907d00-9c35-40f5-a61e-beb7b782276f na Last updated 01/07/2021-+ # Packet inspection with Azure Network Watcher
-Using the packet capture feature of Network Watcher, you can initiate and manage captures sessions on your Azure VMs from the portal, PowerShell, CLI, and programmatically through the SDK and REST API. Packet capture allows you to address scenarios that require packet level data by providing the information in a readily usable format. Leveraging freely available tools to inspect the data, you can examine communications sent to and from your VMs and gain insights into your network traffic. Some example uses of packet capture data include: investigating network or application issues, detecting network misuse and intrusion attempts, or maintaining regulatory compliance. In this article, we show how to open a packet capture file provided by Network Watcher using a popular open source tool. We will also provide examples showing how to calculate a connection latency, identify abnormal traffic, and examine networking statistics.
+Using the packet capture feature of Network Watcher, you can initiate and manage capture sessions on your Azure VMs from the portal, PowerShell, CLI, and programmatically through the SDK and REST API. Packet capture allows you to address scenarios that require packet level data by providing the information in a readily usable format. Leveraging freely available tools to inspect the data, you can examine communications sent to and from your VMs and gain insights into your network traffic. Some example uses of packet capture data include: investigating network or application issues, detecting network misuse and intrusion attempts, or maintaining regulatory compliance. In this article, we show how to open a packet capture file provided by Network Watcher using a popular open source tool. We'll also provide examples showing how to calculate a connection latency, identify abnormal traffic, and examine networking statistics.
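As one example of the CLI route, a capture session could be started with a sketch like the following, assuming the Network Watcher VM extension is installed on the target VM; all resource names are hypothetical.

```bash
# Hypothetical resource names; starts a 60-second capture session on a VM and
# stores the resulting .cap file in the given storage account.
az network watcher packet-capture create \
  --resource-group myresourcegroup \
  --vm myvm \
  --name mycapture \
  --storage-account mystorageaccount \
  --time-limit 60
```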
## Before you begin
In this scenario, you:
In this scenario, we show how to view the initial Round Trip Time (RTT) of a Transmission Control Protocol (TCP) conversation occurring between two endpoints.
-When a TCP connection is established, the first three packets sent in the connection follow a pattern commonly referred to as the three-way handshake. By examining the first two packets sent in this handshake, an initial request from the client and a response from the server, we can calculate the latency when this connection was established. This latency is referred to as the Round Trip Time (RTT). For more information on the TCP protocol and the three-way handshake refer to the following resource. [https://support.microsoft.com/en-us/help/172983/explanation-of-the-three-way-handshake-via-tcp-ip](https://support.microsoft.com/en-us/help/172983/explanation-of-the-three-way-handshake-via-tcp-ip)
+When a TCP connection is established, the first three packets sent in the connection follow a pattern commonly referred to as the three-way handshake. By examining the first two packets sent in this handshake, an initial request from the client and a response from the server, we can calculate the latency when this connection was established. This latency is referred to as the Round Trip Time (RTT). For more information on the TCP protocol and the three-way handshake, refer to the following resource. [https://support.microsoft.com/en-us/help/172983/explanation-of-the-three-way-handshake-via-tcp-ip](https://support.microsoft.com/en-us/help/172983/explanation-of-the-three-way-handshake-via-tcp-ip)
### Step 1
Load the **.cap** file from your packet capture. This file can be found in the b
### Step 3
-To view the initial Round Trip Time (RTT) in TCP conversations, we will only be looking at the first two packets involved in the TCP handshake. We will be using the first two packets in the three-way handshake, which are the [SYN], [SYN, ACK] packets. They are named for flags set in the TCP header. The last packet in the handshake, the [ACK] packet, will not be used in this scenario. The [SYN] packet is sent by the client. Once it is received the server sends the [ACK] packet as an acknowledgment of receiving the SYN from the client. Leveraging the fact that the server's response requires very little overhead, we calculate the RTT by subtracting the time the [SYN, ACK] packet was received by the client by the time [SYN] packet was sent by the client.
+To view the initial Round Trip Time (RTT) in TCP conversations, we'll only be looking at the first two packets involved in the TCP handshake. We'll be using the first two packets in the three-way handshake, which are the [SYN], [SYN, ACK] packets. They're named for flags set in the TCP header. The last packet in the handshake, the [ACK] packet, won't be used in this scenario. The [SYN] packet is sent by the client. Once it's received, the server sends the [SYN, ACK] packet as an acknowledgment of receiving the SYN from the client. Leveraging the fact that the server's response requires very little overhead, we calculate the RTT by subtracting the time the [SYN] packet was sent by the client from the time the [SYN, ACK] packet was received by the client.
Using WireShark, this value is calculated for us.
-To more easily view the first two packets in the TCP three-way handshake, we will utilize the filtering capability provided by WireShark.
+To more easily view the first two packets in the TCP three-way handshake, we'll utilize the filtering capability provided by WireShark.
To apply the filter in WireShark, expand the "Transmission Control Protocol" segment of a [SYN] packet in your capture and examine the flags set in the TCP header.
-Since we are looking to filter on all [SYN] and [SYN, ACK] packets, under flags confirm that the Syn bit is set to 1, then right click on the Syn bit -> Apply as Filter -> Selected.
+Since we're looking to filter on all [SYN] and [SYN, ACK] packets, under flags, confirm that the Syn bit is set to 1, then right-select the Syn bit > Apply as Filter > Selected.
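If you'd rather type the filter than build it from the packet details, the equivalent WireShark display filter, which matches any packet that has the SYN flag set (both [SYN] and [SYN, ACK] packets), is:

```
tcp.flags.syn == 1
```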
![figure 7][7] ### Step 4
-Now that you have filtered the window to only see packets with the [SYN] bit set, you can easily select conversations you are interested in to view the initial RTT. A simple way to view the RTT in WireShark simply click the dropdown marked ΓÇ£SEQ/ACKΓÇ¥ analysis. You will then see the RTT displayed. In this case, the RTT was 0.0022114 seconds, or 2.211 ms.
+Now that you've filtered the window to only see packets with the [SYN] bit set, you can easily select conversations you're interested in to view the initial RTT. A simple way to view the RTT in WireShark is to select the dropdown marked "SEQ/ACK analysis". You'll then see the RTT displayed. In this case, the RTT was 0.0022114 seconds, or 2.211 ms.
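If you prefer the command line, tshark (the terminal version of WireShark) can list the same handshake packets so the two timestamps can be subtracted manually. This is only a sketch; it assumes tshark is installed and that the capture file is named *packetcapture.cap*.

```
# Print the capture-relative time, source, destination, and TCP flags for every packet
# that has the SYN flag set; subtracting the [SYN] time from the [SYN, ACK] time gives the RTT
tshark -r packetcapture.cap \
  -Y "tcp.flags.syn == 1" \
  -T fields -e frame.time_relative -e ip.src -e ip.dst -e tcp.flags
```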
![figure 8][8] ## Unwanted protocols
-You can have many applications running on a virtual machine instance you have deployed in Azure. Many of these applications communicate over the network, perhaps without your explicit permission. Using packet capture to store network communication, we can investigate how application are talking on the network and look for any issues.
+You can have many applications running on a virtual machine instance you've deployed in Azure. Many of these applications communicate over the network, perhaps without your explicit permission. Using packet capture to store network communication, we can investigate how applications are talking on the network and look for any issues.
In this example, we review a previously run packet capture for unwanted protocols that may indicate unauthorized communication from an application running on your machine. ### Step 1
-Using the same capture in the previous scenario click **Statistics** > **Protocol Hierarchy**
+Using the same capture in the previous scenario, select **Statistics** > **Protocol Hierarchy**.
![protocol hierarchy menu][2]
The protocol hierarchy window appears. This view provides a list of all the prot
![protocol hierarchy opened][3]
-As you can see in the following screen capture, there was traffic using the BitTorrent protocol, which is used for peer to peer file sharing. As an administrator you do not expect to see BitTorrent traffic on this particular virtual machines. Now you aware of this traffic, you can remove the peer to peer software that installed on this virtual machine, or block the traffic using Network Security Groups or a Firewall. Additionally, you may elect to run packet captures on a schedule, so you can review the protocol use on your virtual machines regularly. For an example on how to automate network tasks in azure visit [Monitor network resources with azure automation](network-watcher-monitor-with-azure-automation.md)
+As you can see in the following screen capture, there was traffic using the BitTorrent protocol, which is used for peer-to-peer file sharing. As an administrator you don't expect to see BitTorrent traffic on this particular virtual machine. Now that you're aware of this traffic, you can remove the peer-to-peer software that's installed on this virtual machine, or block the traffic using Network Security Groups or a Firewall. Additionally, you may elect to run packet captures on a schedule, so you can review the protocol use on your virtual machines regularly. For an example on how to automate network tasks in Azure, visit [Monitor network resources with Azure automation](network-watcher-monitor-with-azure-automation.md).
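As a hypothetical illustration of the NSG option, a rule like the following sketch could deny the unwanted traffic with the Azure CLI. The resource group and NSG name are placeholders, and 6881-6889 is only a commonly cited default BitTorrent port range; match the rule to the ports you actually observed in the capture.

```
# Deny outbound TCP traffic on a commonly used BitTorrent port range (placeholder values)
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name DenyBitTorrentOutbound \
  --priority 200 \
  --direction Outbound \
  --access Deny \
  --protocol Tcp \
  --destination-port-ranges 6881-6889
```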
## Finding top destinations and ports
-Understanding the types of traffic, the endpoints, and the ports communicated over is an important when monitoring or troubleshooting applications and resources on your network. Utilizing a packet capture file from above, we can quickly learn the top destinations our VM is communicating with and the ports being utilized.
+Understanding the types of traffic, the endpoints, and the ports communicated over is important when monitoring or troubleshooting applications and resources on your network. Utilizing a packet capture file, we can quickly learn the top destinations our VM is communicating with and the ports being utilized.
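If you prefer the command line to the portal steps that follow, tshark can produce a similar summary. This sketch assumes tshark is installed and that the capture file is named *packetcapture.cap*.

```
# Print a summary of TCP conversations (addresses, ports, packet and byte counts)
# without dumping every packet
tshark -r packetcapture.cap -q -z conv,tcp
```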
### Step 1
-Using the same capture in the previous scenario click **Statistics** > **IPv4 Statistics** > **Destinations and Ports**
+Using the same capture in the previous scenario, select **Statistics** > **IPv4 Statistics** > **Destinations and Ports**.
![packet capture window][4]
Using the same capture in the previous scenario click **Statistics** > **IPv4 St
As we look through the results, one line stands out: there were multiple connections on port 111. The most used port was 3389, which is remote desktop, and the remaining ports are RPC dynamic ports.
-While this traffic may mean nothing, it is a port that was used for many connections and is unknown to the administrator.
+While this traffic may be harmless, it's a port that was used for many connections and is unknown to the administrator.
![figure 5][5] ### Step 3
-Now that we have determined an out of place port we can filter our capture based on the port.
+Now that we've determined an out-of-place port, we can filter our capture based on the port.
The filter in this scenario would be:
The filter in this scenario would be:
tcp.port == 111 ```
-We enter the filter text from above in the filter textbox and hit enter.
+We enter the filter text in the filter textbox and press Enter.
![figure 6][6]
-From the results, we can see all the traffic is coming from a local virtual machine on the same subnet. If we still donΓÇÖt understand why this traffic is occurring, we can further inspect the packets to determine why it is making these calls on port 111. With this information we can take the appropriate action.
+From the results, we can see all the traffic is coming from a local virtual machine on the same subnet. If we still don't understand why this traffic is occurring, we can further inspect the packets to determine why it's making these calls on port 111. With this information, we can take the appropriate action.
## Next steps
-Learn about the other diagnostic features of Network Watcher by visiting [Azure network monitoring overview](network-watcher-monitoring-overview.md)
+Learn about the other diagnostic features of Network Watcher by visiting [Azure network monitoring overview](network-watcher-monitoring-overview.md).
[1]: ./media/network-watcher-deep-packet-inspection/figure1.png [2]: ./media/network-watcher-deep-packet-inspection/figure2.png
network-watcher Network Watcher Delete Nsg Flow Log Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-delete-nsg-flow-log-blobs.md
Title: Delete storage blobs for network security group flow logs in Azure Networ
description: This article explains how to delete the network security group flow log storage blobs that are outside their retention policy period in Azure Network Watcher. documentationcenter: na-+ editor:
na Last updated 01/07/2021-+
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
description: This article describes how to diagnose on-premises connectivity via VPN gateway with Azure Network Watcher resource troubleshooting. documentationcenter: na-+ ms.assetid: aeffbf3d-fd19-4d61-831d-a7114f7534f9 na Last updated 01/20/2021-+
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
description: This article describes how to use Azure Network Watcher and open source tools to perform network intrusion detection documentationcenter: na-+ ms.assetid: 0f043f08-19e1-4125-98b0-3e335ba69681 na Last updated 09/15/2022-+
network-watcher Network Watcher Monitor With Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitor-with-azure-automation.md
description: This article describes how to diagnose On-premises connectivity with Azure Automation and Network Watcher documentationcenter: na-+ na Last updated 11/20/2020 -+
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
Title: Azure Network Watcher | Microsoft Docs
description: Learn about Azure Network Watcher's monitoring, diagnostics, metrics, and logging capabilities for resources in a virtual network. documentationcenter: na-+ # Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
network-watcher Network Watcher Network Configuration Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-network-configuration-diagnostics-overview.md
Title: Introduction to Network Configuration Diagnostics in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher - Network Configuration Diagnostics
+description: This page provides an overview of the Network Watcher - NSG Diagnostics
documentationcenter: na-+ na Previously updated : 03/18/2022 Last updated : 01/04/2023 -+
-# Introduction to Network Configuration Diagnostics in Azure Network Watcher
+# Introduction to NSG Diagnostics in Azure Network Watcher
-The Network Configuration Diagnostic tool helps customers understand which traffic flows will be allowed or denied in your Azure Virtual Network along with detailed information for debugging. It can help you in understanding if your NSG rules are configured correctly.
+The NSG Diagnostics tool helps customers understand which traffic flows will be allowed or denied in your Azure Virtual Network, along with detailed information for debugging. It can help you understand whether your NSG rules are configured correctly.
## Pre-requisites
-For using Network Configuration Diagnostics, Network Watcher must be enabled in your subscription. See [Create an Azure Network Watcher instance](./network-watcher-create.md) to enable.
+To use NSG Diagnostics, Network Watcher must be enabled in your subscription. See [Create an Azure Network Watcher instance](./network-watcher-create.md) to enable it.
## Background
For using Network Configuration Diagnostics, Network Watcher must be enabled in
- All traffic flows in your network are evaluated using the rules in the applicable NSG. - Rules are evaluated based on priority number from lowest to highest
-## How does Network Configuration Diagnostic work?
+## How does NSG Diagnostics work?
-For a given flow, the NCD tool runs a simulation of the flow and returns whether the flow would be allowed (or denied) and detailed information about rules allowing/denying the flow. Customers must provide details of a flow like source, destination, protocol, etc. The tool returns whether traffic was allowed or denied, the NSG rules that were evaluated for the specified flow and the evaluation results for every rule.
+For a given flow, the NSG Diagnostics tool runs a simulation of the flow and returns whether the flow would be allowed (or denied) and detailed information about rules allowing/denying the flow. Customers must provide details of a flow like source, destination, protocol, etc. The tool returns whether traffic was allowed or denied, the NSG rules that were evaluated for the specified flow, and the evaluation results for every rule.
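As an illustration, a flow check from the Azure CLI might look like the following sketch. The VM ID, addresses, and port are placeholders, and the parameter names should be confirmed against `az network watcher run-configuration-diagnostic --help` before use.

```
# Simulate an inbound TCP flow to a VM and report which NSG rules allow or deny it (placeholder values)
az network watcher run-configuration-diagnostic \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --direction Inbound \
  --protocol TCP \
  --source 10.1.1.4 \
  --destination 10.0.0.4 \
  --port 80
```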
## Next steps
-Use Network Configuration Diagnostic through other interfaces
+Use NSG Diagnostics using [REST API](/rest/api/network-watcher/networkwatchers/getnetworkconfigurationdiagnostic), [PowerShell](/powershell/module/az.network/invoke-aznetworkwatchernetworkconfigurationdiagnostic), and [Azure CLI](/cli/azure/network/watcher#az-network-watcher-run-configuration-diagnostic).
network-watcher Network Watcher Next Hop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-next-hop-overview.md
Title: Introduction to next hop in Azure Network Watcher | Microsoft Docs
description: This article provides an overview of the Network Watcher next hop capability. documentationcenter: na-+ ms.assetid: febf7bca-e0b7-41d5-838f-a5a40ebc5aac na Last updated 01/29/2020-+
network-watcher Network Watcher Nsg Auditing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-auditing-powershell.md
description: This page provides instructions on how to configure auditing of a Network Security Group documentationcenter: na-+ na Last updated 03/01/2022-+
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
Title: Network Watcher - Create NSG flow logs using an Azure Resource Manager te
description: Use an Azure Resource Manager template and PowerShell to easily set up NSG Flow Logs. documentationcenter: na-+ editor: tags: azure-resource-manager
na Last updated 02/09/2022-+
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
Title: Manage NSG Flow logs - Azure CLI
description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure CLI -+ Last updated 12/09/2021-+
network-watcher Network Watcher Nsg Flow Logging Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md
Title: 'Tutorial: Log network traffic flow to and from a virtual machine - Azure portal' description: Learn how to log network traffic flow to and from a virtual machine using Network Watcher's NSG flow logs capability. -+ Last updated 10/28/2022-+ # Customer intent: I need to log the network traffic to and from a VM so I can analyze it for anomalies.
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Title: Manage NSG Flow logs - Azure PowerShell description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure PowerShell-+ Last updated 12/24/2021-+
network-watcher Network Watcher Nsg Flow Logging Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md
description: This page explains how to manage Network Security Group flow logs in Azure Network Watcher with REST API documentationcenter: na-+ na Last updated 07/13/2021-+
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Grafana. documentationcenter: na-+ tags: azure-resource-manager
na Last updated 09/15/2022-+ # Manage and analyze Network Security Group flow logs using Network Watcher and Grafana
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
Title: Manage packet captures with Azure Network Watcher - Azure CLI | Microsoft
description: This page explains how to manage the packet capture feature of Network Watcher using the Azure CLI documentationcenter: na-+ ms.assetid: cb0c1d10-f7f2-4c34-b08c-f73452430be8 na Last updated 12/09/2021-+
network-watcher Network Watcher Packet Capture Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal.md
Title: Manage packet captures - Azure portal
+ Title: Manage packet captures in VMs with Network Watcher - Azure portal
-description: Learn how to manage the packet capture feature of Network Watcher using the Azure portal.
+description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure portal.
-+ Previously updated : 01/07/2021- Last updated : 01/04/2023++
-# Manage packet captures with Azure Network Watcher using the portal
+# Manage packet captures in virtual machines with Azure Network Watcher using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](network-watcher-packet-capture-manage-portal.md)
+> - [PowerShell](network-watcher-packet-capture-manage-powershell.md)
+> - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
+> - [Azure REST API](network-watcher-packet-capture-manage-rest.md)
Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies, both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, debugging client-server communication, and much more. Being able to remotely trigger packet captures eases the burden of running a packet capture manually on the desired virtual machine, which saves valuable time. In this article, you learn to start, stop, download, and delete a packet capture.
-## Before you begin
+## Prerequisites
-Packet capture requires the following outbound TCP connectivity:
-- to the chosen storage account over port 443-- to 169.254.169.254 over port 80-- to 168.63.129.16 over port 8037
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A virtual machine with the following outbound TCP connectivity:
+ - to the chosen storage account over port 443
+ - to 169.254.169.254 over port 80
+ - to 168.63.129.16 over port 8037
> [!NOTE]
-> The ports mentioned in the latter two cases above are common across all Network Watcher features that involve the Network Watcher extension and might occasionally change.
-
+> The ports mentioned in the latter two cases are common across all Network Watcher features that involve the Network Watcher extension and might occasionally change.
-If a network security group is associated to the network interface, or subnet that the network interface is in, ensure that rules exist that allow the previous ports. Similarly, adding user-defined traffic routes to your network may prevent connectivity to the above mentioned IPs and ports. Please ensure they are reachable.
+If a network security group is associated to the network interface, or subnet that the network interface is in, ensure that rules exist to allow outbound connectivity over the previous ports. Similarly, ensure outbound connectivity over the previous ports when adding user-defined routes to your network.
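One way to check what is actually applied is to list the effective security rules on the virtual machine's network interface, for example with the Azure CLI (the resource group and NIC name below are placeholders):

```
# List the effective (aggregated) security rules applied to the VM's network interface
az network nic list-effective-nsg \
  --resource-group myResourceGroup \
  --name myVmNic
```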
## Start a packet capture
-1. In your browser, navigate to the [Azure portal](https://portal.azure.com) and select **All services**, and then select **Network Watcher** in the **Networking section**.
-2. Select **Packet capture** under **Network diagnostic tools**. Any existing packet captures are listed, regardless of their status.
-3. Select **Add** to create a packet capture. You can select values for the following properties:
- - **Subscription**: The subscription that the virtual machine you want to create the packet capture for is in.
- - **Resource group**: The resource group of the virtual machine.
- - **Target virtual machine**: The virtual machine that you want to create the packet capture for.
- - **Packet capture name**: A name for the packet capture.
- - **Storage account or file**: Select **Storage account**, **File**, or both. If you select **File**, the capture is written to a path within the virtual machine.
- - **Local file path**: The local path on the virtual machine where the packet capture will be saved (valid only when *File* is selected). The path must be a valid path. If you are using a Linux virtual machine, the path must start with */var/captures*.
- - **Storage accounts**: Select an existing storage account, if you selected *Storage account*. This option is only available if you selected **Storage**.
-
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search box at the top of the portal, enter *Network Watcher*.
+1. In the search results, select **Network Watcher**.
+1. Select **Packet capture** under **Network diagnostic tools**. Any existing packet captures are listed, regardless of their status.
+1. Select **+ Add** to create a packet capture. In **Add packet capture**, enter or select values for the following settings:
+
+ | Setting | Value |
+ | | |
+ | **Basic Details** | |
+ | Subscription | Select the Azure subscription of the virtual machine. |
+ | Resource group | Select the resource group of the virtual machine. |
+ | Target type | Select **Virtual machine**. |
+ | Target instance | Select the virtual machine. |
+ | Packet capture name | Enter a name or leave the default name. |
+ | **Packet capture configuration** | |
+ | Capture location | Select **Storage account**, **File**, or **Both**. |
+ | Storage account | Select your **Standard** storage account. <br> This option is available if you selected **Storage account** or **Both**. |
+ | Local file path | Enter a valid local file path where you want the capture to be saved in the target virtual machine. If you're using a Linux machine, the path must start with */var/captures*. <br> This option is available if you selected **File** or **Both**. |
+ | Maximum bytes per packet | Enter the maximum number of bytes to be captured per packet. All bytes are captured if left blank or 0 is entered. |
+ | Maximum bytes per session | Enter the total number of bytes that are captured. Once the value is reached, the packet capture stops. Up to 1 GB is captured if left blank. |
+ | Time limit (seconds) | Enter the time limit of the packet capture session in seconds. Once the value is reached, the packet capture stops. Up to 5 hours (18,000 seconds) is captured if left blank. |
+ | **Filtering (optional)** | |
+ | Add filter criteria | Select **Add filter criteria** to add a new filter. |
+ | Protocol | Filters the packet capture based on the selected protocol. Available values are **TCP**, **UDP**, or **Any**. |
+ | Local IP address | Filters the packet capture for packets where the local IP address matches this value. |
+ | Local port | Filters the packet capture for packets where the local port matches this value. |
+ | Remote IP address | Filters the packet capture for packets where the remote IP address matches this value. |
+ | Remote port | Filters the packet capture for packets where the remote port matches this value. |
+ > [!NOTE] > Premium storage accounts are currently not supported for storing packet captures.
- - **Maximum bytes per packet**: The number of bytes from each packet that are captured. If left blank, all bytes are captured.
- - **Maximum bytes per session**: The total number of bytes that are captured. Once the value is reached the packet capture stops.
- - **Time limit (seconds)**: The time limit before the packet capture is stopped. The default is 18,000 seconds.
- - Filtering (Optional). Select **+ Add filter**
- - **Protocol**: The protocol to filter for the packet capture. The available values are TCP, UDP, and Any.
- - **Local IP address**: Filters the packet capture for packets where the local IP address matches this value.
- - **Local port**: Filters the packet capture for packets where the local port matches this value.
- - **Remote IP address**: Filters the packet capture for packets where the remote IP address matches this value.
- - **Remote port**: Filters the packet capture for packets where the remote port matches this value.
-
> [!NOTE]
- > Port and IP address values can be a single value, range of values, or a range, such as 80-1024, for port. You can define as many filters as you need.
+ > Port and IP address values can be a single value, multiple values, or a range, such as 80-1024, for port. You can define as many filters as you need.
+
+1. Select **Start packet capture**.
-4. Select **OK**.
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/add-packet-capture.png" alt-text="Screenshot of Add packet capture in Azure portal showing available options.":::
-After the time limit set on the packet capture has expired, the packet capture is stopped, and can be reviewed. You can also manually stop a packet capture session.
+1. Once the time limit set on the packet capture is reached, the packet capture stops and can be reviewed. To manually stop a packet capture session before it reaches its time limit, select the **...** on the right side of the packet capture in the **Packet capture** page, or right-click it, then select **Stop**.
+
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/stop-packet-capture.png" alt-text="Screenshot showing how to stop a packet capture in Azure portal.":::
> [!NOTE]
-> The portal automatically:
-> * Creates a network watcher in the same region as the region the virtual machine you selected exists in, if the region doesn't already have a network watcher.
-> * Adds the *AzureNetworkWatcherExtension* [Linux](../virtual-machines/extensions/network-watcher-linux.md) or [Windows](../virtual-machines/extensions/network-watcher-windows.md) virtual machine extension to the virtual machine, if it's not already installed.
+> The Azure portal automatically:
+> * Creates a network watcher in the same region as the region of the target virtual machine, if the region doesn't already have a network watcher.
+> * Adds `AzureNetworkWatcherExtension` to [Linux](../virtual-machines/extensions/network-watcher-linux.md) or [Windows](../virtual-machines/extensions/network-watcher-windows.md) virtual machines, if the extension isn't already installed.
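If you need to script the same operation rather than use the portal, a minimal Azure CLI sketch is shown below. The resource names are placeholders, only a subset of the portal options is shown, and the available parameters should be confirmed with `az network watcher packet-capture create --help`.

```
# Start a packet capture on a VM and save the result to a storage account (placeholder values)
az network watcher packet-capture create \
  --resource-group myResourceGroup \
  --vm myVM \
  --name myPacketCapture \
  --storage-account mystorageaccount \
  --time-limit 18000
```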
## Delete a packet capture
-1. In the packet capture view, select **...** on the right-side of the packet capture, or right-click an existing packet capture, and select **Delete**.
-2. You are asked to confirm you want to delete the packet capture. Select **Yes**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search box at the top of the portal, enter *Network Watcher*, then select **Network Watcher** from the search results.
+1. Select **Packet capture** under **Network diagnostic tools**.
+1. In the **Packet capture** page, select **...** on the right side of the packet capture that you want to delete, or right-click it, then select **Delete**.
-> [!NOTE]
-> Deleting a packet capture does not delete the capture file in the storage account or on the virtual machine.
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/delete-packet-capture.png" alt-text="Screenshot showing how to delete a packet capture from Network Watcher in Azure portal.":::
-## Stop a packet capture
+1. Select **Yes**.
-In the packet capture view, select **...** on the right-side of the packet capture, or right-click an existing packet capture, and select **Stop**.
+> [!NOTE]
+> Deleting a packet capture does not delete the capture file in the storage account or on the virtual machine.
## Download a packet capture
-Once your packet capture session has completed, the capture file is uploaded to blob storage or to a local file on the virtual machine. The storage location of the packet capture is defined during creation of the packet capture. A convenient tool to access capture files saved to a storage account is Microsoft Azure Storage Explorer, which you can [download](https://storageexplorer.com/).
+Once your packet capture session has completed, the capture file is saved to blob storage or to a local file on the target virtual machine. The storage location of the packet capture is defined during creation of the packet capture. A convenient tool to access capture files saved to a storage account is Azure Storage Explorer, which you can [download](https://storageexplorer.com/) for your operating system.
-If a storage account is specified, packet capture files are saved to a storage account at the following location:
+- If a storage account is specified, packet capture files are saved to a storage account at the following location:
-```
-https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
-```
+ ```
+ https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
+ ```
-If you selected **File** when you created the capture, you can view or download the file from the path you configured on the virtual machine.
+- If a file path is specified, the capture file can be viewed on the virtual machine or downloaded.
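For example, a capture saved to a storage account can be downloaded with the Azure CLI as sketched below; the account name and blob path are placeholders that follow the pattern shown above.

```
# Download a packet capture blob from the network-watcher-logs container (placeholder values)
az storage blob download \
  --account-name mystorageaccount \
  --container-name network-watcher-logs \
  --name "subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.compute/virtualmachines/myVM/2023/01/06/packetCapture_<creationTime>.cap" \
  --file packetCapture.cap \
  --auth-mode login
```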
## Next steps
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
description: This page explains how to manage the packet capture feature of Network Watcher using PowerShell documentationcenter: na-+ na Last updated 02/01/2021-+
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
Title: Manage packet captures with Azure Network Watcher - REST API | Microsoft
description: This page explains how to manage the packet capture feature of Network Watcher using Azure REST API documentationcenter: na-+ na Last updated 05/28/2021-+
network-watcher Network Watcher Read Nsg Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-read-nsg-flow-logs.md
Title: Read NSG flow logs | Microsoft Docs
description: Learn how to use Azure PowerShell to parse Network Security Group flow logs, which are created hourly and updated every few minutes in Azure Network Watcher. documentationcenter: na-+ na Last updated 02/09/2021-+
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
description: This article will describe how to use Azure CLI to analyze a virtual machines security with Security Group View. documentationcenter: na-+ na Last updated 12/09/2021-+
network-watcher Network Watcher Security Group View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-overview.md
Title: Introduction to Effective security rules view in Azure Network Watcher |
description: This page provides an overview of the Network Watcher - Effective security rules view capability documentationcenter: na-+ na Last updated 03/18/2022-+
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
description: This article will describe how to use PowerShell to analyze a virtual machines security with Security Group View. documentationcenter: na-+ na Last updated 11/20/2020-+
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
description: This article will describe how to the Azure REST API to analyze a virtual machines security with Security Group View. documentationcenter: na-+ na Last updated 03/01/2022-+
network-watcher Network Watcher Troubleshoot Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-cli.md
description: This page explains how to use the Azure Network Watcher troubleshoot Azure CLI documentationcenter: na-+ na Last updated 07/25/2022-+
network-watcher Network Watcher Troubleshoot Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-rest.md
description: This page explains how to troubleshoot Virtual Network Gateways and Connections with Azure Network Watcher using REST documentationcenter: na-+ na Last updated 01/07/2021-+
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
description: This page provides an overview of the Network Watcher resource troubleshooting capabilities documentationcenter: na-+ na Last updated 03/31/2022-+
network-watcher Network Watcher Using Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-using-open-source-tools.md
description: This page describes how to use Network Watcher packet capture with Capanalysis to visualize traffic patterns to and from your VMs. documentationcenter: na-+ na Last updated 02/25/2021 -+ # Visualize network traffic patterns to and from your VMs using open-source tools
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Elastic Stack. documentationcenter: na-+ na Last updated 09/15/2022-+
network-watcher Network Watcher Visualize Nsg Flow Logs Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-power-bi.md
description: Learn how to use Power BI to visualize Network Security Group flow logs to allow you to view information about IP traffic in Azure Network Watcher. documentationcenter: na-+ na Last updated 06/23/2021-+
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-policy-portal.md
description: This article explains how to use the built-in policies to manage the deployment of NSG flow logs documentationcenter: na-+ na Last updated 02/09/2022-+
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using an Azure Resource Manager template (ARM template)' description: Learn how to enable network security group (NSG) flow logs programmatically by using an Azure Resource Manager template (ARM template) and Azure PowerShell. --++ Last updated 09/01/2022
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using a Bicep file' description: Learn how to enable network security group (NSG) flow logs programmatically by using Bicep and Azure PowerShell. --++ Last updated 08/26/2022
network-watcher Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/resource-move.md
Title: Move Azure Network Watcher resources | Microsoft Docs
description: Move Azure Network Watcher resources across regions documentationcenter: na-+ editor:
na Last updated 06/10/2021-+
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
Title: Deploy and manage Traffic Analytics using Azure Policy
description: This article explains how to use the built-in policies to manage the deployment of Traffic Analytics -+ Last updated 02/09/2022-+
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Title: Azure traffic analytics | Microsoft Docs
-description: Learn about traffic analytics. Gain an overview of this solution for viewing network activity, securing networks, and optimizing performance.
+ Title: Azure traffic analytics
+description: Learn what traffic analytics is, and how to use traffic analytics for viewing network activity, securing networks, and optimizing performance.
- Previously updated : 06/01/2022+ Last updated : 01/06/2023 -
- - references_regions
- - devx-track-azurepowershell
- - kr2b-contr-experiment
+ # Traffic analytics
Traffic analytics is a cloud-based solution that provides visibility into user a
> [!NOTE] > Traffic analytics now supports collecting NSG flow logs data at a frequency of every 10 minutes. - ## Why traffic analytics? It's vital to monitor, manage, and know your own network for uncompromised security, compliance, and performance. Knowing your own environment is of paramount importance to protect and optimize it. You often need to know the current state of the network, including the following information:
Traffic analytics provides the following information:
## Key components -- **Network security group (NSG)**: A resource that contains a list of security rules that allow or deny network traffic to resources that are connected to an Azure virtual network. NSGs can be associated with subnets, individual VMs (classic), or individual network interfaces (NICs) that are attached to VMs (Resource Manager). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+- **Network security group (NSG)**: A resource that contains a list of security rules that allow or deny network traffic to or from resources that are connected to an Azure virtual network. NSGs can be associated with subnets, network interfaces (NICs) that are attached to VMs (Resource Manager), or individual VMs (classic). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md).
- **NSG flow logs**: Recorded information about ingress and egress IP traffic through an NSG. NSG flow logs are written in JSON format and include: - Outbound and inbound flows on a per rule basis. - The NIC that the flow applies to.
- - Information about the flow, such as the source and destination IP address, the source and destination port, and the protocol.
+ - Information about the flow, such as the source and destination IP addresses, the source and destination ports, and the protocol.
- The status of the traffic, such as allowed or denied. For more information about NSG flow logs, see [NSG flow logs](network-watcher-nsg-flow-logging-overview.md). -- **Log Analytics**: A tool in the Azure portal that you use to work with Azure Monitor Logs data. Azure Monitor Logs is an Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data that's provided through the Azure API. After this data is collected, it's available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics use Azure Monitor Logs as a foundation. For more information, see [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). Log Analytics provides a way to edit and run queries on logs. You can also use this tool to analyze query results. For more information, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+- **Log Analytics**: A tool in the Azure portal that you use to work with Azure Monitor Logs data. Azure Monitor Logs is an Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data that's provided through the Azure API. After this data is collected, it's available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics use Azure Monitor Logs as a foundation. For more information, see [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md). Log Analytics provides a way to edit and run queries on logs. You can also use this tool to analyze query results. For more information, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
-- **Log Analytics workspace**: The environment that stores Azure Monitor log data that pertains to an Azure account. For more information about Log Analytics workspaces, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+- **Log Analytics workspace**: The environment that stores Azure Monitor log data that pertains to an Azure account. For more information about Log Analytics workspaces, see [Overview of Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md).
-- **Network Watcher**: A regional service that you can use to monitor and diagnose conditions at a network-scenario level in Azure. You can use Network Watcher to turn NSG flow logs on and off. For more information, see [Network Watcher](network-watcher-monitoring-overview.md).
+- **Network Watcher**: A regional service that you can use to monitor and diagnose conditions at a network-scenario level in Azure. You can use Network Watcher to turn NSG flow logs on and off. For more information, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md).
## How traffic analytics works
Reduced logs are enhanced with geography, security, and topology information and
## Prerequisites
-Before you use traffic analytics, ensure your environment meets the following requirements.
+Traffic Analytics requires:
-### User access requirements
+- A Network Watcher enabled subscription. For more information, see [Create an Azure Network Watcher instance](network-watcher-create.md).
+- Network Security Group (NSG) flow logs enabled for the NSGs you want to monitor. For more information, see [Enable NSG flow log](network-watcher-nsg-flow-logging-portal.md#enable-nsg-flow-log).
+- An Azure Storage account to store raw flow logs. For more information, see [Create a storage account](../storage/common/storage-account-create.md).
+- An Azure Log Analytics workspace with read and write access. For more information, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
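As a sketch, enabling an NSG flow log that feeds traffic analytics from the Azure CLI might look like the following; every name is a placeholder, and the exact parameter names should be verified with `az network watcher flow-log create --help`.

```
# Create an NSG flow log and send the data to a Log Analytics workspace for traffic analytics (placeholder values)
az network watcher flow-log create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myFlowLog \
  --nsg myNsg \
  --storage-account mystorageaccount \
  --traffic-analytics true \
  --workspace myLogAnalyticsWorkspace
```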
-One of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) needs to be assigned to your account:
+One of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md) needs to be assigned to your account:
|Deployment model | Role | | | |
If none of the preceding built-in roles are assigned to your account, assign a [
- `Microsoft.Network/virtualNetworks/read` - `Microsoft.Network/expressRouteCircuits/read`
-For information about how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+For information about how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml#what-are-the-prerequisites-to-use-traffic-analytics-).
## Frequently asked questions
To get answers to frequently asked questions about traffic analytics, see [Traff
## Next steps -- To learn how to turn on flow logs, see [Enable NSG flow log](network-watcher-nsg-flow-logging-portal.md#enable-nsg-flow-log).-- To understand the schema and processing details of traffic analytics, see [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn how to use traffic analytics, see [Usage scenarios](usage-scenarios-traffic-analytics.md).
+- To understand the schema and processing details of traffic analytics, see [Schema and data aggregation in Traffic Analytics](traffic-analytics-schema.md).
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
Title: View Azure virtual network topology | Microsoft Docs description: Learn how to view the resources in a virtual network, and the relationships between the resources. -++ na Last updated 11/11/2022- # View the topology of an Azure virtual network
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-relative-latencies.md
Title: View relative latencies to Azure regions from specific locations
description: Learn how to view relative latencies across Internet providers to Azure regions from specific locations. documentationcenter: ''-+ na Last updated 04/20/2022-+
networking Create Zero Trust Network Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/create-zero-trust-network-web-apps.md
You'll create a route table with user-defined route force traffic all App Servic
1. Repeat this process for another subnet by selecting **+ Associate**. 1. Select the **mySpokeVNet** virtual network, and then select the **AppGwSubnet** subnet. Select **OK**. 1. After the association appears, select the link to the **App1** association.
-1. In the **Network policy for private endpoints** section, select **Route Tables** and select **Save**.
+1. In the **Network policy for private endpoints** section, select **Network security groups** and **Route Tables**, and then select **Save**.
### Test again
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
After executing the `az aro create` command, it normally takes about 35 minutes
> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
-### Create a private cluster without a public IP address
+### Create a private cluster without a public IP address (preview)
Typically, private clusters are created with a public IP address and load balancer, providing a means for outbound connectivity to other services. However, you can create a private cluster without a public IP address. This may be required in situations in which security or policy requirements prohibit the use of public IP addresses.
+> [!IMPORTANT]
+> Currently, this Azure Red Hat OpenShift feature is being offered in preview only. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Azure Red Hat OpenShift previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
+ To create a private cluster without a public IP address, register for the feature flag `UserDefinedRouting` using the following command structure: ```
openshift Howto Use Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-key-vault-secrets.md
+
+ Title: Use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift
+description: This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
++++ Last updated : 12/30/2022
+keywords: azure, openshift, red hat, key vault
+#Customer intent: I need to understand how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
+
+# Use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift
+
+Azure Key Vault Provider for Secrets Store CSI Driver allows you to get secret contents stored in an [Azure Key Vault instance](/azure/key-vault/general/basic-concepts) and use the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/introduction.html) to mount them into Kubernetes pods. This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
+
+> [!NOTE]
+> Azure Key Vault Provider for Secrets Store CSI Driver is an Open Source project that works with Azure Red Hat OpenShift. While the instructions presented in this article show an example of how the Secrets Store CSI driver can be implemented, they are intended as a general guide to using the driver with ARO. Support for this implementation of an Open Source project would be provided by the project.
+
+## Prerequisites
+
+The following prerequisites are required:
+
+- An Azure Red Hat OpenShift cluster (See [Create an Azure Red Hat OpenShift cluster](howto-create-private-cluster-4x.md) to learn more.)
+- Azure CLI (logged in)
+- Helm 3.x CLI
+
+### Set environment variables
+
+Set the following variables that will be used throughout this procedure:
+
+```
+export KEYVAULT_RESOURCE_GROUP=${AZR_RESOURCE_GROUP:-"openshift"}
+export KEYVAULT_LOCATION=${AZR_RESOURCE_LOCATION:-"eastus"}
+export KEYVAULT_NAME=secret-store-$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 1)
+export AZ_TENANT_ID=$(az account show -o tsv --query tenantId)
+```
+
+## Install the Kubernetes Secrets Store CSI Driver
+
+1. Create an ARO project; you'll deploy the CSI Driver into this project:
+
+ ```
+ oc new-project k8s-secrets-store-csi
+ ```
+
+1. Set SecurityContextConstraints to allow the CSI Driver to run (otherwise, the CSI Driver will not be able to create pods):
+
+ ```
+ oc adm policy add-scc-to-user privileged \
+ system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver
+ ```
+
+1. Add the Secrets Store CSI Driver to your Helm repositories:
+
+ ```
+ helm repo add secrets-store-csi-driver \
+ https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
+ ```
+
+1. Update your Helm repositories:
+
+ ```
+ helm repo update
+ ```
+
+1. Install the Secrets Store CSI Driver:
+
+ ```
+ helm install -n k8s-secrets-store-csi csi-secrets-store \
+ secrets-store-csi-driver/secrets-store-csi-driver \
+ --version v1.0.1 \
+ --set "linux.providersDir=/var/run/secrets-store-csi-providers"
+ ```
+ Optionally, you can enable autorotation of secrets by adding the following parameters to the command above:
+
+ `--set "syncSecret.enabled=true" --set "enableSecretRotation=true"`
+
+1. Verify that the CSI Driver DaemonSets are running:
+
+ ```
+ kubectl --namespace=k8s-secrets-store-csi get pods -l "app=secrets-store-csi-driver"
+ ```
+
+ After running the command above, you should see the following:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ csi-secrets-store-secrets-store-csi-driver-cl7dv 3/3 Running 0 57s
+ csi-secrets-store-secrets-store-csi-driver-gbz27 3/3 Running 0 57s
+ ```
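+
+   If you prefer to block until the rollout finishes, you can also wait on the DaemonSet itself (the DaemonSet name below is inferred from the pod names above and may differ if you changed the Helm release name):
+
+   ```
+   kubectl --namespace=k8s-secrets-store-csi rollout status \
+     daemonset/csi-secrets-store-secrets-store-csi-driver
+   ```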
+
+## Deploy Azure Key Vault Provider for Secrets Store CSI Driver
+
+1. Add the Azure Helm repository:
+
+ ```
+ helm repo add csi-secrets-store-provider-azure \
+ https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
+ ```
+
+1. Update your local Helm repositories:
+
+ ```
+ helm repo update
+ ```
+
+1. Install the Azure Key Vault CSI provider:
+
+ ```
+ helm install -n k8s-secrets-store-csi azure-csi-provider \
+ csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
+ --set linux.privileged=true --set secrets-store-csi-driver.install=false \
+ --set "linux.providersDir=/var/run/secrets-store-csi-providers" \
+ --version=v1.0.1
+ ```
+
+1. Set SecurityContextConstraints to allow the CSI driver to run:
+
+ ```
+ oc adm policy add-scc-to-user privileged \
+ system:serviceaccount:k8s-secrets-store-csi:csi-secrets-store-provider-azure
+ ```
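+
+1. Optionally, verify that the Azure provider pods are running. This check assumes the chart's default `app=csi-secrets-store-provider-azure` label:
+
+   ```
+   kubectl --namespace=k8s-secrets-store-csi get pods \
+     -l "app=csi-secrets-store-provider-azure"
+   ```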
+
+## Create key vault and a secret
+
+1. Create a namespace for your application.
+
+ ```
+ oc new-project my-application
+ ```
+
+1. Create an Azure key vault in your resource group that contains ARO.
+
+ ```
+ az keyvault create -n ${KEYVAULT_NAME} \
+ -g ${KEYVAULT_RESOURCE_GROUP} \
+ --location ${KEYVAULT_LOCATION}
+ ```
+
+1. Create a secret in the key vault.
+
+ ```
+ az keyvault secret set \
+ --vault-name ${KEYVAULT_NAME} \
+ --name secret1 --value "Hello"
+ ```
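+
+   Optionally, confirm the secret was written; this should print `Hello`:
+
+   ```
+   az keyvault secret show \
+     --vault-name ${KEYVAULT_NAME} \
+     --name secret1 --query value -o tsv
+   ```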
+
+1. Create a service principal for the key vault.
+
+ > [!NOTE]
+ > If you receive an error when creating the service principal, you may need to upgrade your Azure CLI to the latest version.
+
+ ```
+ export SERVICE_PRINCIPAL_CLIENT_SECRET="$(az ad sp create-for-rbac --skip-assignment --name http://$KEYVAULT_NAME --query 'password' -otsv)"
+ export SERVICE_PRINCIPAL_CLIENT_ID="$(az ad sp list --display-name http://$KEYVAULT_NAME --query '[0].appId' -otsv)"
+ ```
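+
+   Optionally, run a quick sanity check that the client ID variable resolves to the new service principal:
+
+   ```
+   az ad sp show --id ${SERVICE_PRINCIPAL_CLIENT_ID} --query displayName -o tsv
+   ```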
+
+1. Set an access policy for the service principal.
+
+ ```
+ az keyvault set-policy -n ${KEYVAULT_NAME} \
+ --secret-permissions get \
+ --spn ${SERVICE_PRINCIPAL_CLIENT_ID}
+ ```
+
+1. Create and label a secret for Kubernetes to use to access the key vault.
+
+ ```
+ kubectl create secret generic secrets-store-creds \
+ -n my-application \
+ --from-literal clientid=${SERVICE_PRINCIPAL_CLIENT_ID} \
+ --from-literal clientsecret=${SERVICE_PRINCIPAL_CLIENT_SECRET}
+ kubectl -n my-application label secret \
+ secrets-store-creds secrets-store.csi.k8s.io/used=true
+ ```
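+
+   Optionally, verify that the secret exists and is labeled as expected:
+
+   ```
+   kubectl -n my-application get secret secrets-store-creds --show-labels
+   ```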
+
+## Deploy an application that uses the CSI Driver
+
+1. Create a `SecretProviderClass` to give access to this secret:
+
+ ```
+ cat <<EOF | kubectl apply -f -
+ apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
+ kind: SecretProviderClass
+ metadata:
+ name: azure-kvname
+ namespace: my-application
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "false"
+ userAssignedIdentityID: ""
+ keyvaultName: "${KEYVAULT_NAME}"
+ objects: |
+ array:
+ - |
+ objectName: secret1
+ objectType: secret
+ objectVersion: ""
+ tenantId: "${AZ_TENANT_ID}"
+ EOF
+ ```
+
+1. Create a pod that uses the `SecretProviderClass` created in the previous step:
+
+ ```
+ cat <<EOF | kubectl apply -f -
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: busybox-secrets-store-inline
+ namespace: my-application
+ spec:
+ containers:
+ - name: busybox
+ image: k8s.gcr.io/e2e-test-images/busybox:1.29
+ command:
+ - "/bin/sleep"
+ - "10000"
+ volumeMounts:
+ - name: secrets-store-inline
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-kvname"
+ nodePublishSecretRef:
+ name: secrets-store-creds
+ EOF
+ ```
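+
+   Optionally, wait for the pod to become ready before checking the mount:
+
+   ```
+   kubectl -n my-application wait --for=condition=Ready \
+     pod/busybox-secrets-store-inline --timeout=120s
+   ```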
+
+1. Check that the secret is mounted:
+
+ ```
+ kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
+ ```
+
+ The output should match the following:
+
+ ```
+ secret1
+ ```
+
+1. Print the secret:
+
+ ```
+ kubectl exec busybox-secrets-store-inline \
+ -- cat /mnt/secrets-store/secret1
+ ```
+
+ The output should match the following:
+
+ ```azurecli
+ Hello
+ ```
+
+## Cleanup
+
+Uninstall the Key Vault Provider and the CSI Driver.
+
+### Uninstall the Key Vault Provider
+
+1. Uninstall Helm chart:
+
+ ```azurecli
+ helm uninstall -n k8s-secrets-store-csi azure-csi-provider
+ ```
+
+1. Delete the app:
+
+ ```
+ oc delete project my-application
+ ```
+
+1. Delete the Azure key vault:
+
+ ```
+ az keyvault delete -n ${KEYVAULT_NAME}
+ ```
+
+1. Delete the service principal:
+
+ ```
+ az ad sp delete --id ${SERVICE_PRINCIPAL_CLIENT_ID}
+ ```
+
+### Uninstall the Kubernetes Secrets Store CSI Driver
+
+1. Delete the Secrets Store CSI Driver:
+
+ ```
+ helm uninstall -n k8s-secrets-store-csi csi-secrets-store
+ ```
+
+1. Delete the SecurityContextConstraints:
+
+ ```
+ oc adm policy remove-scc-from-user privileged \
+ system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver
+ ```
+
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications: - Operating System: Linux (Ubuntu 18.04 or higher) - Size: at least 32 GiB of RAM-- Ensure that the VM has at least one standard public IP
+- Ensure that the VM has internet access for downloading tools by having one standard public IP address
+
+> [!TIP]
+> The public IP address here is used only for internet connectivity, not for contact data. For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
+ 3. Create a tmpfs on the virtual machine. The downlinked data will be written to this tmpfs to avoid slow writes to disk: ```console sudo mkdir /media/aqua
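+
+ A minimal sketch of creating and mounting the tmpfs (the size here is an assumption; choose one large enough for the expected pass data):
+
+ ```console
+ sudo mkdir -p /media/aqua
+ sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
+ ```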
orbital Partner Network Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/partner-network-integration.md
Previously updated : 07/06/2022 Last updated : 01/05/2023
This article describes how to integrate partner network ground stations.
- KSAT Lite - [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
-## Request integration resource information
-
-1. Email the Azure Orbital Ground Station (AOGS) team at **azorbitalpm@microsoft.com** to initiate partner network integration by providing the details below:
- - Azure Subscription ID
- - List of partner networks you've contracted with
- - List of ground station locations included in partner contracts
-2. The AOGS team will reply to your message with additional requested information, or, the Contact Profile resource parameters that will enable your partner network integration.
-3. Create a contact profile resource with the parameters provided by the AOGS team.
-4. Await integration confirmation prior to scheduling Contacts with the newly integrated partner network(s).
-
-> [!NOTE]
-> It is important that the contact profile resource parameters match those provided by the AOGS team.
-
+## Request authorization of the new spacecraft resource
+
+1. Navigate to the newly created spacecraft resource's overview page.
+1. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
+1. In the **New support request** page, enter or select this information in the Basics tab:
+
+| **Field** | **Value** |
+|--|--|
+| Summary | Request Authorization for [Spacecraft Name] |
+| Issue type | Select **Technical** |
+| Subscription | Select the subscription in which the spacecraft resource was created |
+| Service | Select **My services** |
+| Service type | Search for and select **Azure Orbital** |
+| Problem type | Select **Spacecraft Management and Setup** |
+| Problem subtype | Select **Spacecraft Registration** |
+
+1. Select the **Details** tab at the top of the page.
+1. In the Details tab, enter this information in the Problem details section:
+
+| **Field** | **Value** |
+|--|--|
+| When did the problem start? | Select the current date & time |
+| Description | List your spacecraft's frequency bands and desired ground stations |
+| File upload | Upload any pertinent licensing material, if applicable |
+
+1. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
+1. Select the **Review + create** tab, or select the **Review + create** button.
+1. Select **Create**.
+
+ > [!NOTE]
+ > A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
+
## Next steps - [Configure a contact profile](./contact-profile.md)
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
The above tutorial provides a walkthrough for scheduling a contact with Aqua and
> - **Name:** receiver-vm > - **Operating System:** Linux (CentOS Linux 7 or higher) > - **Size:** Standard_D8s_v5 or higher
-> - **IP Address:** Ensure that the VM has at least one standard public IP address
+> - **IP Address:** Ensure that the VM has internet access for downloading tools by having one standard public IP address
+
+> [!TIP]
+> The public IP address here is used only for internet connectivity, not for contact data. For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
At the end of this step, you should have the raw direct broadcast data saved as ```.bin``` files under the ```~/aquadata``` folder on the ```receiver-vm```.
sudo yum groups install "GNOME Desktop"
``` Start VNC server: ```bash
-vncsever
+vncserver
``` Enter a password when prompted.
Port forward the vncserver port (5901) over SSH to your local machine:
```bash ssh -L 5901:localhost:5901 azureuser@receiver-vm ```
+> [!NOTE]
+> Use either the public IP address or the DNS name of the VM in place of `receiver-vm` in this command.
+ 1. On your local machine, download and install [TightVNC Viewer](https://www.tightvnc.com/download.php). 1. Start the TightVNC Viewer and connect to ```localhost:5901```. 1. Enter the vncserver password you entered in the previous step.
From the GNOME Desktop, go to **Applications** > **Internet** > **Firefox** to s
Log on to the [NASA DRL](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=325&type=software) website and download the **RT-STPS** installation files and the **IPOPP downloader script** under software downloads. The downloaded files will land under ~/Downloads.
-Alternatively, you can download the installation files on your local machine first and then upload to a container in Azure Storage. Then use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to download to your ```receiver-vm```.
+> [!NOTE]
+> Use the same machine to download and run
+> `downloader_DRL-IPOPP_4.1.sh`.
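+
+A sketch of running the downloader script (the path assumes the file landed in ~/Downloads as noted above):
+
+```bash
+cd ~/Downloads
+chmod +x downloader_DRL-IPOPP_4.1.sh
+./downloader_DRL-IPOPP_4.1.sh
+```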
### Install RT-STPS ```bash
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The table in this article provides information on the Peering Service connectivi
| [BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/microsoft-azure-cloud-connect/) |Europe|
| [CCL](https://concepts.co.nz/news/general-news/) |Oceania |
| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa|
-| [Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/)|Europe, Asia|
+| [Colt](https://www.colt.net/why-colt/partner-hub/)|Europe, Asia|
| [Converge ICT](https://www.convergeict.com/enterprise/microsoft-azure-peering-service-maps/) |Asia|
| [Dimension Data](https://www.dimensiondata.com/en-gb/about-us/our-partners/microsoft/)|Africa |
| [DE-CIX](https://www.de-cix.net/)|Europe, North America |
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
description: Learn about the concepts of backup and restore with Azure Database
--++ Last updated 06/16/2021
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Title: Compare Azure Database for PostgreSQL - Single Server and Flexible Server description: Detailed comparison of features and capabilities between Azure Database for PostgreSQL Single Server and Flexible Server--++
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
Title: 'Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server'
-description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL.
+description: Learn about compliance in the Flexible Server deployment option for Azure Database for PostgreSQL.
ms.devlang: python
Last updated 10/20/2022 + # Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
Last updated 10/20/2022
## Overview of Compliance Certifications on Microsoft Azure
-Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](https://learn.microsoft.com/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](https://azure.microsoft.com/resources/microsoft-azure-guidance-for-sarbanes-oxley-sox/) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national, regional, and industry specific regulations and requirements Azure Database for PostgreSQL - Flexible Server build upon Microsoft AzureΓÇÖs compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
+Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](/resources/microsoft-azure-guidance-for-sarbanes-oxley-sox/) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national, regional, and industry-specific regulations and requirements, Azure Database for PostgreSQL - Flexible Server builds upon Microsoft Azure's compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of offerings), as well as depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments and customer guidance documents produced by Microsoft. More detailed information about Azure compliance offerings is available from the [Trust](https://www.microsoft.com/trust-center/compliance/compliance-overview) Center.
industry specific, and region/country specific. Compliance offerings are based o
> [!div class="mx-tableFixed"] > | **Certification**| **Applicable To** |
-> |||
-> |HIPAA and HITECH Act (U.S.) | Healthcare|
-> |HITRUST | Healthcare|
-> |CFTC 1.31 | Financial|
-> |DPP (UK) | Media|
-> |EU EN 301 549 | Accessibility|
-> |EU ENISA IAF | Public and private companies, government entities and not-for-profits|
-> |EU US Privacy Shield | Public and private companies, government entities and not-for-profits|
-> |SO/IEC 27018 | Public and private companies, government entities and not-for-profits that provides PII processing services via the cloud|
-> |EU Model Clauses | Public and private companies, government entities and not-for-profits that provides PII processing services via the cloud|
-> |FERPA | Educational Institutions|
-> |FedRAMP High | US Federal Agencies and Contractors|
-> |GLBA | Financial|
-> |ISO 27001:2013 | Public and private companies, government entities and not-for-profits|
-> |Japan My Number Act | Public and private companies, government entities and not-for-profits|
-> |TISAX | Automotive |
-> |NEN Netherlands 7510 | Healthcare |
-> |NHS IG Toolkit UK | Healthcare |
-> |BIR 2012 Netherlands | Public and private companies, government entities and not-for-profits|
-> |PCI DSS Level 1 | Payment processors and Financial|
-> |SOC 2 Type 2 |Public and private companies, government entities and not-for-profits|
-> |Sec 17a-4 |Financial|
-> |Spain DPA |Public and private companies, government entities and not-for-profits|
+> |--|--|
+> |HIPAA and HITECH Act (U.S.) | Healthcare |
+> | HITRUST | Healthcare |
+> | CFTC 1.31 | Financial |
+> | DPP (UK) | Media |
+> | EU EN 301 549 | Accessibility |
+> | EU ENISA IAF | Public and private companies, government entities and not-for-profits |
+> | EU US Privacy Shield | Public and private companies, government entities and not-for-profits |
+> | ISO/IEC 27018 | Public and private companies, government entities and not-for-profits that provide PII processing services via the cloud |
+> | EU Model Clauses | Public and private companies, government entities and not-for-profits that provide PII processing services via the cloud |
+> | FERPA | Educational Institutions |
+> | FedRAMP High | US Federal Agencies and Contractors |
+> | GLBA | Financial |
+> | ISO 27001:2013 | Public and private companies, government entities and not-for-profits |
+> | Japan My Number Act | Public and private companies, government entities and not-for-profits |
+> | TISAX | Automotive |
+> | NEN Netherlands 7510 | Healthcare |
+> | NHS IG Toolkit UK | Healthcare |
+> | BIR 2012 Netherlands | Public and private companies, government entities and not-for-profits |
+> | PCI DSS Level 1 | Payment processors and Financial |
+> | SOC 2 Type 2 | Public and private companies, government entities and not-for-profits |
+> | Sec 17a-4 | Financial |
+> | Spain DPA | Public and private companies, government entities and not-for-profits |
## Next Steps * [Azure Compliance on Trusted Cloud](https://azure.microsoft.com/explore/trusted-cloud/compliance/)
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Title: PgBouncer - Azure Database for PostgreSQL - Flexible Server description: This article provides an overview with the built-in PgBouncer extension.--++
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
Use the following query to list the tables in a database and identify the tables
'pg_catalog' ,'information_schema' )
- AND N.nspname ! ~ '^pg_toast'
+ AND N.nspname !~ '^pg_toast'
) AS av ORDER BY av_needed DESC ,n_dead_tup DESC; ```
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
The following steps are mandatory to use Azure AD authentication with Azure Data
```powershell Connect-AzureAD -TenantId <customer tenant id> ```
+A successful connection produces output similar to the following.
+
+```
+Account Environment TenantId TenantDomain AccountType
+- -- -- --
+passwordless-user@contoso.com AzureCloud 456e5515-431d-4a70-874d-bdae2ba97c1d <your tenant name>.onmicrosoft.com User
+```
+
+Ensure that your Azure tenant has the service principal for the Azure Database for PostgreSQL Flexible Server. This only needs to be done once per Azure tenant. First, check for the existence of the service principal in your tenant with this command. The specific ObjectId value is for the Azure Database for PostgreSQL Flexible Server service principal.
+```
+Get-AzureADServicePrincipal -ObjectId 0049e2e2-fcea-4bc4-af90-bdb29a9bbe98
+```
+If the service principal exists, you'll see the following output.
+```
+ObjectId AppId DisplayName
+-- -- --
+0049e2e2-fcea-4bc4-af90-bdb29a9bbe98 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 Azure OSSRDBMS PostgreSQL Flexible Server
+```
### Grant read access
postgresql How To Connect Scram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-scram.md
Title: Connectivity using SCRAM in Azure Database for PostgreSQL - Flexible Server description: Instructions and information on how to configure and connect using SCRAM in Azure Database for PostgreSQL - Flexible Server.--++
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Title: Manage high availability - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to enable or disable high availability in Azure Database for PostgreSQL - Flexible Server through the Azure portal.--++
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Title: Restart - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to perform restart operations in Azure Database for PostgreSQL through the Azure portal.--++
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Point-in-time restore of a flexible server - Azure portal description: This article describes how to perform restore operations in Azure Database for PostgreSQL Flexible Server through the Azure portal.--++
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Title: Scale operations - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to perform scale operations in Azure Database for PostgreSQL through the Azure portal.--++
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Azure Database for PostgreSQL - Flexible Server Release notes description: Release notes of Azure Database for PostgreSQL - Flexible Server.--++
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
description: This guide helps you to migrate your Oracle schema to Azure Databas
--++ Last updated 03/18/2021
postgresql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-online.md
--++ Last updated 5/6/2019
postgresql How To Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-dump-and-restore.md
description: You can extract a PostgreSQL database into a dump file. Then, you c
--++ Last updated 09/22/2020
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-export-and-import.md
description: Describes how extract a PostgreSQL database into a script file and
--++ Last updated 09/22/2020 # Migrate your PostgreSQL database using export and import
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
Title: Azure CLI script - Restore an Azure Database for PostgreSQL server description: This sample Azure CLI script shows how to restore an Azure Database for PostgreSQL server and its databases to a previous point in time.--++ ms.devlang: azurecli
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-backup.md
description: Learn about automatic backups and restoring your Azure Database for
--++ Last updated 06/24/2022
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-business-continuity.md
description: This article describes business continuity (point in time restore,
--++ Last updated 06/24/2022
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
openssl s_client -showcerts -connect <your-postgresql-server-name>:443
``` ### 14. What if I have further questions?
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help please create a [support request](https://learn.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request):
+If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help please create a [support request](/azure/azure-portal/supportability/how-to-create-azure-support-request):
* For *Issue type*, select *Technical*.
* For *Subscription*, select your *subscription*.
* For *Service*, select *My Services*, then select *Azure Database for PostgreSQL – Single Server*.
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-high-availability.md
description: This article provides information on high availability in Azure Dat
--++ Last updated 08/3/2022
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-known-issues-limitations.md
description: Lists the known issues that customers should be aware of.
--++ Last updated 06/24/2022
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-logical.md
description: Describes logical decoding and wal2json for change data capture in
--++ Last updated 06/24/2022
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-read-replicas.md
description: This article describes the read replica feature in Azure Database f
--++ Last updated 06/24/2022
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
description: Describes the policy around Postgres major and minor versions in Az
--++ Last updated 09/14/2022
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-cli.md
description: Learn how to set backup configurations and restore a server in Azur
--++ ms.devlang: azurecli Last updated 06/24/2022
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-portal.md
description: This article describes how to restore a server in Azure Database fo
--++ Last updated 06/24/2022
postgresql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-powershell.md
description: Learn how to backup and restore a server in Azure Database for Post
--++ ms.devlang: azurepowershell Last updated 06/24/2022
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-upgrade-using-dump-and-restore.md
description: Describes offline upgrade methods using dump and restore databases
--++ Last updated 06/24/2022
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
The service runs community version of PostgreSQL. This allows full application c
## Frequently Asked Questions
-Will Flexible Server replace Single Server or Will Single Server be retired soon?
+Will Flexible Server replace Single Server or will Single Server be retired soon?
We continue to support Single Server and encourage you to adopt Flexible Server which has richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you will receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 11/04/2022 Last updated : 01/05/2023 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| |Azure-managed Disks | All public regions<br/> All Government regions<br/>All China regions | [Select for known limitations](../virtual-machines/disks-enable-private-links-for-import-export-portal.md#limitations) | GA <br/> [Learn how to create a private endpoint for Azure Managed Disks.](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) |
-| Azure Batch (batchAccount) | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
-| Azure Batch (nodeManagement) | [Selected regions](../batch/simplified-compute-node-communication.md#supported-regions) | Supported for [simplified compute node communication](../batch/simplified-compute-node-communication.md) | Preview <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
+| Azure Batch (batchAccount) | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
+| Azure Batch (nodeManagement) | [Selected regions](../batch/simplified-compute-node-communication.md#supported-regions) | Supported for [simplified compute node communication](../batch/simplified-compute-node-communication.md) | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
| Azure Functions | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Functions.](../azure-functions/functions-create-vnet.md) | ### Containers
purview Apply Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/apply-classifications.md
Title: Apply classifications on assets
-description: This document describes how to apply classifications on assets.
--
+ Title: Automatically apply classifications on assets
+description: This document describes how to automatically apply classifications on assets.
++ Previously updated : 09/27/2021 Last updated : 12/30/2022
-# Apply classifications on assets in Microsoft Purview
+# Automatically apply classifications on assets in Microsoft Purview
-This article discusses how to apply classifications on assets.
+After data sources are [registered](manage-data-sources.md#register-a-new-source) in the Microsoft Purview Data Map, the next step is to [scan](concept-scans-and-ingestion.md) the data sources. The scanning process establishes a connection to the data source, captures technical metadata, and can automatically classify data using either the [supported system classifications](supported-classifications.md) or [rules for your custom classifications](create-a-custom-classification-and-classification-rule.md#custom-classification-rules). For example, if you have a file named *multiple.docx* and it has a National ID number in its content, during the scanning process Microsoft Purview adds the classification **EU National Identification Number** to the file asset's detail page.
-## Introduction
+These [classifications](concept-classification.md) help you and your team identify the kinds of data you have across your data estate, for example, whether files or tables contain credit card numbers or addresses. You can then more easily search for certain kinds of information, like customer IDs, or prioritize security for sensitive data types.
-Classifications can be system or custom types. System classifications are present in Microsoft Purview by default. Custom classifications can be created based on a regular expression pattern and keyword lists. Classifications can be applied to assets either automatically via scanning or manually.
+Classifications can be automatically applied on file and column assets during scanning.
-This document explains how to apply classifications to your data.
+In this article we'll discuss:
-## Prerequisites
+- [How Microsoft Purview classifies data](#how-microsoft-purview-classifies-assets)
+- [The steps to automatically apply classifications](#automatically-apply-classifications)
-- Create custom classifications based on your need.-- Set up scan on your data sources.
+## How Microsoft Purview classifies assets
-## Apply classifications
-In Microsoft Purview, you can apply system or custom classifications on a file, table, or column asset. This article describes the steps to manually apply classifications on your assets.
+When a data source is scanned, Microsoft Purview compares data in the asset to a list of possible classifications called a [scan rule set](create-a-scan-rule-set.md).
-### Apply classification to a file asset
-Microsoft Purview can scan and automatically classify documents. For example, if you have a file named *multiple.docx* and it has a National ID number in its content, Microsoft Purview adds the classification **EU National Identification Number** to the file asset's detail page.
+There are [system scan rule sets](create-a-scan-rule-set.md#system-scan-rule-sets) already available for each data source; they contain every currently available system classification for that data source. Or, you can [create a custom scan rule set](create-a-scan-rule-set.md) to make a list of classifications tailored to your data set.
-In some scenarios, you might want to manually add classifications to your file asset or if you have multiple files that are grouped into a resource set, add classifications at the resource set level.
+Making a custom rule set for your data can be a good idea if your data is limited to specific kinds of information or regions, because comparing your data to fewer classification types speeds up the scanning process. For example, if your dataset only contains European data, you could create a custom scan rule set that excludes identification types for other regions.
-Follow these steps to add a custom or system classification to a partition resource set:
+You might also make a custom rule set if you've created [custom classifications](create-a-custom-classification-and-classification-rule.md#steps-to-create-a-custom-classification) and [classification rules](create-a-custom-classification-and-classification-rule.md#custom-classification-rules), so that your custom classifications can be automatically applied during scanning.
-1. Search or browse the partition and navigate to the asset detail page.
+For more information about the available system classifications and how your data is classified, see the [system classifications page](supported-classifications.md).
- :::image type="content" source="./media/apply-classifications/asset-detail-page.png" alt-text="Screenshot showing the asset detail page.":::
+## Automatically apply classifications
-1. On the **Overview** tab, view the **Classifications** section to see if there are any existing classifications. Select **Edit**.
+>[!NOTE]
+>Table assets are not automatically assigned classifications, because the classifications are assigned to their columns, but you can [manually apply classifications to table assets](manually-apply-classifications.md#manually-apply-classification-to-a-table-asset).
-1. From the **Classifications** drop-down list, select the specific classifications you're interested in. For example, **Credit Card Number**, which is a system classification and **CustomerAccountID**, which is a custom classification.
+After data sources are [registered](manage-data-sources.md#register-a-new-source), you can automatically classify data in that source's data assets by running a [scan](concept-scans-and-ingestion.md).
- :::image type="content" source="./media/apply-classifications/select-classifications.png" alt-text="Screenshot showing how to select classifications to add to an asset.":::
+1. Check the **Scan** section of the [source article](microsoft-purview-connector-overview.md) for your data source to confirm any prerequisites or authentication are set up and ready for a scan.
-1. Select **Save**
+1. In the Microsoft Purview Data Map, search for the registered source that has the data assets (files and columns) you want to classify.
-1. On the **Overview** tab, confirm that the classifications you selected appear under the **Classifications** section.
+1. Select the **New Scan** icon under the resource.
- :::image type="content" source="./media/apply-classifications/confirm-classifications.png" alt-text="Screenshot showing how to confirm classifications were added to an asset.":::
+ :::image type="content" source="./media/apply-classifications/new-scan.png" alt-text="Screenshot of the Microsoft Purview Data Map, with the new scan button selected under a registered source.":::
-### Apply classification to a table asset
+ >[!TIP]
+ >If you don't see the New Scan button, you may not have correct permissions. To run a scan, you'll need at least [data source administrator permissions](catalog-permissions.md) on the collection where the source is registered.
-When Microsoft Purview scans your data sources, it doesn't automatically assign classifications to table assets. If you want your table asset to have a classification, you must add it manually.
+1. Select your credential and authenticate with your source. (For more information about authenticating with your source, see the **prerequisite** and **scan** sections of the [source article](microsoft-purview-connector-overview.md) for your specific source.) Select **Continue**.
-To add a classification to a table asset:
+1. If necessary, select the assets in the source you want to scan. You can scan all assets, or a subset of folders, files, or tables depending on the source.
-1. Find a table asset that you're interested in. For example, **Customer** table.
+1. Select your scan rule set. You'll see a list of available scan rule sets and can select one, or you can choose to create a new scan rule set using the **New scan rule set** button at the top. The scan rule set will determine which classifications will be compared and applied to your data. For more information, see [how Microsoft Purview classifies assets](#how-microsoft-purview-classifies-assets).
-1. Confirm that no classifications are assigned to the table. Select **Edit**
+ :::image type="content" source="./media/apply-classifications/select-scan-rule-set.png" alt-text="Screenshot of the scan rule set page of the scan menu, with the new scan rule set and existing scan rule set buttons highlighted.":::
- :::image type="content" source="./media/apply-classifications/select-edit-from-table-asset.png" alt-text="Screenshot showing how to view and edit the classifications of a table asset.":::
+ >[!TIP]
+ >For more information about the options available when creating a scan rule set, start at step 4 of these [steps to create a scan rule set](create-a-scan-rule-set.md#steps-to-create-a-scan-rule-set).
-1. From the **Classifications** drop-down list, select one or more classifications. This example uses a custom classification named **CustomerInfo**, but you can select any classifications for this step.
+1. Schedule your scan.
- :::image type="content" source="./media/apply-classifications/select-classifications-in-table.png" alt-text="Screenshot showing how to select classifications to add to a table asset.":::
+1. Save and run your scan. Applicable classifications in your scan rule set will be automatically applied to the assets you scan. You'll be able to view and manage them once the scan is complete.
-1. Select **Save** to save the classifications.
-
-1. On the **Overview** page, verify that Microsoft Purview added your new classifications.
-
- :::image type="content" source="./media/apply-classifications/verify-classifications-added-to-table.png" alt-text="Screenshot showing how to verify that classifications were added to a table asset.":::
-
-### Add classification to a column asset
-
-Microsoft Purview automatically scans and adds classifications to all column assets. However, if you want to change the classification, you can do so at the column level.
-
-To add a classification to a column:
-
-1. Find and select the column asset, and then select **Edit** from the **Overview** tab.
-
-1. Select the **Schema** tab
-
- :::image type="content" source="./media/apply-classifications/edit-column-schema.png" alt-text="Screenshot showing how to edit the schema of a column.":::
-
-1. Identify the columns you're interested in and select **Add a classification**. This example adds a **Common Passwords** classification to the **PasswordHash** column.
-
- :::image type="content" source="./media/apply-classifications/add-classification-to-column.png" alt-text="Screenshot showing how to add a classification to a column.":::
-
-1. Select **Save**
-
-1. Select the **Schema** tab and confirm that the classification has been added to the column.
-
- :::image type="content" source="./media/apply-classifications/confirm-classification-added.png" alt-text="Screenshot showing how to confirm that a classification was added to a column schema.":::
-
-## View classification details
-Microsoft Purview captures important details like who applied a classification and when it was applied. To view the details, hover over the classification to revel the Classification details card. The classification details card shows the following information:
-- Classification name - Name of the classification applied on the asset or column.-- Applied by - Who applied the classification. Possible values are scan and user name.-- Applied time - Local timestamp when the classification was applied via scan or manually.-- Classification type - System or custom.-
-Users with *Data Curator* role will see additional details for classifications that were applied automatically via scan. These details will include sample count that the scanner read to classify the data and distinct data count in the sample that the scanner found.
--
-## Impact of rescanning on existing classifications
-
-Classifications are applied the first time, based on sample set check on your data and matching it against the set regex pattern. At the time of rescan, if new classifications apply, the column gets additional classifications on it. Existing classifications stay on the column, and must be removed manually.
## Next steps
-To learn how to create a custom classification, see [Create a custom classification](create-a-custom-classification-and-classification-rule.md).
+
+- To learn how to create a custom classification, see [create a custom classification](create-a-custom-classification-and-classification-rule.md).
+- To learn about how to manually apply classifications, see [manually apply classifications](manually-apply-classifications.md).
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Previously updated : 06/17/2022 Last updated : 12/19/2022 # Access control in the Microsoft Purview governance portal
The Microsoft Purview governance portal uses a set of predefined roles to contro
|User Scenario|Appropriate Role(s)| |-|--| |I just need to find assets, I don't want to edit anything|Data reader|
-|I need to edit information about assets, assign classifications, associate them with glossary entries, and so on.|Data curator|
-|I need to edit the glossary or set up new classification definitions|Data curator|
+|I need to edit and manage information about assets|Data curator|
+|I want to create custom classifications | Data curator **or** data source administrator |
+|I need to edit the business glossary |Data curator|
|I need to view Data Estate Insights to understand the governance posture of my data estate|Data curator| |My application's Service Principal needs to push data to the Microsoft Purview Data Map|Data curator| |I need to set up scans via the Microsoft Purview governance portal|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.|
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-classification.md
Title: Classification best practices for the Microsoft Purview governance portal description: This article provides best practices for classification in the Microsoft Purview governance portal so you can effectively identify sensitive data across your environment.--++
Here are some considerations to bear in mind as you're defining classifications:
## Next steps -- [Apply system classification](./apply-classifications.md)
+- [Automatically apply classifications](./apply-classifications.md)
+- [Manually apply classifications](./manually-apply-classifications.md)
- [Create custom classification](./create-a-custom-classification-and-classification-rule.md)
purview Concept Best Practices Lineage Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-lineage-azure-data-factory.md
Title: Microsoft Purview Data Lineage best practices description: This article provides best practices for data Lineage various data sources in Microsoft Purview.--++ Previously updated : 10/25/2021 Last updated : 12/12/2022 # Microsoft Purview Data Lineage best practices Data Lineage is broadly understood as the lifecycle that spans the data's origin, and where it moves over time across the data estate. Microsoft Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including:
-* Completely raw data staged from various platforms
+* Raw data staged from various platforms
* Transformed and prepared data * Data used by visualization platforms ## Why do you need adopt Lineage?
-Data lineage is the process of describing what data exists, where it is stored and how it flows between systems. There are many reasons why data lineage is important, but at a high level these can all be boiled down to three categories which we will explore here:
+Data lineage is the process of describing what data exists, where it's stored and how it flows between systems. There are many reasons why data lineage is important, but at a high level these can all be boiled down to three categories that we'll explore here:
* Track data in reports * Impact analysis * Capture the changes and where the data has resided through the data life cycle
Data lineage is the process of describing what data exists, where it is
### Azure Data Factory instance
-* Data lineage won't be reported to the catalog automatically until the Data Factory connection status turns to Connected. The rest of status Disconnected and CannotAccess cannot capture lineage.
+* Data lineage won't be reported to the catalog automatically until the Data Factory connection status turns to Connected. The rest of status Disconnected and CannotAccess can't capture lineage.
:::image type="content" source="./media/how-to-link-azure-data-factory/data-factory-connection.png" alt-text="Screen shot showing a data factory connection list." lightbox="./media/how-to-link-azure-data-factory/data-factory-connection.png":::
Data lineage is the process of describing what data exists, where it is
* [Execute SSIS Package activity](../data-factory/how-to-invoke-ssis-package-ssis-activity.md) * Microsoft Purview drops lineage if the source or destination uses an unsupported data storage system.
- * Supported data sources in copy activity is listed **Copy activity support** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
- * Supported data sources in data flow activity is listed **Data Flow support** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
- * Supported data sources in SSIS is listed **SSIS execute package activity support** of [Lineage from SQL Server Integration Services](how-to-lineage-sql-server-integration-services.md)
+ * Supported data sources in copy activity are listed **Copy activity support** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
+ * Supported data sources in data flow activity are listed **Data Flow support** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
+ * Supported data sources in SSIS are listed **SSIS execute package activity support** of [Lineage from SQL Server Integration Services](how-to-lineage-sql-server-integration-services.md)
-* Microsoft Purview cannot capture lineage if Azure Data Factory copy activity uses copy activity features listed in **Limitations on copy activity lineage** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
+* Microsoft Purview can't capture lineage if Azure Data Factory copy activity uses copy activity features listed in **Limitations on copy activity lineage** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
-* For the lineage of Dataflow activity, Microsoft Purview only support source and sink. The lineage for Dataflow transformation is not supported yet.
+* For the lineage of Dataflow activity, Microsoft Purview only supports source and sink. The lineage for Dataflow transformation isn't supported yet.
* Data flow lineage doesn't integrate with Microsoft Purview resource set. **Resource set example:** Qualified name: https://myblob.blob.core.windows.net/sample-data/data{N}.csv Display name: "data"
-* For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
+* For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation isn't supported yet.
:::image type="content" source="./media/concept-best-practices-lineage/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Microsoft Purview." lightbox="./media/concept-best-practices-lineage/ssis-lineage.png":::
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-classification.md
Custom classification rules can be based on a *regular expression* pattern or *d
* [Read about classification best practices](concept-best-practices-classification.md) * [Create custom classifications](create-a-custom-classification-and-classification-rule.md)
-* [Apply classifications](apply-classifications.md)
+* [Automatically apply classifications](apply-classifications.md)
+* [Manually apply classifications](manually-apply-classifications.md)
* [Use the Microsoft Purview governance portal](use-azure-purview-studio.md)
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-custom-classification-and-classification-rule.md
Previously updated : 09/27/2021 Last updated : 12/29/2022 # Custom classifications in Microsoft Purview This article describes how you can create custom classifications to define data types in your data estate that are unique to your organization. It also describes the creation of custom classification rules that let you find specified data throughout your data estate.
+> [!IMPORTANT]
+>To create a custom classification you need either **data curator** or **data source administrator** permission on a collection. Permissions at any collection level are sufficient.
+>For more information about permissions, see: [Microsoft Purview permissions](catalog-permissions.md).
+ ## Default system classifications The Microsoft Purview Data Catalog provides a large set of default system classifications that represent typical personal data types that you might have in your data estate. For the entire list of available system classifications, see [Supported classifications in Microsoft Purview](supported-classifications.md).
You also have the ability to create custom classifications, if any of the defaul
To create a custom classification, follow these steps:
+1. You'll need [**data curator** or **data source administrator** permissions on any collection](catalog-permissions.md) to be able to create a custom classification.
+ 1. From your catalog, select **Data Map** from the left menu.
-2. Select **Classifications** under **Annotation management**.
+1. Select **Classifications** under **Annotation management**.
-3. Select **+ New**
+1. Select **+ New**
:::image type="content" source="media/create-a-custom-classification-and-classification-rule/new-classification.png" alt-text="New classification" border="true":::
These details include the count of how many instances there are, the formal name
The catalog service provides a set of default classification rules, which are used by the scanner to automatically detect certain data types. You can also add your own custom classification rules to detect other types of data that you might be interested in finding across your data estate. This capability can be powerful when you're trying to find data within your data estate.
+>[!NOTE]
+>Custom classification rules are only supported in the English language.
As an example, let's say that a company named Contoso has employee IDs that are standardized throughout the company with the word "Employee" followed by a GUID to create EMPLOYEE{GUID}. For example, one instance of an employee ID looks like `EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55`. Contoso can configure the scanning system to find instances of these IDs by creating a custom classification rule. They can supply a regular expression that matches the data pattern, in this
The scanning system can then use this rule to examine the actual data in the col
To create a custom classification rule:
-1. Create a custom classification by following the instructions in the previous section. You will add this custom classification in the classification rule configuration so that the system applies it when it finds a match in the column.
+1. Create a custom classification by following the instructions in the previous section. You'll add this custom classification in the classification rule configuration so that the system applies it when it finds a match in the column.
2. Select the **Data Map** icon.
To create a custom classification rule:
### Creating a Regular Expression Rule
-1. If creating a regular expression rule, you will see the following screen. You may optionally upload a file that will be used to **generate suggested regex patterns** for your rule.
+>[!IMPORTANT]
+>Regular expressions in custom classifications are case insensitive.
+
+1. If creating a regular expression rule, you'll see the following screen. You may optionally upload a file that will be used to **generate suggested regex patterns** for your rule. Only English language rules are supported.
:::image type="content" source="media/create-a-custom-classification-and-classification-rule/create-new-regex-rule.png" alt-text="Create new regex rule" border="true":::
To create a custom classification rule:
|Field |Description |
|--|--|
- |Data Pattern |Optional. A regular expression that represents the data that's stored in the data field. The limit is very large. In the previous example, the data patterns test for an employee ID that's literally the word `Employee{GUID}`. |
- |Column Pattern |Optional. A regular expression that represents the column names that you want to match. The limit is very large. |
+ |Data Pattern |Optional. A regular expression that represents the data that's stored in the data field. The limit is large. In the previous example, the data patterns test for an employee ID that's literally the word `Employee{GUID}`. |
+ |Column Pattern |Optional. A regular expression that represents the column names that you want to match. The limit is large. |
1. Under **Data Pattern** you can use the **Minimum match threshold** to set the minimum percentage of the distinct data value matches in a column that must be found by the scanner for the classification to be applied. The suggested value is 60%. If you specify multiple data patterns, this setting is disabled and the value is fixed at 60%.
To create a custom classification rule:
1. You can now verify your rule and **create** it.

1. Test the classification rule before completing the creation process to validate that it will apply tags to your assets. The classifications in the rule will be applied to the sample data you upload, just as they would be in a scan. This means all of the system classifications and your custom classification will be matched to the data in your file.
- Input files may include delimited files (CSV, PSV, SSV, TSV), JSON, or XML content. The content will be parsed based on the file extension of the input file. Delimited data may have a file extension that matches any of the mentioned types. For example, TSV data can exist in a file named MySampleData.csv. Delimited content must also have a minimum of 3 columns.
+ Input files may include delimited files (CSV, PSV, SSV, TSV), JSON, or XML content. The content will be parsed based on the file extension of the input file. Delimited data may have a file extension that matches any of the mentioned types. For example, TSV data can exist in a file named MySampleData.csv. Delimited content must also have a minimum of three columns.
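   As an illustration, a minimal delimited test file that meets the three-column requirement could look like the following. The column names and values are invented for this sketch:

```bash
# Illustrative sample only: a minimal CSV test file with the required three columns.
cat > MySampleData.csv <<'EOF'
EmployeeName,EmployeeId,Department
Alice Smith,EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55,Finance
Bob Jones,EMPLOYEE1a2b3c4d-0000-420c-a285-0d0fc23f1f55,Engineering
EOF
```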
:::image type="content" source="media/create-a-custom-classification-and-classification-rule/test-rule-screen.png" alt-text="Test rule before creating" border="true":::
To create a custom classification rule:
### Creating a Dictionary Rule
-1. If creating a dictionary rule, you will see the following screen. Upload a file that contains all possible values for the classification you're creating in a single column.
+1. If creating a dictionary rule, you'll see the following screen. Upload a file that contains all possible values for the classification you're creating in a single column. Only English language rules are supported.
:::image type="content" source="media/create-a-custom-classification-and-classification-rule/dictionary-rule.png" alt-text="Create dictionary rule" border="true":::
purview How To Data Share Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-share-faq.md
Here are some frequently asked questions for Microsoft Purview Data Sharing.
* **Recipient** - A recipient is a user or service principal to which the share is sent.

## Can I use the API or SDK for storage in-place sharing?
-Yes, you can use [REST API](/rest/api/purview/) or [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider/) for programmatic experience to share data.
+Yes, you can use the [REST API](/rest/api/purview/) or [.NET SDK](/dotnet/api/overview/azure/purview) to share data programmatically.
## What are the roles and permissions required to share data or receive shares?
purview How To Workflow Asset Curation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-asset-curation.md
+
+ Title: Asset curation approval workflow
+description: This article describes how to create and manage workflows to approve data asset curation in Microsoft Purview.
+++++ Last updated : 01/03/2023++++
+# Approval workflow for asset curation
++
+This guide will take you through the creation and management of approval workflows for asset curation.
+
+## Create and enable a new approval workflow for asset curation
+
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
+
+1. To create new workflows, select **Authoring** in the workflow section. This will take you to the workflow authoring experience.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
+
+ >[!NOTE]
+ >If the authoring tab is greyed out, you don't have the permissions to be able to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
+
+1. To create a new workflow, select the **+New** button.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the plus sign New button highlighted.":::
+
+1. To create **Approval workflows for asset curation**, select **Data Catalog** and then select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/select-data-catalog.png" alt-text="Screenshot showing the new workflows menu, with Data Catalog selected.":::
+
+1. In the next screen, you'll see all the templates provided by Microsoft Purview to create a workflow. Select the template you want to start with, and then select **Continue**. Each of these templates specifies the kind of action that will trigger the workflow. In the screenshot below, we've selected **Update asset attributes** to create an approval workflow for asset updates.
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/update-asset-attributes-continue.png" alt-text="Screenshot showing the new data catalog workflow menu, showing template options, with the Continue button selected.":::
+
+1. Next, enter a workflow name and optionally add a description. Then select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/name-and-continue.png" alt-text="Screenshot showing the new data catalog workflow menu with a name entered into the name textbox.":::
+
+1. You'll now be presented with a canvas where the selected template is loaded by default.
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/workflow-authoring-canvas-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with the selected template workflow populated in the central workspace." lightbox="./media/how-to-workflow-asset-curation/workflow-authoring-canvas-inline.png":::
+
+1. The default template can be used as it is by populating the approver's email address in the **Start and wait for an approval** connector.
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/add-approver-email-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with the start and wait for an approval step opened, and the Assigned to textbox highlighted." lightbox="./media/how-to-workflow-asset-curation/add-approver-email-inline.png":::
+
+ The default template has the following steps:
+ 1. Trigger when an asset is updated. The update can be made on the Overview, Schema, or Contacts tab.
+ 1. Approval connector that specifies a user or group that will be contacted to approve the request.
+ 1. Condition to check the approval status.
+ - If approved:
+ 1. Update the asset in the Microsoft Purview Data Catalog.
+ 1. Send an email to the requestor confirming that their request is approved and the asset update succeeded.
+ - If rejected:
+ 1. Send an email to the requestor stating that their asset update request is denied.
+
+1. You can also modify the template by adding more connectors to suit your organizational needs. Add a new step to the end of the template by selecting the **New step** button. Add steps between any already existing steps by selecting the arrow icon between any steps.
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/modify-template-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with a plus sign button highlighted on the arrow between the two top steps, and the Next Step button highlighted at the bottom of the workspace." lightbox="./media/how-to-workflow-asset-curation/modify-template-inline.png":::
+
+1. Once you're done defining a workflow, you need to bind the workflow to a collection hierarchy path. The binding implies that this workflow is triggered only for update operations on data assets in that collection. A workflow can be bound to only one hierarchy path. To bind a workflow or to apply a scope to a workflow, you need to select **Apply workflow**. Select the scopes you want this workflow to be associated with and select **OK**.
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/select-apply-workflow.png" alt-text="Screenshot showing the new data catalog workflow menu with the Apply Workflow button highlighted at the top of the workspace.":::
+
+ :::image type="content" source="./media/how-to-workflow-asset-curation/select-okay.png" alt-text="Screenshot showing the apply workflow window, showing a list of items that the workflow can be applied to. At the bottom of the window, the O K button is selected.":::
+
+ >[!NOTE]
+ > - The Microsoft Purview workflow engine will always resolve to the closest workflow that the collection hierarchy path is associated with. If a direct binding isn't found, it traverses up the tree to find the workflow associated with the closest parent in the collection tree.
+
+
+1. By default, the workflow will be enabled. To disable, toggle the Enable button in the top menu.
+
+1. Finally, select **Save and close** to create the workflow.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-enabled.png" alt-text="Screenshot showing the workflow authoring page, showing the newly created workflow listed among all other workflows.":::
+
+## Edit an existing workflow
+
+To modify an existing workflow, select the workflow and then select **Edit** in the top menu. You'll then be presented with the canvas containing workflow definition. Modify the workflow and select **Save** to commit changes.
++
+## Disable a workflow
+
+To disable a workflow, select the workflow and then select **Disable** in the top menu. You can also disable the workflow by selecting **Edit** and changing the enable toggle in workflow canvas.
++
+## Delete a workflow
+
+To delete a workflow, select the workflow and then select **Delete** in the top menu.
++
+## Limitations for asset curation with approval workflow enabled
+
+* Lineage updates are directly stored in Purview data catalog without any approvals.
+
+## Next steps
+
+For more information about workflows, see these articles:
+
+- [What are Microsoft Purview workflows](concept-workflow.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
purview Manually Apply Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manually-apply-classifications.md
+
+ Title: Manually apply classifications on assets
+description: This document describes how to manually apply classifications on assets.
+++++ Last updated : 12/30/2022+
+# Manually apply classifications on assets in Microsoft Purview
+
+This article discusses how to manually apply classifications on assets in the Microsoft Purview Governance Portal.
+
+[Classifications](concept-classification.md) are logical labels to help you and your team identify the kinds of data you have across your data estate. For example, whether files or tables contain credit card numbers or addresses.
+
+Microsoft Purview [automatically applies classifications to some assets during the scanning process](apply-classifications.md), but there are some scenarios when you may want to manually apply more classifications. For example, Microsoft Purview doesn't automatically apply classifications to table assets (only their columns), you might want to apply custom classifications, or you might want to add classifications to assets grouped by a [resource set](concept-resource-sets.md).
+
+>[!NOTE]
+>Some custom classifications can be [automatically applied](apply-classifications.md) after setting up a [custom classification rule.](create-a-custom-classification-and-classification-rule.md#custom-classification-rules)
+
+Follow the steps in this article to manually apply classifications to [file](#manually-apply-classification-to-a-file-asset), [table](#manually-apply-classification-to-a-table-asset), and [column](#manually-add-classification-to-a-column-asset) assets.
+
+## Manually apply classification to a file asset
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) the Microsoft Purview Data Catalog for the file you're interested in and navigate to the asset detail page.
+
+ :::image type="content" source="./media/apply-classifications/asset-detail-page.png" alt-text="Screenshot showing the asset detail page." lightbox="./media/apply-classifications/asset-detail-page.png":::
+
+1. On the **Overview** tab, view the **Classifications** section to see if there are any existing classifications. Select **Edit**.
+
+1. From the **Classifications** drop-down list, select the specific classifications you're interested in. In our example, we're adding **Credit Card Number**, which is a system classification, and **CustomerAccountID**, which is a custom classification.
+
+ :::image type="content" source="./media/apply-classifications/select-classifications.png" alt-text="Screenshot showing how to select classifications to add to an asset." lightbox="./media/apply-classifications/select-classifications.png":::
+
+1. Select **Save**.
+
+1. On the **Overview** tab, confirm that the classifications you selected appear under the **Classifications** section.
+
+ :::image type="content" source="./media/apply-classifications/confirm-classifications.png" alt-text="Screenshot showing how to confirm classifications were added to an asset." lightbox="./media/apply-classifications/confirm-classifications.png":::
+
+## Manually apply classification to a table asset
+
+When Microsoft Purview scans your data sources, it doesn't automatically assign classifications to table assets (only to their columns). For a table asset to have classifications, you must add them manually.
+
+To add a classification to a table asset:
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) the data catalog for the table asset that you're interested in. For example, **Customer** table.
+
+1. Confirm that no classifications are assigned to the table. Select **Edit**.
+
+ :::image type="content" source="./media/apply-classifications/select-edit-from-table-asset.png" alt-text="Screenshot showing how to view and edit the classifications of a table asset." lightbox="./media/apply-classifications/select-edit-from-table-asset.png":::
+
+1. From the **Classifications** drop-down list, select one or more classifications. This example uses a custom classification named **CustomerInfo**, but you can select any classifications for this step.
+
+ :::image type="content" source="./media/apply-classifications/select-classifications-in-table.png" alt-text="Screenshot showing how to select classifications to add to a table asset." lightbox="./media/apply-classifications/select-classifications-in-table.png":::
+
+1. Select **Save** to save the classifications.
+
+1. On the **Overview** page, verify that Microsoft Purview added your new classifications.
+
+ :::image type="content" source="./media/apply-classifications/verify-classifications-added-to-table.png" alt-text="Screenshot showing how to verify that classifications were added to a table asset." lightbox="./media/apply-classifications/verify-classifications-added-to-table.png":::
+
+## Manually add classification to a column asset
+
+Microsoft Purview automatically scans and adds classifications to all column assets. However, if you want to change the classification, you can do so at the column level:
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) the data catalog for the table asset that contains the column you want to update.
+
+1. Select **Edit** from the **Overview** tab.
+
+1. Select the **Schema** tab.
+
+ :::image type="content" source="./media/apply-classifications/edit-column-schema.png" alt-text="Screenshot showing how to edit the schema of a column." lightbox="./media/apply-classifications/edit-column-schema.png":::
+
+1. Identify the columns you're interested in and select **Add a classification**. This example adds a **Common Passwords** classification to the **PasswordHash** column.
+
+ :::image type="content" source="./media/apply-classifications/add-classification-to-column.png" alt-text="Screenshot showing how to add a classification to a column." lightbox="./media/apply-classifications/add-classification-to-column.png":::
+
+1. Select **Save**.
+
+1. Select the **Schema** tab and confirm that the classification has been added to the column.
+
+ :::image type="content" source="./media/apply-classifications/confirm-classification-added.png" alt-text="Screenshot showing how to confirm that a classification was added to a column schema." lightbox="./media/apply-classifications/confirm-classification-added.png":::
++
+## Next steps
+
+- To learn how to create a custom classification, see [create a custom classification](create-a-custom-classification-and-classification-rule.md).
+- To learn about how to automatically apply classifications, see [automatically apply classifications](apply-classifications.md).
purview Register Scan Amazon Rds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-rds.md
Title: Amazon RDS Multi-cloud scanning connector for Microsoft Purview
+ Title: Amazon RDS Multicloud scanning connector for Microsoft Purview
description: This how-to guide describes details of how to scan Amazon RDS databases, including both Microsoft SQL and PostgreSQL data.
Previously updated : 10/18/2021 Last updated : 12/07/2022
# Customer intent: As a security officer, I need to understand how to use the Microsoft Purview connector for Amazon RDS service to set up, configure, and scan my Amazon RDS databases.
-# Amazon RDS Multi-Cloud Scanning Connector for Microsoft Purview (Public preview)
+# Amazon RDS Multicloud Scanning Connector for Microsoft Purview (Public preview)
-The Multi-Cloud Scanning Connector for Microsoft Purview allows you to explore your organizational data across cloud providers, including Amazon Web Services, in addition to Azure storage services.
+The Multicloud Scanning Connector for Microsoft Purview allows you to explore your organizational data across cloud providers, including Amazon Web Services, in addition to Azure storage services.
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]

This article describes how to use Microsoft Purview to scan your structured data currently stored in Amazon RDS, including both Microsoft SQL and PostgreSQL databases, and discover what types of sensitive information exists in your data. You'll also learn how to identify the Amazon RDS databases where the data is currently stored for easy information protection and data compliance.
-For this service, use Microsoft Purview to provide a Microsoft account with secure access to AWS, where the Multi-Cloud Scanning Connectors for Microsoft Purview will run. The Multi-Cloud Scanning Connectors for Microsoft Purview use this access to your Amazon RDS databases to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Microsoft Purview classification and labeling reports to analyze and review your data scan results.
+For this service, use Microsoft Purview to provide a Microsoft account with secure access to AWS, where the Multicloud Scanning Connectors for Microsoft Purview will run. The Multicloud Scanning Connectors for Microsoft Purview use this access to your Amazon RDS databases to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Microsoft Purview classification and labeling reports to analyze and review your data scan results.
> [!IMPORTANT]
-> The Multi-Cloud Scanning Connectors for Microsoft Purview are separate add-ons to Microsoft Purview. The terms and conditions for the Multi-Cloud Scanning Connectors for Microsoft Purview are contained in the agreement under which you obtained Microsoft Azure Services. For more information, see Microsoft Azure Legal Information at https://azure.microsoft.com/support/legal/.
+> The Multicloud Scanning Connectors for Microsoft Purview are separate add-ons to Microsoft Purview. The terms and conditions for the Multicloud Scanning Connectors for Microsoft Purview are contained in the agreement under which you obtained Microsoft Azure Services. For more information, see Microsoft Azure Legal Information at https://azure.microsoft.com/support/legal/.
>

## Microsoft Purview scope for Amazon RDS
Ensure that you've performed the following prerequisites before adding your Amaz
Microsoft Purview supports scanning only when your database is hosted in a virtual private cloud (VPC), where your RDS database can only be accessed from within the same VPC.
-The Azure Multi-Cloud Scanning Connectors for Microsoft Purview service run in a separate, Microsoft account in AWS. To scan your RDS databases, the Microsoft AWS account needs to be able to access your RDS databases in your VPC. To allow this access, youΓÇÖll need to configure [AWS PrivateLink](https://aws.amazon.com/privatelink/) between the RDS VPC (in the customer account) to the VPC where the Multi-Cloud Scanning Connectors for Microsoft Purview run (in the Microsoft account).
+The Azure Multicloud Scanning Connectors for Microsoft Purview service runs in a separate Microsoft account in AWS. To scan your RDS databases, the Microsoft AWS account needs to be able to access your RDS databases in your VPC. To allow this access, you'll need to configure [AWS PrivateLink](https://aws.amazon.com/privatelink/) between the RDS VPC (in the customer account) and the VPC where the Multicloud Scanning Connectors for Microsoft Purview run (in the Microsoft account).
-The following diagram shows the components in both your customer account and Microsoft account. Highlighted in yellow are the components youΓÇÖll need to create to enable connectivity RDS VPC in your account to the VPC where the Multi-Cloud Scanning Connectors for Microsoft Purview run in the Microsoft account.
+The following diagram shows the components in both your customer account and the Microsoft account. Highlighted in yellow are the components you'll need to create to enable connectivity from the RDS VPC in your account to the VPC where the Multicloud Scanning Connectors for Microsoft Purview run in the Microsoft account.
> [!IMPORTANT]
This CloudFormation template is available for download from the [Azure GitHub re
|Name |Description |
|--|--|
- |**Endpoint & port** | Enter the resolved IP address of the RDS endpoint URL and port. For example: `192.168.1.1:5432` <br><br>- **If an RDS proxy is configured**, use the IP address of the read/write endpoint of the proxy for the relevant database. We recommend using an RDS proxy when working with Microsoft Purview, as the IP address is static.<br><br>- **If you have multiple endpoints behind the same VPC**, enter up to 10, comma-separated endpoints. In this case, a single load balancer is created to the VPC, allowing a connection from the Amazon RDS Multi-Cloud Scanning Connector for Microsoft Purview in AWS to all RDS endpoints in the VPC. |
+ |**Endpoint & port** | Enter the resolved IP address of the RDS endpoint URL and port. For example: `192.168.1.1:5432` <br><br>- **If an RDS proxy is configured**, use the IP address of the read/write endpoint of the proxy for the relevant database. We recommend using an RDS proxy when working with Microsoft Purview, as the IP address is static.<br><br>- **If you have multiple endpoints behind the same VPC**, enter up to 10, comma-separated endpoints. In this case, a single load balancer is created to the VPC, allowing a connection from the Amazon RDS Multicloud Scanning Connector for Microsoft Purview in AWS to all RDS endpoints in the VPC. |
|**Networking** | Enter your VPC ID |
|**VPC IPv4 CIDR** | Enter your VPC's CIDR value. You can find this value by selecting the VPC link on your RDS database page. For example: `192.168.0.0/16` |
|**Subnets** |Select all the subnets that are associated with your VPC. |
This CloudFormation template is available for download from the [Azure GitHub re
1. On the **Sources** page, select **Register.** On the **Register sources** page that appears on the right, select the **Database** tab, and then select **Amazon RDS (PostgreSQL)** or **Amazon RDS (SQL)**.
- :::image type="content" source="media/register-scan-amazon-rds/register-amazon-rds.png" alt-text="Screenshot of the Register sources page to select Amazon RDS (PostgreSQL).":::
+ :::image type="content" source="media/register-scan-amazon-rds/register-amazon-rds.png" alt-text="Screenshot of the Register sources page to select Amazon RDS (PostgreSQL)." lightbox="media/register-scan-amazon-rds/register-amazon-rds.png":::
1. Enter the details for your source:
To configure a Microsoft Purview scan for your RDS database:
- **Name**: Enter a meaningful name for your scan.
- **Database name**: Enter the name of the database you want to scan. You'll need to find the names available from outside Microsoft Purview, and create a separate scan for each database in the registered RDS server.
- - **Credential**: Select the credential you created earlier for the Multi-Cloud Scanning Connectors for Microsoft Purview to access the RDS database.
+ - **Credential**: Select the credential you created earlier for the Multicloud Scanning Connectors for Microsoft Purview to access the RDS database.
1. On the **Select a scan rule set** pane, select the scan rule set you want to use, or create a new one. For more information, see [Create a scan rule set](create-a-scan-rule-set.md).
If an error of `Invalid VPC service name` or `Invalid endpoint service` appears
1. Make sure that your VPC service name is correct. For example:
- :::image type="content" source="media/register-scan-amazon-rds/locate-service-name.png" alt-text="Screenshot of the VPC service name in AWS.":::
+ :::image type="content" source="media/register-scan-amazon-rds/locate-service-name.png" alt-text="Screenshot of the VPC service name in AWS." lightbox="media/register-scan-amazon-rds/locate-service-name.png":::
1. Make sure that the Microsoft ARN is listed in the allowed principals: `arn:aws:iam::181328463391:root`
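   If the Microsoft ARN is missing from the allowed principals, one way to add it is with the AWS CLI, as sketched below. The endpoint service ID is a placeholder; substitute your own:

```bash
# Sketch only: add the Microsoft ARN to the allowed principals of your VPC endpoint service.
# Replace vpce-svc-0123456789abcdef0 with your own endpoint service ID.
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id vpce-svc-0123456789abcdef0 \
  --add-allowed-principals arn:aws:iam::181328463391:root
```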
The following errors may appear in Microsoft Purview:
Learn more about Microsoft Purview Insight reports:

> [!div class="nextstepaction"]
-> [Understand Data Estate Insights in Microsoft Purview](concept-insights.md)
+> [Understand Data Estate Insights in Microsoft Purview](concept-insights.md)
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Person Name machine learning model has been trained using global datasets of nam
## Person's Address

Person's Address classification is used to detect a full address stored in a single column containing the following elements: house number, street name, city, state, country, zip code. The Person's Address classifier uses a machine learning model that is trained on a global address dataset in the English language.
+Currently the address model supports the following formats in the same column:
+
+- number, street, city
+- name, street, pincode or zipcode
+- number, street, area, pincode or zipcode
+- street, city, pincode or zipcode
+- landmark, city
+ ## RegEx Classifications

## ABA routing number
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
The table below lists each product that offers migration guidance and/or informa
| [Azure App Service: App Service Environment](migrate-app-service-environment.md)|
| [Azure Cache for Redis](migrate-cache-redis.md)|
| [Azure Container Instances](migrate-container-instances.md) |
+| [Azure Database for MySQL - Flexible Server](migrate-database-mysql-flex.md) |
| [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md)|
| Azure Storage: [Files Storage](migrate-storage.md)|
| Virtual Machines: [Azure Dedicated Host](migrate-vm.md) |
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Data Factory](../data-factory/concepts-data-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Database for MySQL ΓÇôΓÇ»[Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Database for PostgreSQL ΓÇôΓÇ»[Flexible Server](../postgresql/flexible-server/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Database for MySQL – Flexible Server](../mysql/flexible-server/concepts-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Database for PostgreSQL – Flexible Server](../postgresql/flexible-server/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure DDoS Protection](../ddos-protection/ddos-faq.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Event Grid](../event-grid/overview.md) | ![An icon that signifies this service is zone-redundant](media/icon-zone-redundant.svg) |
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure HDInsight](../hdinsight/hdinsight-use-availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Kubernetes Service (AKS)](../aks/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| Azure Logic Apps | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Logic Apps](../logic-apps/logic-apps-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Monitor](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Monitor: Application Insights](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Monitor: Log Analytics](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Network Watcher](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Network Watcher:ΓÇ»[Traffic Analytics](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| Azure Notification Hubs | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Network Watcher: Traffic Analytics](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Notification Hubs](../notification-hubs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Private Link](../private-link/private-link-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Route Server](../route-server/route-server-faq.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Stream Analytics | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
In this article, we'll take you through the different options for availability z
* South Central US
* Southeast Asia
* Switzerland North
+ * UAE North
* UK South
* West Europe
* West US 2
There are no downtime requirements for any of the migration options.
* Changes can take from 15 to 45 minutes to apply. The API Management gateway can continue to handle API requests during this time.
+* When migrating an API Management deployed in an external or internal virtual network to availability zones, a new public IP address resource must be specified. In an internal VNet, the public IP address is used only for management operations, not for API requests. Learn more about [IP addresses of API Management](../api-management/api-management-howto-ip-addresses.md).
+ * Migrating to availability zones or changing the availability zone configuration will trigger a public [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
* If you've configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.

## Option 1: Migrate existing location of API Management instance, not injected in VNet

Use this option to migrate an existing location of your API Management instance to availability zones when it's not injected (deployed) in a virtual network.
-### How to migrate API Management in a VNet
1. In the Azure portal, navigate to your API Management service.

1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
reliability Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service.md
description: Learn how to migrate Azure App Service to availability zone support
Previously updated : 12/12/2022 Last updated : 12/14/2022
Availability zone support is a property of the App Service plan. The following a
- Japan East
- North Europe
- Qatar Central
+ - South Africa North
- South Central US
- Southeast Asia
- Sweden Central
reliability Migrate Database Mysql Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-database-mysql-flex.md
+
+ Title: Migrate Azure Database for MySQL – Flexible Server to availability zone support
+description: Learn how to migrate your Azure Database for MySQL – Flexible Server to availability zone support.
++++ Last updated : 12/13/2022++++
+
+# Migrate MySQL – Flexible Server to availability zone support
+
+This guide describes how to migrate MySQL – Flexible Server from non-availability zone support to availability zone support.
+
+You can configure Azure Database for MySQL Flexible server to use one of two high availability (HA) architectural models:
+
+- **Same-zone HA architecture (zonal).** This option is preferred for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-zone HA is available in all Azure regions where you can use Azure Database for MySQL - Flexible Server. To learn more about same-zone HA architecture, see [Same-zone HA architecture](../mysql/flexible-server/concepts-high-availability.md#same-zone-ha-architecture).
+
+- **Zone-redundant HA architecture.** This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone-redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and when latency across the availability zone is acceptable. It can be enabled only when the server is created. Zone-redundant HA is available in a [subset of Azure regions](../mysql/flexible-server/overview.md#azure-regions) where the region supports multiple availability zones and [zone-redundant Premium file shares](../storage/common/storage-redundancy.md#zone-redundant-storage) are available. To learn more about zone-redundant HA architecture, see [Zone-redundant HA architecture](../mysql/flexible-server/concepts-high-availability.md#zone-redundant-ha-architecture).
+
+To migrate your existing workload from zonal (same-zone HA) to zone-redundant HA, you'll need to do the following:
+
+1. Deploy and configure a new server that's been configured for zone-redundant HA.
+
+1. Follow the migration guidance in this document to move your resources to your new server.
++
+## Prerequisites
+
+**To migrate to availability-zone support:**
+
+1. You'll need at least one of the following two servers:
+
+ - A source server that's running Azure Database for MySQL Flexible Server in a region that doesn't support availability zones.
+
+ - An Azure Database for MySQL Flexible Server that wasn't enabled for HA at the time of creation.
+
+ >[!IMPORTANT]
+ > If you've originally provisioned your Azure Database for MySQL Flexible server as a non-HA server, you can simply enable it for same-zone HA architecture. However, if you want to enable it for zone-redundant HA architecture, then you'll need to implement one of the available migration options listed in this article.
+
+2. You'll need to create a target server that's running Azure Database for MySQL Flexible Server [in a region that supports availability zones](../mysql/flexible-server/overview.md#azure-regions). For more information on how to create an Azure Database for MySQL flexible server, see [Use the Azure portal to create an Azure Database for MySQL flexible server](../mysql/flexible-server/quickstart-create-server-portal.md). Make sure that the created server is configured for zone redundancy by enabling HA and selecting the *Zone-Redundant* option.
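   As a sketch, the target server can also be provisioned with the Azure CLI. The resource names below are placeholders, and the SKU and region should be adjusted to a region that supports zone-redundant HA:

```bash
# Sketch only: create a target Azure Database for MySQL flexible server with zone-redundant HA.
# Resource group, server name, region, and SKU are placeholders; adjust them for your workload.
az mysql flexible-server create \
  --resource-group myResourceGroup \
  --name my-target-mysql-server \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --high-availability ZoneRedundant \
  --admin-user myadmin \
  --admin-password '<secure-password>'
```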
+
+>[!TIP]
+> If you want the flexibility of being able to move between zonal (same-zone) and zone-redundant HA in the future, you can provision your Azure Database for MySQL Flexible server with zone-redundant HA enabled during server creation. Once the server is provisioned, you can then disable HA.
+++
+## Downtime requirements
+
+Migrations can be categorized as either online or offline:
+
+- **Offline migration**. If your application can afford some downtime, offline migrations are always the preferred choice, as they're simple and easy to execute. With an offline migration, the source server is taken offline, and a dump and restore of the databases is performed on the target server. This option will require the most downtime. The duration of the downtime is determined by the time it takes to perform the restoration on the target server.
+
+- **Online migration**. This option involves minimal downtime and is the best choice if your application can't afford an extended outage. The source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target.
+
+## Migration Option 1: Offline Migration
+
+You can migrate from one Azure Database for MySQL Flexible Server to another by using one of the following tools. Both of these options will require downtime.
+
+1. **Data Migration Service (DMS).** To learn how to migrate from one MySQL Flexible Server to another with DMS, see [Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal](../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md). Although the tutorial outlines steps for migrating from Azure MySQL Single Server to Flexible Server, you can use the same procedure for migrating data from one Azure Database for MySQL Flexible Server that doesn't support availability zones to another that supports availability zones.
+
+2. **Open-source tools**. You can migrate offline with open-source tools, such as **MySQL Workbench**, **mydumper/myloader**, or **mysqldump**, to back up and restore the database. For information on how to use these tools, see [Options for migrating Azure Database for MySQL - Single Server to Flexible Server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/options-for-migrating-azure-database-for-mysql-single-server-to/ba-p/2674062). Although the tutorial outlines steps for migrating from Azure MySQL Single Server to Flexible Server, you can use the same procedure for migrating data from one Azure Database for MySQL Flexible Server that doesn't support availability zones to another that supports availability zones.
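   A minimal sketch of the mysqldump path is shown below. The server names, admin user, and database name are placeholders, and you may need additional options (for example, SSL settings) depending on your server configuration:

```bash
# Sketch only: offline dump-and-restore between two Azure Database for MySQL flexible servers.
# Server FQDNs, user name, and database name are placeholders.
mysqldump -h source-server.mysql.database.azure.com -u myadmin -p \
  --single-transaction --databases mydb > mydb-dump.sql

mysql -h target-server.mysql.database.azure.com -u myadmin -p < mydb-dump.sql
```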
+
+## Migration Option 2: Online Migration
+
+You can migrate from one Azure Database for MySQL Flexible Server to another with minimal downtime to your applications by using one of the following tools:
+
+1. **Data Migration Service (DMS).** To learn how to migrate from one MySQL Flexible Server to another with DMS, see [Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal](../dms/tutorial-mysql-azure-single-to-flex-online-portal.md). Although the tutorial outlines steps for migrating from Azure MySQL Single Server to Flexible Server, you can use the same procedure for migrating data from one Azure Database for MySQL Flexible Server that doesn't support availability zones to another that supports availability zones.
+
+2. **Open-source tools.** You can use a combination of open-source tools such as **mydumper/myloader** together with **Data-in replication**. To learn how to set up Data-in Replication, see [How to configure Azure Database for MySQL Data-in Replication](../mysql/single-server/how-to-data-in-replication.md).
+
+>[!IMPORTANT]
+>Data-in Replication isn't supported for HA-enabled servers. The workaround is to provision the target server with zone-redundant HA first and then disable HA before configuring Data-in Replication. Once the replication completes, enable zone-redundant HA once again on the target server.
++
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Overview of business continuity with Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-business-continuity.md)
+
+> [!div class="nextstepaction"]
+> [High availability concepts in Azure Database for MySQL Flexible Server](../mysql/flexible-server/concepts-high-availability.md)
+
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
+
+ Title: Reliability in Azure Bot Service
+description: Find out about reliability in Azure Bot Service
+++++ Last updated : 01/06/2022 +++
+# What is reliability in Azure Bot Service?
+
+When you create an application (bot) in Azure, you can choose whether or not your bot resource will have global or local data residency. Local data residency ensures that your bot's personal data is preserved, stored, and processed within certain geographic boundaries (like EU boundaries).
+
+>[!IMPORTANT]
+>Availability zone support is not enabled for any standard channels in the regional bot service.
+
+This article describes reliability support in Azure Bot Service, and covers both regional reliability with availability zones and cross-region resiliency with disaster recovery for bots with local data residency. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+For more information on deploying bots with local data residency and regional compliance, see [Regionalization in Azure Bot Service](/azure/bot-service/bot-builder-concept-regionalization?view=azure-bot-service-4.0).
+
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+
+For regional bots, Azure Bot Service supports zone redundancy by default. You don't need to set it up or reconfigure for availability zone support.
+
+### Prerequisites
+
+- Your bot must be regional (not global).
+- Currently, only the "westeurope" region supports availability zones.
+
+### Zone down experience
+
+During a zone-wide outage, you should expect a brief degradation of performance until the service's self-healing rebalances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; the Microsoft-managed self-healing process compensates for the lost zone by using capacity from other zones.
++
+### Cross-region disaster recovery in multi-region geography
+
+Azure Bot Service runs in active-active mode for both global and regional services. When an outage occurs, you don't need to detect errors or manage the service. Azure Bot Service automatically performs auto-failover and auto-recovery in a multi-region geographical architecture. For the EU bot regional service, Azure Bot Service provides two full regions inside Europe with active/active replication to ensure redundancy. For the global bot service, all available regions/geographies serve as the global footprint.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/availability-zones/overview)
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
+ Last updated 11/29/2022
When an entire Azure region or datacenter experiences downtime, your mission-cri
> [Azure Cache for Redis Premium service tiers](../container-instances/availability-zones.md#next-steps)

> [!div class="nextstepaction"]
-> [Reliability in Azure](/azure/reliability/overview.md)
+> [Reliability in Azure](/azure/reliability/overview)
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
description: Find out about reliability in Azure Functions
+ Last updated 10/07/2022
Zone-redundant Premium plans are available in the following regions:
| Americas | Europe | Middle East | Africa | Asia Pacific |
|--|--|--|--|--|
-| Brazil South | France Central | Qatar Central | | Australia East |
+| Brazil South | France Central | Qatar Central | South Africa North | Australia East |
| Canada Central | Germany West Central | | | Central India |
| Central US | North Europe | | | China North 3 |
| East US | Sweden Central | | | East Asia |
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Azure Bot Service
| Cognitive | Cognitive
+### Azure AD External Identities
+
+This section outlines variations and considerations when using Azure AD External Identities B2B collaboration.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Azure AD External Identities | For Azure AD External Identities B2B feature variations in Microsoft Azure for customers in China, see [Azure AD B2B in national clouds](../active-directory/external-identities/b2b-government-national-clouds.md) and [Microsoft cloud settings (Preview)](../active-directory/external-identities/cross-cloud-settings.md). |
### Media
This section outlines variations and considerations when using Media services.
| Product | Unsupported, limited, and/or modified features | Notes |
|--|--|--|
-| Azure Media Services | For Azure Media Services v3 feature variations in Azure in China, see [Azure Media Services v3 clouds and regions availability](/azure/media-services/latest/azure-clouds-regions#china). |
+| Azure Media Services | For Azure Media Services v3 feature variations in Microsoft Azure for customers in China, see [Azure Media Services v3 clouds and regions availability](/azure/media-services/latest/azure-clouds-regions#china). |
+
+### Microsoft Authentication Library (MSAL)
+
+This section outlines variations and considerations when using Microsoft Authentication Library (MSAL) services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Microsoft Authentication Library (MSAL) | For feature variations and limitations, see [National clouds and MSAL](../active-directory/develop/msal-national-cloud.md). |
### Networking
For IP ranges for Azure in China, download [Azure Datacenter IP Ranges in China
| Azure Resource Manager | [https://management.azure.com](https://management.azure.com/) | [https://management.chinacloudapi.cn](https://management.chinacloudapi.cn/) |
| Azure portal | [https://portal.azure.com](https://portal.azure.com/) | [https://portal.azure.cn](https://portal.azure.cn/) |
| SQL Database | \*.database.windows.net | \*.database.chinacloudapi.cn |
-| SQL Azure DB management API | [https://management.database.windows.net](https://management.database.windows.net/) | [https://management.database.chinacloudapi.cn](https://management.database.chinacloudapi.cn/) |
+| SQL Azure DB management API | `https://management.database.windows.net` | `https://management.database.chinacloudapi.cn` |
| Azure Service Bus | \*.servicebus.windows.net | \*.servicebus.chinacloudapi.cn |
| Azure SignalR Service| \*.service.signalr.net | \*.signalr.azure.cn |
| Azure Time Series Insights | \*.timeseries.azure.com \*.insights.timeseries.azure.cn | \*.timeseries.azure.cn \*.insights.timeseries.azure.cn |
| Azure Access Control Service | \*.accesscontrol.windows.net | \*.accesscontrol.chinacloudapi.cn |
| Azure HDInsight | \*.azurehdinsight.net | \*.azurehdinsight.cn |
-| SQL DB import/export service endpoint | |  1. China East [https://sh1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc](https://sh1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc) <br>2. China North [https://bj1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc](https://bj1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc) |
+| SQL DB import/export service endpoint | |  1. China East `https://sh1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc` <br>2. China North `https://bj1prod-dacsvc.chinacloudapp.cn/dacwebservice.svc` |
| MySQL PaaS | | \*.mysqldb.chinacloudapi.cn |
| Azure Service Fabric cluster | \*.cloudapp.azure.com | \*.chinaeast.chinacloudapp.cn |
| Azure Spring Cloud| \*.azuremicroservices.io | \*.microservices.azure.cn |
| Azure Active Directory (Azure AD) | \*.onmicrosoft.com | \*.partner.onmschina.cn |
| Azure AD logon | [https://login.microsoftonline.com](https://login.windows.net/) | [https://login.partner.microsoftonline.cn](https://login.chinacloudapi.cn/) |
| Microsoft Graph | [https://graph.microsoft.com](https://graph.microsoft.com/) | [https://microsoftgraph.chinacloudapi.cn](https://microsoftgraph.chinacloudapi.cn/) |
-| Azure Cognitive Services | <https://api.projectoxford.ai/face/v1.0> | <https://api.cognitive.azure.cn/face/v1.0> |
+| Azure Cognitive Services | `https://api.projectoxford.ai/face/v1.0` | `https://api.cognitive.azure.cn/face/v1.0` |
| Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> |
| Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn |
| Sign in with PowerShell: <br>- Azure classic portal <br>- Azure Resource Manager <br>- Azure AD| - Add-AzureAccount<br>- Connect-AzureRmAccount <br> - Connect-msolservice | - Add-AzureAccount -Environment AzureChinaCloud <br> - Connect-AzureRmAccount -Environment AzureChinaCloud <br>- Connect-msolservice -AzureEnvironment AzureChinaCloud |
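For Azure CLI users, a comparable sign-in sketch targets the China cloud by switching the active cloud before logging in:

```bash
# Sketch only: point the Azure CLI at Microsoft Azure operated by 21Vianet, then sign in.
az cloud set --name AzureChinaCloud
az login
```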
remote-rendering Override Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/override-materials.md
The full JSON schema for materials files is given here. Except for `unlit` and `
"albedoColor": { "$ref": "#/definitions/colorOrAlpha" }, "roughness": { "type": "number" }, "metalness": { "type": "number" },
+ "normalMapScale": { "type": "number" },
"transparent": { "type" : "boolean" }, "alphaClipEnabled": { "type" : "boolean" }, "alphaClipThreshold": { "type": "number" },
remote-rendering Color Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/color-materials.md
# Color materials
-*Color materials* are one of the supported [material types](../../concepts/materials.md) in Azure Remote Rendering. They are used for [meshes](../../concepts/meshes.md) that should not receive any kind of lighting, but rather be full brightness at all times. This might be the case for 'glowing' materials, such as car dashboards, light bulbs, or for data that already incorporates static lighting, such as models obtained through [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry).
+*Color materials* are one of the supported [material types](../../concepts/materials.md) in Azure Remote Rendering. They're used for [meshes](../../concepts/meshes.md) that shouldn't receive any kind of lighting, but rather always appear at full brightness. This might be the case for 'glowing' materials, such as car dashboards, light bulbs, or for data that already incorporates static lighting, such as models obtained through [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry).
Color materials are more efficient to render than [PBR materials](pbr-materials.md) because of their simpler shading model. They also support different transparency modes.
-## Common material properties
+## Color material properties
-These properties are common to all materials:
+The following material properties are exposed in the runtime API, for instance on the [C# ColorMaterial class](/dotnet/api/microsoft.azure.remoterendering.colormaterial) or the [C++ ColorMaterial class](/cpp/api/remote-rendering/colormaterial), respectively.
-* **albedoColor:** This color is multiplied with other colors, such as the *albedoMap* or *:::no-loc text="vertex"::: colors*. If *transparency* is enabled on a material, the alpha channel is used to adjust the opacity, with `1` meaning fully opaque and `0` meaning fully transparent. Default is white.
+* `ColorFlags`: Miscellaneous feature flags can be combined in this bit mask to enable the following features:
+ * `UseVertexColor`: If the mesh contains :::no-loc text="vertex"::: colors and this option is enabled, the mesh's :::no-loc text="vertex"::: color is multiplied into the `AlbedoColor` and `AlbedoMap`. By default `UseVertexColor` is disabled.
+ * `DoubleSided`: If double-sidedness is set to true, triangles with this material are rendered even if the camera is looking at their back faces. By default this option is disabled. See also [:::no-loc text="Single-sided"::: rendering](single-sided-rendering.md).
+ * `AlphaClipped`: Enables hard cut-outs on a per-pixel basis, based on the alpha value being below the value of `AlphaClipThreshold` (see below). This works for opaque materials as well.
+ * `TransparencyWritesDepth`: If the `TransparencyWritesDepth` flag is set on the material and the material is transparent, objects using this material will also contribute to the final depth buffer. See the color material property `ColorTransparencyMode` later in this list. Enabling this feature is recommended if your use case needs a more plausible [late stage reprojection](late-stage-reprojection.md) of fully transparent scenes. For mixed opaque/transparent scenes, this setting may introduce implausible reprojection behavior or reprojection artifacts. For this reason, the default and recommended setting for the general use case is to disable this flag. The written depth values are taken from the per-pixel depth layer of the object that is closest to the camera.
+ * `FresnelEffect`: This material flag enables the additive [fresnel effect](../../overview/features/fresnel-effect.md) on the respective material. The appearance of the effect is governed by the other fresnel parameters `FresnelEffectColor` and `FresnelEffectExponent` explained below.
+* `AlbedoColor`: This color is multiplied with other colors, such as the `AlbedoMap` or *:::no-loc text="vertex"::: colors*. If *transparency* is enabled on a material, the alpha channel is used to adjust the opacity, with `1` meaning fully opaque and `0` meaning fully transparent. The default albedo color is opaque white.
> [!NOTE] > Since color materials don't reflect the environment, a fully transparent color material becomes invisible. This is different for [PBR materials](pbr-materials.md).
-* **albedoMap:** A [2D texture](../../concepts/textures.md) for per-pixel albedo values.
-
-* **alphaClipEnabled** and **alphaClipThreshold:** If *alphaClipEnabled* is true, all pixels where the albedo alpha value is lower than *alphaClipThreshold* won't be drawn. Alpha clipping can be used even without enabling transparency and is much faster to render. Alpha clipped materials are still slower to render than fully opaque materials, though. By default alpha clipping is disabled.
-
-* **textureCoordinateScale** and **textureCoordinateOffset:** The scale is multiplied into the UV texture coordinates, the offset is added to it. Can be used to stretch and shift the textures. The default scale is (1, 1) and offset is (0, 0).
+* `AlbedoMap`: A [2D texture](../../concepts/textures.md) for per-pixel albedo values.
-* **useVertexColor:** If the mesh contains :::no-loc text="vertex"::: colors and this option is enabled, the meshes' :::no-loc text="vertex"::: color is multiplied into the *albedoColor* and *albedoMap*. By default *useVertexColor* is disabled.
+* `AlphaClipThreshold`: If the `AlphaClipped` flag is set on the `ColorFlags` property, all pixels where the albedo alpha value is lower than the value of `AlphaClipThreshold` won't be drawn. Alpha clipping can be used even without enabling transparency and is much faster to render. Alpha clipped materials are still slower to render than fully opaque materials, though. By default alpha clipping is disabled.
-* **isDoubleSided:** If double-sidedness is set to true, triangles with this material are rendered even if the camera is looking at their back faces. By default this option is disabled. See also [:::no-loc text="Single-sided"::: rendering](single-sided-rendering.md).
+* `TexCoordScale` and `TexCoordOffset`: The scale is multiplied into the UV texture coordinates, the offset is added to it. Can be used to stretch and shift the textures. The default scale is (1, 1) and offset is (0, 0).
-* **TransparencyWritesDepth:** If the TransparencyWritesDepth flag is set on the material and the material is transparent, objects using this material will also contribute to the final depth buffer. See the color material property *transparencyMode* in the next section. Enabling this feature is recommended if your use case needs a more plausible [late stage reprojection](late-stage-reprojection.md) of fully transparent scenes. For mixed opaque/transparent scenes, this setting may introduce implausible reprojection behavior or reprojection artifacts. For this reason, the default and recommended setting for the general use case is to disable this flag. The written depth values are taken from the per-pixel depth layer of the object that is closest to the camera.
+* `FresnelEffectColor`: The fresnel color used for this material. Only important when the fresnel effect flag has been set on this material (see above). This property controls the base color of the fresnel shine (see [fresnel effect](../../overview/features/fresnel-effect.md) for a full explanation). Currently only the RGB-channel values are important and the alpha value will be ignored.
-* **FresnelEffect:** This material flag enables the additive [fresnel effect](../../overview/features/fresnel-effect.md) on the respective material. The appearance of the effect is governed by the other fresnel parameters explained in the following.
+* `FresnelEffectExponent`: The fresnel exponent used for this material. Only important when the fresnel effect flag has been set on this material (see above). This property controls the spread of the fresnel shine. The minimum value 0.01 causes a spread across the whole object. The maximum value 10.0 constricts the shine to only the most grazing edges visible.
-* **FresnelEffectColor:** The fresnel color used for this material. Only important when the fresnel effect bit has been set on this material (see above). This property controls the base color of the fresnel shine (see [fresnel effect](../../overview/features/fresnel-effect.md) for a full explanation). Currently only the rgb-channel values are important and the alpha value will be ignored.
-
-* **FresnelEffectExponent:** The fresnel exponent used for this material. Only important when the fresnel effect bit has been set on this material (see above). This property controls the spread of the fresnel shine. The minimum value 0.01 causes a spread across the whole object. The maximum value 10.0 constricts the shine to only the most gracing edges visible.
-
-## Color material properties
+* `VertexMix`: This value between `0` and `1` specifies how strongly the :::no-loc text="vertex"::: color in a [mesh](../../concepts/meshes.md) contributes to the final color. At the default value of 1, the :::no-loc text="vertex"::: color is multiplied into the albedo color fully. With a value of 0, the :::no-loc text="vertex"::: colors are ignored entirely.
-The following properties are specific to color materials:
+* `ColorTransparencyMode`: Contrary to [PBR materials](pbr-materials.md), color materials distinguish between different transparency modes:
-* **vertexMix:** This value between `0` and `1` specifies how strongly the :::no-loc text="vertex"::: color in a [mesh](../../concepts/meshes.md) contributes to the final color. At the default value of 1, the :::no-loc text="vertex"::: color is multiplied into the albedo color fully. With a value of 0, the :::no-loc text="vertex"::: colors are ignored entirely.
+ * `Opaque`: The default mode disables transparency. Alpha clipping is still possible, though, and should be preferred, if sufficient.
+ * `AlphaBlended`: This mode is similar to the transparency mode for PBR materials. It should be used for see-through materials like glass.
+ * `Additive`: This mode is the simplest and most efficient transparency mode. The contribution of the material is added to the rendered image. This mode can be used to simulate glowing (but still transparent) objects, such as markers used for highlighting important objects.
-* **transparencyMode:** Contrary to [PBR materials](pbr-materials.md), color materials distinguish between different transparency modes:
+## Color material overrides during conversion
- 1. **Opaque:** The default mode disables transparency. Alpha clipping is still possible, though, and should be preferred, if sufficient.
-
- 1. **AlphaBlended:** This mode is similar to the transparency mode for PBR materials. It should be used for see-through materials like glass.
+A subset of color material properties can be overridden during model conversion through the [material override file](../../how-tos/conversion/override-materials.md).
+The following table shows the mapping between runtime properties documented above and the corresponding property name in the override file:
- 1. **Additive:** This mode is the simplest and most efficient transparency mode. The contribution of the material is added to the rendered image. This mode can be used to simulate glowing (but still transparent) objects, such as markers used for highlighting important objects.
+| Material property name | Property name in override file|
+|:-|:-|
+| `ColorFlags.AlphaClipped` | `alphaClipEnabled` |
+| `ColorFlags.UseVertexColor` | `useVertexColor` |
+| `ColorFlags.DoubleSided` | `isDoubleSided` |
+| `ColorFlags.TransparencyWritesDepth` | `transparencyWritesDepth` |
+| `AlbedoColor` | `albedoColor` |
+| `TexCoordScale` | `textureCoordinateScale` |
+| `TexCoordOffset` | `textureCoordinateOffset` |
+| `ColorTransparencyMode` | `transparent` |
+| `AlphaClipThreshold` | `alphaClipThreshold` |
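
To make the mapping concrete, here's a minimal, hypothetical sketch of a single override entry that uses only the override-file property names from the table above. The material name and values are placeholders, the color is written in the `r`/`g`/`b`/`a` object form of the schema's `colorOrAlpha` definition, and the exact surrounding file structure (including the `unlit` flag used to mark a material as a color material) is defined in the [material override file](../../how-tos/conversion/override-materials.md) documentation:

```json
{
    "name": "Glow_Dashboard",
    "unlit": true,
    "albedoColor": { "r": 0.0, "g": 0.8, "b": 0.3, "a": 1.0 },
    "isDoubleSided": true,
    "useVertexColor": false,
    "alphaClipEnabled": true,
    "alphaClipThreshold": 0.5
}
```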
## API documentation
The following properties are specific to color materials:
* [PBR materials](pbr-materials.md) * [Textures](../../concepts/textures.md)
-* [Meshes](../../concepts/meshes.md)
+* [Meshes](../../concepts/meshes.md)
+* [Material override files](../../how-tos/conversion/override-materials.md).
remote-rendering Pbr Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/pbr-materials.md
PBR stands for **P**hysically **B**ased **R**endering and means that the materia
![Helmet glTF sample model rendered by ARR](media/helmet.png)
-PBR materials aren't a universal solution, though. There are materials that reflect color differently depending on the viewing angle. For example, some fabrics or car paints. These kinds of materials aren't handled by the standard PBR model, and are currently not supported by Azure Remote Rendering. This limitation includes PBR extensions, such as *Thin-Film* (multi-layered surfaces) and *Clear-Coat* (for car paints).
-
-## Common material properties
-
-These properties are common to all materials:
+The core idea of physically based rendering is to use *BaseColor*, *Metalness*, and *Roughness* properties to emulate a wide range of real-world materials. A detailed description of PBR is beyond the scope of this article. For more information about PBR, see [other sources](http://www.pbr-book.org).
-* **albedoColor:** This color is multiplied with other colors, such as the *albedoMap* or *:::no-loc text="vertex "::: colors*. If *transparency* is enabled on a material, the alpha channel is used to adjust the opacity, with `1` meaning fully opaque and `0` meaning fully transparent. Default is white.
-
- > [!NOTE]
- > When a PBR material is fully transparent, like a perfectly clean piece of glass, it still reflects the environment. Bright spots like the sun are still visible in the reflection. This is different for [color materials](color-materials.md).
+PBR materials aren't a universal solution, though. There are materials that reflect color differently depending on the viewing angle. For example, some fabrics or car paints. These kinds of materials aren't handled by the standard PBR model, and are currently not supported by Azure Remote Rendering. This limitation includes PBR extensions, such as *Thin-Film* (multi-layered surfaces) and *Clear-Coat* (for car paints).
-* **albedoMap:** A [2D texture](../../concepts/textures.md) for per-pixel albedo values.
+## PBR material properties
-* **alphaClipEnabled** and **alphaClipThreshold:** If *alphaClipEnabled* is true, all pixels where the albedo alpha value is lower than *alphaClipThreshold* won't be drawn. Alpha clipping can be used even without enabling transparency and is much faster to render. Alpha clipped materials are still slower to render than fully opaque materials, though. By default alpha clipping is disabled.
+The following material properties are exposed in the runtime API, for instance on the [C# PbrMaterial class](/dotnet/api/microsoft.azure.remoterendering.pbrmaterial) or the [C++ PbrMaterial class](/cpp/api/remote-rendering/pbrmaterial), respectively.
-* **textureCoordinateScale** and **textureCoordinateOffset:** The scale is multiplied into the UV texture coordinates, the offset is added to it. Can be used to stretch and shift the textures. The default scale is (1, 1) and offset is (0, 0).
+* `PbrFlags`: Misc feature flags can be combined in this bit mask to enable the following features:
+ * `TransparentMaterial`: For PBR materials, there's only one transparency setting: it's enabled or not. The opacity is defined by the albedo color's alpha channel. When enabled, a more complex rendering method is invoked to draw semi-transparent surfaces. Azure Remote Rendering implements true [order independent transparency](https://en.wikipedia.org/wiki/Order-independent_transparency) (OIT).
+ Transparent geometry is expensive to render. If you only need holes in a surface, for example for the leaves of a tree, it's better to use alpha clipping instead.
-* **useVertexColor:** If the mesh contains :::no-loc text="vertex"::: colors and this option is enabled, the meshes' :::no-loc text="vertex"::: color is multiplied into the *albedoColor* and *albedoMap*. By default *useVertexColor* is disabled.
+ ![Spheres rendered with zero to full transparency](./media/transparency.png)
+ Notice in the image above how the right-most sphere is fully transparent, but the reflection is still visible.
-* **isDoubleSided:** If double-sidedness is set to true, triangles with this material are rendered even if the camera is looking at their back faces. For PBR materials lighting is also computed properly for back faces. By default this option is disabled. See also [:::no-loc text="Single-sided"::: rendering](single-sided-rendering.md).
+ > [!IMPORTANT]
+ > If any material is supposed to be switched from opaque to transparent at runtime, the renderer must use the *TileBasedComposition* [rendering mode](../../concepts/rendering-modes.md). This limitation does not apply to materials that are converted as transparent materials to begin with.
+
+ * `UseVertexColor`: If the mesh contains :::no-loc text="vertex"::: colors and this option is enabled, the meshes' :::no-loc text="vertex"::: color is multiplied into the `AlbedoColor` and `AlbedoMap`. By default `UseVertexColor` is disabled.
+ * `DoubleSided`: If double-sidedness is set to true, triangles with this material are rendered even if the camera is looking at their back faces. For PBR materials lighting is also computed properly for back faces. By default this option is disabled. See also [:::no-loc text="Single-sided"::: rendering](single-sided-rendering.md).
+ * `SpecularHighlights`: Enables specular highlights for this material. By default, the `SpecularHighlights` flag is enabled.
+ * `AlphaClipped`: Enables hard cut-outs on a per-pixel basis, based on the alpha value being below the value of `AlphaClipThreshold` (see below). This works for opaque materials as well.
+ * `FresnelEffect`: This material flag enables the additive [fresnel effect](../../overview/features/fresnel-effect.md) on the respective material. The appearance of the effect is governed by the other fresnel parameters `FresnelEffectColor` and `FresnelEffectExponent` explained below.
+ * `TransparencyWritesDepth`: If the `TransparencyWritesDepth` flag is set on the material and the material is transparent, objects using this material will also contribute to the final depth buffer. See the `TransparentMaterial` flag above. Enabling this feature is recommended if your use case needs a more plausible [late stage reprojection](late-stage-reprojection.md) of fully transparent scenes. For mixed opaque/transparent scenes, this setting may introduce implausible reprojection behavior or reprojection artifacts. For this reason, the default and recommended setting for the general use case is to disable this flag. The written depth values are taken from the per-pixel depth layer of the object that is closest to the camera.
-* **TransparencyWritesDepth:** If the TransparencyWritesDepth flag is set on the material and the material is transparent, objects using this material will also contribute to the final depth buffer. See the PBR material flag *transparent* in the next section. Enabling this feature is recommended if your use case needs a more plausible [late stage reprojection](late-stage-reprojection.md) of fully transparent scenes. For mixed opaque/transparent scenes, this setting may introduce implausible reprojection behavior or reprojection artifacts. For this reason, the default and recommended setting for the general use case is to disable this flag. The written depth values are taken from the per-pixel depth layer of the object that is closest to the camera.
+* `AlbedoColor`: This color is multiplied with other colors, such as the `AlbedoMap` or *:::no-loc text="vertex"::: colors*. If *transparency* is enabled on a material, the alpha channel is used to adjust the opacity, with `1` meaning fully opaque and `0` meaning fully transparent. The default albedo color is opaque white.
-* **FresnelEffect:** This material flag enables the additive [fresnel effect](../../overview/features/fresnel-effect.md) on the respective material. The appearance of the effect is governed by the other fresnel parameters explained in the following.
+ > [!NOTE]
+ > When a PBR material is fully transparent, like a perfectly clean glass surface, it still reflects the environment. Bright spots like the sun are still visible in the reflection. This is different for [color materials](color-materials.md).
-* **FresnelEffectColor:** The fresnel color used for this material. Only important when the fresnel effect bit has been set on this material (see above). This property controls the base color of the fresnel shine (see [fresnel effect](../../overview/features/fresnel-effect.md) for a full explanation). Currently only the rgb-channel values are important and the alpha value will be ignored.
+* `AlbedoMap`: A [2D texture](../../concepts/textures.md) for per-pixel albedo values.
-* **FresnelEffectExponent:** The fresnel exponent used for this material. Only important when the fresnel effect bit has been set on this material (see above). This property controls the spread of the fresnel shine. The minimum value 0.01 causes a spread across the whole object. The maximum value 10.0 constricts the shine to only the most gracing edges visible.
+* `AlphaClipThreshold`: If the `AlphaClipped` flag is set on the `PbrFlags` property, all pixels where the albedo alpha value is lower than `AlphaClipThreshold` won't be drawn. Alpha clipping can be used even without enabling transparency and is much faster to render. Alpha clipped materials are still slower to render than fully opaque materials, though. By default alpha clipping is disabled.
-## PBR material properties
+* `TexCoordScale` and `TexCoordOffset`: The scale is multiplied into the UV texture coordinates, the offset is added to it. Can be used to stretch and shift the textures. The default scale is (1, 1) and offset is (0, 0).
-The core idea of physically based rendering is to use *BaseColor*, *Metalness*, and *Roughness* properties to emulate a wide range of real-world materials. A detailed description of PBR is beyond the scope of this article. For more information about PBR, see [other sources](http://www.pbr-book.org). The following properties are specific to PBR materials:
+* `FresnelEffectColor`: The fresnel color used for this material. Only important when the fresnel effect flag has been set on this material (see above). This property controls the base color of the fresnel shine (see [fresnel effect](../../overview/features/fresnel-effect.md) for a full explanation). Currently only the RGB-channel values are important and the alpha value will be ignored.
-* **baseColor:** In PBR materials, the *albedo color* is referred to as the *base color*. In Azure Remote Rendering the *albedo color* property is already present through the common material properties, so there's no additional base color property.
+* `FresnelEffectExponent`: The fresnel exponent used for this material. Only important when the fresnel effect flag has been set on this material (see above). This property controls the spread of the fresnel shine. The minimum value 0.01 causes a spread across the whole object. The maximum value 10.0 constricts the shine to only the most grazing edges visible.
-* **roughness** and **roughnessMap:** Roughness defines how rough or smooth the surface is. Rough surfaces scatter the light in more directions than smooth surfaces, which make reflections blurry rather than sharp. The value range is from `0.0` to `1.0`. When `roughness` equals `0.0`, reflections will be sharp. When `roughness` equals `0.5`, reflections will become blurry.
+* `PbrVertexAlphaMode`: Determines how the alpha channel of vertex colors is used. The following modes are provided:
+ * `Occlusion`: The alpha value represents an ambient occlusion value and therefore only affects the indirect lighting from the sky box.
+ * `LightMask`: The alpha value serves as a scale factor for the overall amount of lighting applied, meaning the alpha can be used to darken areas. This affects both indirect and direct lighting.
+ * `Opacity`: The alpha represents how opaque (1.0) or transparent (0.0) the material is.
- If both a roughness value and a roughness map are supplied, the final value will be the product of the two.
+* `NormalMap`: To simulate fine-grained detail, a [normal map](https://en.wikipedia.org/wiki/Normal_mapping) can be provided.
+* `NormalMapScale`: A scalar value that scales the normal map strength. A value of 1.0 takes the normal map's normal as-is, while a value of 0 makes the surface appear flat. Values larger than 1.0 exaggerate the normal map perturbation.
-* **metalness** and **metalnessMap:** In physics, this property corresponds to whether a surface is conductive or dielectric. Conductive materials have different reflective properties, and they tend to be reflective with no albedo color. In PBR materials, this property affects how much a surface reflects the surrounding environment. Values range from `0.0` to `1.0`. When metalness is `0.0`, the albedo color is fully visible and the material looks like plastic or ceramics. When metalness is `0.5`, it looks like painted metal. When metalness is `1.0`, the surface almost completely loses its albedo color and only reflects the surroundings. For instance, if `metalness` is `1.0` and `roughness` is `0.0` then a surface looks like real-world mirror.
+* `Roughness` and `RoughnessMap`: Roughness defines how rough or smooth the surface is. Rough surfaces scatter the light in more directions than smooth surfaces, which makes reflections blurry rather than sharp. The value range is from `0.0` to `1.0`. When `Roughness` equals `0.0`, reflections will be sharp. When `Roughness` equals `0.5`, reflections will become blurry. If both a roughness value and a roughness map are supplied, the final value will be the product of the two.
- If both a metalness value and a metalness map are supplied, the final value will be the product of the two.
+* `Metalness` and `MetalnessMap`: In physics, this property corresponds to whether a surface is conductive or dielectric. Conductive materials have different reflective properties, and they tend to be reflective with no albedo color. In PBR materials, this property affects how much a surface reflects the surrounding environment. Values range from `0.0` to `1.0`. When metalness is `0.0`, the albedo color is fully visible, and the material looks like plastic or ceramics. When metalness is `0.5`, it looks like painted metal. When metalness is `1.0`, the surface almost completely loses its albedo color, and only reflects the surroundings. For instance, if `metalness` is `1.0` and `roughness` is `0.0`, then the surface looks like a real-world mirror. If both a metalness value and a metalness map are supplied, the final value will be the product of the two.
![Spheres rendered with different metalness and roughness values](./media/metalness-roughness.png) In the picture above, the sphere in the bottom-right corner looks like a real metal material, and the bottom-left looks like ceramic or plastic. The albedo color also changes according to the physical properties. With increasing roughness, the material loses reflection sharpness.
-* **normalMap:** To simulate fine grained detail, a [normal map](https://en.wikipedia.org/wiki/Normal_mapping) can be provided.
-
-* **occlusionMap** and **aoScale:** [Ambient occlusion](https://en.wikipedia.org/wiki/Ambient_occlusion) makes objects with crevices look more realistic by adding shadows to occluded areas. Occlusion value range from `0.0` to `1.0`, where `0.0` means darkness (occluded) and `1.0` means no occlusions. If a 2D texture is provided as an occlusion map, the effect is enabled and *aoScale* acts as a multiplier.
+* `AOMap` and `AOScale`: [Ambient occlusion](https://en.wikipedia.org/wiki/Ambient_occlusion) makes objects with crevices look more realistic by adding shadows to occluded areas. Occlusion values range from `0.0` to `1.0`, where `0.0` means darkness (occluded) and `1.0` means no occlusion. If a 2D texture is provided as an occlusion map, the effect is enabled and `AOScale` acts as a multiplier.
![An object rendered with and without ambient occlusion](./media/boom-box-ao2.gif)
-* **transparent:** For PBR materials, there's only one transparency setting: it's enabled or not. The opacity is defined by the albedo color's alpha channel. When enabled, a more complex rendering pipeline is invoked to draw semi-transparent surfaces. Azure Remote Rendering implements true [order independent transparency](https://en.wikipedia.org/wiki/Order-independent_transparency) (OIT).
-
- Transparent geometry is expensive to render. If you only need holes in a surface, for example for the leaves of a tree, it's better to use alpha clipping instead.
-
- ![Spheres rendered with zero to full transparency](./media/transparency.png)
- Notice in the image above, how the right-most sphere is fully transparent, but the reflection is still visible.
-
- > [!IMPORTANT]
- > If any material is supposed to be switched from opaque to transparent at runtime, the renderer must use the *TileBasedComposition* [rendering mode](../../concepts/rendering-modes.md). This limitation does not apply to materials that are converted as transparent materials to begin with.
+## PBR material overrides during conversion
+
+A subset of PBR material properties can be overridden during model conversion through the [material override file](../../how-tos/conversion/override-materials.md).
+The following table shows the mapping between runtime properties documented above and the corresponding property name in the override file:
+
+| Material property name | Property name in override file|
+|:-|:-|
+| `PbrFlags.TransparentMaterial` | `transparent` |
+| `PbrFlags.AlphaClipped` | `alphaClipEnabled` |
+| `PbrFlags.UseVertexColor` | `useVertexColor` |
+| `PbrFlags.DoubleSided` | `isDoubleSided` |
+| `PbrFlags.TransparencyWritesDepth` | `transparencyWritesDepth` |
+| `AlbedoColor` | `albedoColor` |
+| `TexCoordScale` | `textureCoordinateScale` |
+| `TexCoordOffset` | `textureCoordinateOffset` |
+| `NormalMapScale` | `normalMapScale` |
+| `Metalness` | `metalness` |
+| `Roughness` | `roughness` |
+| `AlphaClipThreshold` | `alphaClipThreshold` |
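
As a similar hypothetical sketch for a PBR material, a single override entry could combine several of the mapped properties from the table above. The material name and values are placeholders, and the surrounding file structure is described in the [material override file](../../how-tos/conversion/override-materials.md) documentation:

```json
{
    "name": "Painted_Metal_Panel",
    "albedoColor": { "r": 0.85, "g": 0.1, "b": 0.1, "a": 1.0 },
    "metalness": 0.5,
    "roughness": 0.4,
    "normalMapScale": 1.5,
    "isDoubleSided": false,
    "transparent": false,
    "alphaClipEnabled": true,
    "alphaClipThreshold": 0.5
}
```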
## Technical details
Azure Remote Rendering uses the Cook-Torrance micro-facet BRDF with GGX NDF, Sch
* [Color materials](color-materials.md) * [Textures](../../concepts/textures.md)
-* [Meshes](../../concepts/meshes.md)
+* [Meshes](../../concepts/meshes.md)
+* [Material override files](../../how-tos/conversion/override-materials.md).
remote-rendering Material Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/reference/material-mapping.md
The following table shows the mapping:
| occlusionFactor | occlusion | | occlusionTexture | occlusionMap | | normalTexture | normalMap |
+| normalTextureInfo.scale | normalMapScale |
| alphaCutoff | alphaClipThreshold | | alphaMode.OPAQUE | alphaClipEnabled = false, isTransparent = false | | alphaMode.MASK | alphaClipEnabled = true, isTransparent = false |
Additionally to the base feature set, Azure Remote Rendering supports the follow
## FBX
-The FBX format is closed-source and FBX materials are not compatible with PBR materials in general. FBX uses a complex description of surfaces with many unique parameters and properties and **not all of them are used by the Azure Remote Rendering pipeline**.
+The FBX format is closed-source and FBX materials aren't compatible with PBR materials in general. FBX uses a complex description of surfaces with many unique parameters and properties and **not all of them are used by the Azure Remote Rendering pipeline**.
> [!IMPORTANT] > The Azure Remote Rendering model conversion pipeline only supports **FBX 2011 and higher**. The FBX format defines a conservative approach for materials; there are only two types in the official FBX specification:
-* *Lambert* - Not commonly used for quite some time already, but it is still supported by converting to Phong at conversion time.
+* *Lambert* - Rarely used anymore, but it's still supported by converting it to Phong at conversion time.
* *Phong* - Almost all materials and most content tools use this type.
-The Phong model is more accurate and it is used as the *only* model for FBX materials. Below it will be referred as the *FBX Material*.
+The Phong model is more accurate, and it's used as the *only* model for FBX materials. Below, it's referred to as the *FBX Material*.
> Maya uses two custom extensions for FBX by defining custom properties for PBR and Stingray types of a material. These details are not included in the FBX specification, so they aren't currently supported by Azure Remote Rendering.
FBX Materials use the Diffuse-Specular-SpecularLevel concept, so to convert from
> All colors and textures in FBX are in sRGB space (also known as Gamma space) but Azure Remote Rendering works with linear space during visualization and at the end of the frame converts everything back to sRGB space. The Azure Remote Rendering asset pipeline converts everything to linear space to send it as prepared data to the renderer.
-This table shows how textures are mapped from FBX Materials to Azure Remote Rendering materials. Some of them are not directly used but in combination with other textures participating in the formulas (for instance the diffuse texture):
+This table shows how textures are mapped from FBX Materials to Azure Remote Rendering materials. Some of them aren't used directly, but only in combination with other textures that participate in the formulas (for instance, the diffuse texture):
| FBX | Azure Remote Rendering | |:--|:-|
Roughness = sqrt(2 / (ShininessExponent * SpecularIntensity + 2))
`Metalness` is calculated from `Diffuse` and `Specular` using this [formula from the glTF specification](https://github.com/bghgary/glTF/blob/gh-pages/convert-between-workflows-bjs/js/babylon.pbrUtilities.js).
-The idea here is that we solve the equation: Ax<sup>2</sup> + Bx + C = 0.
+The idea here is that we solve the equation Ax<sup>2</sup> + Bx + C = 0.
Basically, dielectric surfaces reflect around 4% of light in a specular way, and the rest is diffuse. Metallic surfaces reflect no light in a diffuse way, but all in a specular way.
-This formula has a few drawbacks, because there is no way to distinguish between glossy plastic and glossy metallic surfaces. We assume most of the time the surface has metallic properties, and so glossy plastic/rubber surfaces may not look as expected.
+This formula has a few drawbacks, because there's no way to distinguish between glossy plastic and glossy metallic surfaces. We assume most of the time the surface has metallic properties, and so glossy plastic/rubber surfaces may not look as expected.
```cpp dielectricSpecularReflectance = 0.04
Metalness = clamp(value, 0.0, 1.0);
`Albedo` is computed from `Diffuse`, `Specular`, and `Metalness`. As described in the Metalness section, dielectric surfaces reflect around 4% of light.
-The idea here is to linearly interpolate between `Dielectric` and `Metal` colors using `Metalness` value as a factor. If metalness is `0.0`, then depending on specular it will be either a dark color (if specular is high) or diffuse will not change (if no specular is present). If metalness is a large value, then the diffuse color will disappear in favor of specular color.
+The idea here is to linearly interpolate between `Dielectric` and `Metal` colors using the `Metalness` value as a factor. If metalness is `0.0`, the result is either a dark color (if specular is high) or an unchanged diffuse color (if no specular is present). If metalness is a large value, the diffuse color will disappear in favor of the specular color.
```cpp dielectricSpecularReflectance = 0.04
albedoRawColor = lerpColors(dielectricColor, metalColor, metalness * metalness)
AlbedoRGB = clamp(albedoRawColor, 0.0, 1.0); ```
-`AlbedoRGB` has been computed by the formula above, but the alpha channel requires more computations. The FBX format is vague about transparency and has many ways to define it. Different content tools use different methods. The idea here is to unify them into one formula. It makes some assets incorrectly shown as transparent, though, if they are not created in a common way.
+`AlbedoRGB` has been computed by the formula above, but the alpha channel requires more computations. The FBX format is vague about transparency and has many ways to define it. Different content tools use different methods. The idea here is to unify them into one formula. It makes some assets incorrectly rendered as transparent, though, if they aren't created in a common way.
This is computed from `TransparentColor`, `TransparencyFactor`, `Opacity`: if `Opacity` is defined, then use it directly: `AlbedoAlpha` = `Opacity` else
-if `TransparencyColor` is defined, then `AlbedoAlpha` = 1.0 - ((`TransparentColor`.Red + `TransparentColor`.Green + `TransparentColor`.Blue) / 3.0) else
+if `TransparencyColor` is defined, then `AlbedoAlpha` = 1.0 - ((`TransparentColor.Red` + `TransparentColor.Green` + `TransparentColor.Blue`) / 3.0) else
if `TransparencyFactor` is defined, then `AlbedoAlpha` = 1.0 - `TransparencyFactor` The final `Albedo` color has four channels, combining the `AlbedoRGB` with the `AlbedoAlpha`.
To summarize here, `Albedo` will be very close to the original `Diffuse`, if `Sp
### Known issues
-* The current formula does not work well for simple colored geometry. If `Specular` is bright enough, then all geometries become reflective metallic surfaces without any color. The workaround here is to lower `Specular` to 30% from the original or to use the conversion setting [fbxAssumeMetallic](../how-tos/conversion/configure-model-conversion.md#converting-from-older-fbx-formats-with-a-phong-material-model).
-* PBR materials were recently added to `Maya` and `3DS Max` content creation tools. They use custom user-defined black-box properties to pass it to FBX. Azure Remote Rendering does not read those properties because they are not documented and the format is closed-source.
+* The current formula doesn't work well for simple colored geometry. If `Specular` is bright enough, then all geometries become reflective metallic surfaces without any color. The workaround in this case is to lower `Specular` to 30% from the original or to use the conversion setting [fbxAssumeMetallic](../how-tos/conversion/configure-model-conversion.md#converting-from-older-fbx-formats-with-a-phong-material-model).
+* PBR materials were recently added to `Maya` and `3DS Max` content creation tools. They use custom user-defined black-box properties to pass it to FBX. Azure Remote Rendering doesn't read those properties because they aren't documented and the format is closed-source.
## Next steps
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
Title: About the move process in Azure Resource Mover description: Learn about the process for moving resources across regions with Azure Resource Mover-+ Last updated 02/01/2021-+ #Customer intent: As an Azure admin, I want to understand how Azure Resource Mover works.
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
Last updated 12/23/2022-+ # Common questions
resource-mover Manage Resources Created Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/manage-resources-created-move-process.md
Title: Manage resources that are created during the VM move process in Azure Resource Mover description: Learn how to manage resources that are created during the VM move process in Azure Resource Mover -+ Last updated 09/10/2020-+ # Manage resources created for the VM move
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/modify-target-settings.md
Title: Modify destination settings when moving Azure VMs between regions with Azure Resource Mover description: Learn how to modify destination settings when moving Azure VMs between regions with Azure Resource Mover. -+ Last updated 02/08/2021-+ #Customer intent: As an Azure admin, I want to modify destination settings when moving resources to another region.
resource-mover Move Region Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md
Title: Move Azure VMs to availability zones in another region with Azure Resource Mover description: Learn how to move Azure VMs to availability zones with Azure Resource Mover. -+ Last updated 09/10/2020-+ #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-within-resource-group.md
Title: Move resources to another region with Azure Resource Mover description: Learn how to move resources within a resource group to another region with Azure Resource Mover. -+ Last updated 09/08/2020-+ #Customer intent: As an Azure admin, I want to move Azure resources to a different Azure region. # Move resources across regions (from resource group)
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
Last updated 12/23/2022-+ #Customer intent: As an Azure admin, I need a simple way to move Azure resources, and want to understand how Azure Resource Mover can help me do that.
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/remove-move-resources.md
Title: Remove resources from a move collection in Azure Resource Mover description: Learn how to remove resources from a move collection in Azure Resource Mover. -+ Last updated 02/22/2020-+ #Customer intent: As an Azure admin, I want remove resources I've added to a move collection.
resource-mover Select Move Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/select-move-tool.md
Last updated 12/23/2022-+ #Customer intent: As an Azure admin, I need to compare tools for moving resources in Azure.
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Title: Support matrix for moving Azure VMs to another region with Azure Resource Mover description: Review support for moving Azure VMs between regions with Azure Resource Mover. -+ Last updated 02/08/2021-+
resource-mover Support Matrix Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-sql.md
Title: Support for moving Azure SQL resources between regions with Azure Resource Mover. description: Review support for moving Azure SQL resources between regions with Azure Resource Mover.-+ Last updated 09/07/2020-+ # Support for moving Azure SQL resources between Azure regions
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
Last updated 12/21/2022-+ #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
Title: Move resources across regions using PowerShell in Azure Resource Mover description: Learn how to move resources across regions using PowerShell in Azure Resource Mover. -+ Last updated 10/04/2021-+ #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover with PowerShell
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
Last updated 12/21/2022-+ #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
Previously updated : 10/24/2022 Last updated : 01/07/2023 #Customer intent:
To use principal (user) attributes, you must have all of the following: Azure AD
You don't meet the prerequisites. To use principal attributes, you must have **all** of the following: - Azure AD Premium P1 or P2 license-- Azure AD permissions for signed-in user, such as the [Attribute Assignment Administrator](../active-directory/roles/permissions-reference.md#attribute-assignment-administrator) role
+- Azure AD permissions for the signed-in user to read at least one attribute set
- Custom security attributes defined in Azure AD
-> [!IMPORTANT]
-> By default, [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) and other administrator roles do not have permissions to read, define, or assign custom security attributes.
- **Solution**
-1. Open **Azure Active Directory** > **Overview** and check the license for your tenant.
+1. Open **Azure Active Directory** > **Custom security attributes**.
+
+ If the **Custom security attributes** page is disabled, you don't have an Azure AD Premium P1 or P2 license. Open **Azure Active Directory** > **Overview** and check the license for your tenant.
+
+ ![Screenshot that shows Custom security attributes page disabled in Azure portal.](./media/conditions-troubleshoot/attributes-disabled.png)
+
+ If you see the **Get started** page, you don't have permissions to read at least one attribute set or custom security attributes haven't been defined yet.
+
+ ![Screenshot that shows Custom security attributes Get started page.](./media/conditions-troubleshoot/attributes-get-started.png)
+
+1. If custom security attributes have been defined, assign one of the following roles at tenant scope or attribute set scope. For more information, see [Manage access to custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-manage.md).
+
+ - [Attribute Definition Reader](../active-directory/roles/permissions-reference.md#attribute-definition-reader)
+ - [Attribute Assignment Reader](../active-directory/roles/permissions-reference.md#attribute-assignment-reader)
+ - [Attribute Definition Administrator](../active-directory/roles/permissions-reference.md#attribute-definition-administrator)
+ - [Attribute Assignment Administrator](../active-directory/roles/permissions-reference.md#attribute-assignment-administrator)
+
+ > [!IMPORTANT]
+ > By default, [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) and other administrator roles do not have permissions to read, define, or assign custom security attributes.
+
+1. If custom security attributes haven't been defined yet, assign the [Attribute Definition Administrator](../active-directory/roles/permissions-reference.md#attribute-definition-administrator) role at tenant scope and add custom security attributes. For more information, see [Add or deactivate custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-add.md).
-1. Open **Azure Active Directory** > **Users** > *user name* > **Assigned roles** and check if the Attribute Assignment Administrator role is assigned to you. If not, ask your Azure AD administrator to you assign you this role. For more information, see [Assign Azure AD roles to users](../active-directory/roles/manage-roles-portal.md).
+ When finished, you should be able to read at least one attribute set. **Principal** should now appear in the **Attribute source** list when you add a role assignment with a condition.
-1. Open **Azure Active Directory** > **Custom security attributes** to see if custom security attributes have been defined and which ones you have access to. If you don't see any custom security attributes, ask your Azure AD administrator to add an attribute set that you can manage. For more information, see [Manage access to custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-manage.md) and [Add or deactivate custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-add.md).
+ ![Screenshot that shows the attribute sets the user can read.](./media/conditions-troubleshoot/attribute-sets-read.png)
### Symptom - Principal does not appear in Attribute source when using PIM
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
Several Azure resources have a dependency on a subscription or a directory. Depe
| System-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must disable and re-enable the managed identities. You must re-create the role assignments. | | User-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must delete, re-create, and attach the managed identities to the appropriate resource. You must re-create the role assignments. | | Azure Key Vault | Yes | Yes | [List Key Vault access policies](#list-key-vaults) | You must update the tenant ID associated with the key vaults. You must remove and add new access policies. |
-| Azure SQL databases with Azure AD authentication integration enabled | Yes | No | [Check Azure SQL databases with Azure AD authentication](#list-azure-sql-databases-with-azure-ad-authentication) | You cannot transfer an Azure SQL database with Azure AD authentication enabled to a different directory. For more information, see [Use Azure Active Directory authentication](/azure/azure-sql/database/authentication-aad-overview). |
+| Azure SQL databases with Azure AD authentication integration enabled | Yes | No | [Check Azure SQL databases with Azure AD authentication](#list-azure-sql-databases-with-azure-ad-authentication) | You cannot transfer an Azure SQL database with Azure AD authentication enabled to a different directory. For more information, see [Use Azure Active Directory authentication](/azure/azure-sql/database/authentication-aad-overview). |
+| Azure Database for MySQL with Azure AD authentication integration enabled | Yes | No | | You cannot transfer an Azure Database for MySQL (Single and Flexible server) with Azure AD authentication enabled to a different directory. |
| Azure Storage and Azure Data Lake Storage Gen2 | Yes | Yes | | You must re-create any ACLs. | | Azure Data Lake Storage Gen1 | Yes | Yes | | You must re-create any ACLs. | | Azure Files | Yes | Yes | | You must re-create any ACLs. |
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 10/06/2022 Last updated : 01/07/2023
The second way to resolve this error is to create the role assignment by using t
az role assignment create --assignee-object-id 11111111-1111-1111-1111-111111111111 --role "Contributor" --scope "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}" ```
-### Symptom - Assigning a role sometimes fails with REST API or ARM templates
+### Symptom - Assigning a role to a new principal sometimes fails
-You create a new service principal and immediately try to assign a role to that service principal and the role assignment sometimes fails.
+You create a new user, group, or service principal, immediately try to assign a role to that principal, and the role assignment sometimes fails. You get a message similar to the following error:
+
+```
+PrincipalNotFound
+Principal {principalId} does not exist in the directory {tenantId}. Check that you have the correct principal ID. If you are creating this principal and then immediately assigning a role, this error might be related to a replication delay. In this case, set the role assignment principalType property to a value, such as ServicePrincipal, User, or Group. See https://aka.ms/docs-principaltype
+```
**Cause**
-The reason is likely a replication delay. The service principal is created in one region; however, the role assignment might occur in a different region that hasn't replicated the service principal yet.
+The reason is likely a replication delay. The principal is created in one region; however, the role assignment might occur in a different region that hasn't replicated the principal yet.
-**Solution**
+**Solution 1**
+
+If you are creating a new user or service principal using the REST API or ARM template, set the `principalType` property when creating the role assignment using the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) API.
+
+| principalType | apiVersion |
+| | |
+| `User` | `2020-03-01-preview` or later |
+| `ServicePrincipal` | `2018-09-01-preview` or later |
+
+For more information, see [Assign Azure roles to a new service principal using the REST API](role-assignments-rest.md#new-service-principal) or [Assign Azure roles to a new service principal using Azure Resource Manager templates](role-assignments-template.md#new-service-principal).
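
For illustration, a role assignment resource in an ARM template can set `principalType` directly. The following is a minimal sketch rather than a full template: the principal ID parameter, the GUID seed, and the choice of the built-in Contributor role (`b24988ac-6180-42a0-ab88-20f7382dd24c`) are placeholders you would adapt, and the `2022-04-01` API version satisfies both thresholds in the table above:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  "name": "[guid(resourceGroup().id, parameters('principalId'), 'contributor')]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
    "principalId": "[parameters('principalId')]",
    "principalType": "ServicePrincipal"
  }
}
```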
+
+**Solution 2**
+
+If you are creating a new user or service principal using Azure PowerShell, set the `ObjectType` parameter to `User` or `ServicePrincipal` when creating the role assignment using [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment). The same underlying API version restrictions of Solution 1 still apply. For more information, see [Assign Azure roles using Azure PowerShell](role-assignments-powershell.md).
+
+**Solution 3**
-Set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later. For more information, see [Assign Azure roles to a new service principal using the REST API](role-assignments-rest.md#new-service-principal) or [Assign Azure roles to a new service principal using Azure Resource Manager templates](role-assignments-template.md#new-service-principal).
+If you are creating a new group, wait a few minutes before creating the role assignment.
### Symptom - ARM template role assignment returns BadRequest status
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Previously updated : 10/01/2021 Last updated : 01/09/2023 + # Azure Route Server support for ExpressRoute and Azure VPN Azure Route Server supports not only third-party network virtual appliances (NVA) running on Azure but also integrates seamlessly with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server.
+> [!WARNING]
+> When you create or delete an Azure Route Server in a virtual network that contains a virtual network gateway (ExpressRoute or VPN), expect downtime until the operation completes.
+>
## How does it work?
-When you deploy an Azure Route Server along with an ExpressRoute gateway and an NVA in a virtual network, by default Azure Route Server doesn't propagate the routes it receives from the NVA and ExpressRoute gateway between each other. Once you enable the route exchange, ExpressRoute and the NVA will learn each other's routes.
+When you deploy an Azure Route Server along with a virtual network gateway and an NVA in a virtual network, by default Azure Route Server doesn't propagate the routes it receives from the NVA and virtual network gateway between each other. Once you enable **branch-to-branch** in Route Server, the virtual network gateway and the NVA will exchange their routes.
For example, in the following diagram:
For example, in the following diagram:
* The ExpressRoute gateway will receive the route from "On-premises 1", which is connected to the SDWAN appliance, along with the virtual network route from Azure Route Server.
- ![Diagram showing ExpressRoute configured with Route Server.](./media/expressroute-vpn-support/expressroute-with-route-server.png)
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN gateway and ExpressRoute are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.

> [!IMPORTANT]
-> * Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515.
-> * When you create or delete an Azure Route Server from a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation complete.
+> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515.
+>
-![Diagram showing ExpressRoute and VPN gateway configured with Route Server.](./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png)
> [!IMPORTANT]
> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred.
>

## Next steps

- Learn more about [Azure Route Server](route-server-faq.md).
- Learn how to [configure Azure Route Server](quickstart-configure-route-server-powershell.md).
-- Learn more about [Azure ExpressRoute and Azure VPN coexistence](../expressroute/expressroute-howto-coexist-resource-manager.md).
+- Learn more about [Azure ExpressRoute and Azure VPN coexistence](../expressroute/how-to-configure-coexisting-gateway-portal.md).
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Previously updated : 12/06/2022 Last updated : 01/09/2023 + #Customer intent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
For pricing details, see [Azure Route Server pricing](https://azure.microsoft.co
## Service Level Agreement (SLA)
-For SLA, see [SLA for Azure Route Server](https://azure.microsoft.com/support/legal/sla/route-server/v1_0/).
+For service level agreement details, see [SLA for Azure Route Server](https://azure.microsoft.com/support/legal/sla/route-server/v1_0/).
## FAQs
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
If the route has the same AS path length, Azure Route Server will program multip
Yes, Azure Route Server propagates the route with the BGP AS Path intact.
+### Do I need to peer each NVA with both Route Server instances?
+Yes, to ensure that VNet routes are successfully advertised over the target NVA connections, and to configure high availability, we recommend peering each NVA instance with both Route Server instances.
+ ### Does Azure Route Server preserve the BGP communities of the route it receives? Yes, Azure Route Server propagates the route with the BGP communities as is.
route-server Tutorial Protect Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-protect-route-server.md
Title: 'Tutorial: Protect your route server with Azure DDoS protection'
+ Title: 'Tutorial: Protect your Route Server with Azure DDoS protection'
description: Learn how to set up a route server and protect it with Azure DDoS protection
Last updated 12/21/2022
-# Tutorial: Protect your route server with Azure DDoS protection
+# Tutorial: Protect your Route Server with Azure DDoS protection
This article helps you create an Azure Route Server with a DDoS protected virtual network. Azure DDoS protection protects your publicly accessible route server from Distributed Denial of Service attacks.
You'll need the Azure Route Server's peer IPs and ASN to complete the configurat
:::image type="content" source="./media/quickstart-configure-route-server-portal/route-server-overview.png" alt-text="Screenshot of Route Server overview page.":::
-## Configure route exchange
-
-1. In the search box at the top of the portal, enter **Route Server**. Select **Route Servers** in the search results.
-
-2. Select **myRouteServer**.
-
-3. In **Settings**, select **Configuration**.
-
-4. Select **Enabled** in **Branch-to-branch**.
-
-5. Select **Save**.
- ## Clean up resources If you're not going to continue to use this application, delete the virtual network, DDoS protection plan, and Route Server with the following steps:
If you're not going to continue to use this application, delete the virtual netw
Advance to the next article to learn how to: > [!div class="nextstepaction"]
-> [Configure peering between Azure Route Server and Quagga network virtual appliance](tutorial-configure-route-server-with-quagga.md)
+> [Configure peering between Azure Route Server and network virtual appliance](tutorial-configure-route-server-with-quagga.md)
search Performance Benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/performance-benchmarks.md
Title: Performance benchmarks description: Learn about the performance of Azure Cognitive Search through various performance benchmarks- -++ Previously updated : 04/07/2021 Last updated : 01/5/2022 # Azure Cognitive Search performance benchmarks
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
Previously updated : 09/09/2022 Last updated : 01/04/2023
-# .NET (C#) code samples for Azure Cognitive Search
+# C# samples for Azure Cognitive Search
Learn about the C# code samples that demonstrate the functionality and workflow of an Azure Cognitive Search solution. These samples use the [**Azure Cognitive Search client library**](/dotnet/api/overview/azure/search) for the [**Azure SDK for .NET**](/dotnet/azure/), which you can explore through the following links.
Code samples from the Azure SDK development team demonstrate API usage. You can
## Doc samples
-Code samples from the Cognitive Search team demonstrate features and workflows. Many of these samples are referenced in tutorials, quickstarts, and how-to articles. You can find these samples in [**Azure-Samples/azure-search-dotnet-samples**](https://github.com/Azure-Samples/azure-search-dotnet-samples) and in [**Azure-Samples/search-dotnet-getting-started**](https://github.com/Azure-Samples/search-dotnet-getting-started/) on GitHub.
+Code samples from the Cognitive Search team demonstrate features and workflows. All of the following samples are referenced in tutorials, quickstarts, and how-to articles that explain the code in detail. You can find these samples in [**Azure-Samples/azure-search-dotnet-samples**](https://github.com/Azure-Samples/azure-search-dotnet-samples) and in [**Azure-Samples/search-dotnet-getting-started**](https://github.com/Azure-Samples/search-dotnet-getting-started/) on GitHub.
-| Samples | Article |
-||-|
-| [quickstart](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart) | Source code for [Quickstart: Create a search index ](search-get-started-dotnet.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [search-website](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-csharp-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
-| [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | Source code for [How to use the .NET client library](search-howto-dotnet-sdk.md). Steps through the basic workflow, but in more detail and discussion of API usage. |
-| [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | Source code for [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md). Synonym lists are used for query expansion, providing matchable terms that are external to an index. |
-| [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | Source code for [Tutorial: Index Azure SQL data using the .NET SDK](search-indexer-tutorial.md). This article shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. |
-| [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | Source code for [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md). |
-| [Create your first app in C#](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-first-app/v11) | Source code for [Tutorial: Create your first search app](tutorial-csharp-create-first-app.md). While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, autocomplete and suggested queries, facets, and filters. |
-| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources) | Source code for [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). |
-| [optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing) | Source code for [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md). |
-| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/tutorial-ai-enrichment) | Source code for [Tutorial: AI-generated searchable content from Azure blobs using the .NET SDK](cognitive-search-tutorial-blob-dotnet.md). |
-
-> [!Tip]
+> [!TIP]
> Try the [Samples browser](/samples/browse/?languages=csharp&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
+| Code sample | Related article | Purpose |
+|-|||
+| [quickstart](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart) | [Quickstart: Create a search index](search-get-started-dotnet.md) | Covers the basic workflow for creating, loading, and querying a search index in C# using sample data. |
+| [search-website](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) | [Tutorial: Add search to web apps](tutorial-csharp-overview.md) | Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
+| [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | [How to use the .NET client library](search-howto-dotnet-sdk.md) | Steps through the basic workflow, but in more detail and with discussion of API usage. |
+| [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md) | Synonym lists are used for query expansion, providing matchable terms that are external to an index. |
+| [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. |
+| [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a customer key. |
+| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md) | Merges content from two data sources into one search index. |
+| [optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md) | Demonstrates optimization techniques for pushing data into a search index. |
+| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. |
+| [Create your first app in C#](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-first-app/v11) | [Tutorial: Create your first search app](tutorial-csharp-create-first-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, autocomplete and suggested queries, facets, and filters.|
+ ## Other samples The following samples are also published by the Cognitive Search team, but aren't referenced in documentation. Associated readme files provide usage instructions. | Samples | Description | ||-|
+| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/check-storage-usage/README.md) | Invokes an Azure function that checks search service storage on a schedule. |
+| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/export-dat) | C# console app that partitions and exports a large index. |
+| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/multiple-search-services) | Issue a single query across multiple search services and combine the results into a single page. |
| [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) | Source code demonstrating indexer connections and indexing of Azure Data Lake Gen2 files and folders that are secured through Azure AD and role-based access controls. | | [azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | Source code for consumable custom skills that you can incorporate in your own solutions. | | [Knowledge Mining Solution Accelerator](/samples/azure-samples/azure-search-knowledge-mining/azure-search-knowledge-mining/) | Includes templates, support files, and analytical reports to help you prototype an end-to-end knowledge mining solution. |
search Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-java.md
Previously updated : 09/09/2022 Last updated : 01/04/2023
-# Java code samples for Azure Cognitive Search
+# Java samples for Azure Cognitive Search
Learn about the Java code samples that demonstrate the functionality and workflow of an Azure Cognitive Search solution. These samples use the [**Azure Cognitive Search client library**](/java/api/overview/azure/search-documents-readme) for the [**Azure SDK for Java**](/azure/developer/java/sdk), which you can explore through the following links.
Code samples from the Azure SDK development team demonstrate API usage. You can
## Doc samples
-Code samples from the Cognitive Search team demonstrate features and workflows. Many of these samples are referenced in tutorials, quickstarts, and how-to articles. You can find these samples in [**Azure-Samples/azure-search-java-samples**](https://github.com/Azure-Samples/azure-search-java-samples) on GitHub.
+Code samples from the Cognitive Search team are located in [**Azure-Samples/azure-search-java-samples**](https://github.com/Azure-Samples/azure-search-java-samples) on GitHub.
| Samples | Article | ||-|
-| [quickstart](https://github.com/Azure-Samples/azure-search-java-samples/tree/java-rest-api/quickstart) | Source code for [Quickstart: Create a search index in Java and REST](search-get-started-java.md). This sample hasn't been updated for the Java SDK. It calls the REST APIs. |
+| [search-java-getting-started](https://github.com/Azure-Samples/azure-search-java-samples/tree/main/search-java-getting-started) | Source code for [Quickstart: Create a search index in Java and REST](search-get-started-java.md). |
-> [!Tip]
+> [!TIP]
> Try the [Samples browser](/samples/browse/?languages=java&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.-
-## Other samples
-
-The following samples are also published by the Cognitive Search team, but aren't referenced in documentation. Associated readme files provide usage instructions.
-
-| Samples | Description |
-||-|
-| [search-java-getting-started](https://github.com/Azure-Samples/azure-search-java-samples/tree/master/search-java-getting-started) | Uses the Java SDK client library to create, load, and query a search index. This sample is currently standalone. |
-| [search-java-indexer-demo](https://github.com/Azure-Samples/azure-search-java-samples/tree/java-rest-api/search-java-indexer-demo) | Demonstrates an Azure Cosmos DB indexer in Java. This sample hasn't been updated for the Java SDK. It calls the REST APIs.|
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md
Previously updated : 09/09/2022 Last updated : 01/04/2023
-# JavaScript code samples for Azure Cognitive Search
+# JavaScript samples for Azure Cognitive Search
Learn about the JavaScript code samples that demonstrate the functionality and workflow of an Azure Cognitive Search solution. These samples use the [**Azure Cognitive Search client library**](/javascript/api/overview/azure/search-documents-readme) for the [**Azure SDK for JavaScript**](/azure/developer/javascript/), which you can explore through the following links.
Code samples from the Cognitive Search team demonstrate features and workflows.
| [quickstart](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/quickstart/v11) | Source code for [Quickstart: Create a search index in JavaScript](search-get-started-javascript.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. | | [search-website](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-javascript-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
-> [!Tip]
+> [!TIP]
> Try the [Samples browser](/samples/browse/?languages=javascript&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language. ## Other samples
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
Previously updated : 09/09/2022 Last updated : 01/04/2023
-# Python code samples for Azure Cognitive Search
+# Python samples for Azure Cognitive Search
Learn about the Python code samples that demonstrate the functionality and workflow of an Azure Cognitive Search solution. These samples use the [**Azure Cognitive Search client library**](/python/api/overview/azure/search-documents-readme) for the [**Azure SDK for Python**](/azure/developer/python/), which you can explore through the following links.
Code samples from the Cognitive Search team demonstrate features and workflows.
| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. | | [AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill) | Source code for [Example: Create a custom skill using Python](cognitive-search-custom-skill-python.md). This article demonstrates indexer and skillset integration with deep learning models in Azure Machine Learning. |
-> [!Tip]
+> [!TIP]
> Try the [Samples browser](/samples/browse/?languages=python&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
search Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-rest.md
Previously updated : 09/15/2022 Last updated : 01/04/2023
-# REST code samples for Azure Cognitive Search
+# REST samples for Azure Cognitive Search
Learn about the REST API samples that demonstrate the functionality and workflow of an Azure Cognitive Search solution. These samples use the [**Search REST APIs**](/rest/api/searchservice).
search Search Get Started Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-javascript.md
Title: 'Quickstart: Create a search index in JavaScript' description: In this JavaScript quickstart, learn how to create an index, load data, and run queries on Azure Cognitive Search using JavaScript--++++ ms.devlang: javascript Previously updated : 09/09/2022 Last updated : 01/05/2023
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Title: Create an index alias
description: Create an alias to define a secondary name that can be used to refer to an index for querying, indexing, and other operations. --++ Previously updated : 03/01/2022 Last updated : 01/05/2023 # Create an index alias in Azure Cognitive Search
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
Title: Authorize search app requests using Azure AD
description: Acquire a token from Azure AD to authorize search requests to an app built on Azure Cognitive Search. --++ Previously updated : 7/20/2022 Last updated : 1/05/2022
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
Title: Azure CLI scripts using the az search module
description: Create and configure an Azure Cognitive Search service with the Azure CLI. You can scale a service up or down, manage admin and query api-keys, and query for system information. --++ ms.devlang: azurecli Previously updated : 06/08/2022 Last updated : 01/05/2023 # Manage your Azure Cognitive Search service with the Azure CLI
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
Last updated 02/26/2022
# Troubleshooting common issues with Shared Private Links
-A shared private link allows Azure Cognitive Search to make secure outbound connections over a private endpoint when accessing customer resources in a virtual network. This article can help you resolve errors that might occur.
+A shared private link allows Azure Cognitive Search to make secure outbound connections over a private endpoint when accessing customer resources in a virtual network. This article can help you resolve errors that might occur.
Creating a shared private link is a search service control plane operation. You can [create a shared private link](search-indexer-howto-access-private.md) using either the portal or a [Management REST API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update). During provisioning, the state of the request is "Updating". After the operation completes successfully, status is "Succeeded". A private endpoint to the resource, along with any DNS zones and mappings, is created. This endpoint is used exclusively by your search service instance and is managed through Azure Cognitive Search.
Shared private link resources that have failed Azure Resource Manager deployment
A private endpoint is created to the target Azure resource as specified in the shared private link creation request. This is one of the final steps in the asynchronous Azure Resource Manager deployment operation, but Azure Cognitive Search needs to link the private endpoint's private IP address as part of its network configuration. Once this link is done, the `provisioningState` of the shared private link resource will go to a terminal success state `Succeeded`. Customers should only approve or deny (or, in general, modify the configuration of the backing private endpoint) after the state has transitioned to `Succeeded`. Modifying the private endpoint in any way before this could result in an incomplete deployment operation and can cause the shared private link resource to end up (either immediately, or usually within a few hours) in a `Failed` state.
-## Resource stalled in an "Updating" or "Incomplete" state
+## Search service network connectivity change stalled in an "Updating" state
-Typically, a shared private link resource should go a terminal state (`Succeeded` or `Failed`) in a few minutes after the request has been accepted by the search RP.
+Shared private links and private endpoints are used when search service **Public Network Access** is **Disabled**. Typically, changing network connectivity should succeed in a few minutes after the request has been accepted. In some circumstances, Azure Cognitive Search may take several hours to complete the connectivity change operation.
-In rare circumstances, Azure Cognitive Search can fail to correctly mark the state of the shared private link resource to a terminal state (`Succeeded` or `Failed`). This usually occurs due to an unexpected or catastrophic failure in the search RP. Shared private link resources are automatically transitioned to a `Failed` state if it has been "stuck" in a non-terminal state for more than a few hours.
+ :::image type="content" source="media/troubleshoot-shared-private-link-resources/update-network-access.png" alt-text="Screenshot of changing public network access to disabled." border="true":::
+
+If you observe that the connectivity change operation is taking a significant amount of time, wait for a few hours. Connectivity changes involve steps such as updating DNS records, which may take longer than expected.
+
+If **Public Network Access** is changed, existing shared private links and private endpoints may not work correctly. If existing shared private links and private endpoints stop working during a connectivity change operation, wait a few hours for the operation to complete. If they are still not working, try deleting and recreating them.
+
+## Shared private link resource stalled in an "Updating" or "Incomplete" state
+
+Typically, a shared private link resource should go to a terminal state (`Succeeded` or `Failed`) in a few minutes after the request has been accepted.
+
+In rare circumstances, Azure Cognitive Search can fail to correctly mark the state of the shared private link resource to a terminal state (`Succeeded` or `Failed`). This usually occurs due to an unexpected failure. Shared private link resources are automatically transitioned to a `Failed` state if they have been "stuck" in a non-terminal state for more than a few hours.
If you observe that the shared private link resource has not transitioned to a terminal state, wait for a few hours to ensure that it becomes `Failed` before you can delete it and re-create it. Alternatively, instead of waiting you can try to create another shared private link resource with a different name (keeping all other parameters the same). ## Updating a shared private link resource
-An existing shared private link resource can be updated using the [Create or Update API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update). Search RP only allows for narrow updates to the shared private link resource - only the request message can be modified via this API.
+An existing shared private link resource can be updated using the [Create or Update API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update). Search only allows for narrow updates to the shared private link resource - only the request message can be modified via this API.
+ It isn't possible to update any of the "core" properties of an existing shared private link resource (such as `privateLinkResourceId` or `groupId`) and this will always be unsupported. If any other property besides the request message needs to be changed, we advise customers to delete and re-create the shared private link resource.
An existing shared private link resource can be updated using the [Create or Upd
Customers can delete an existing shared private link resource via the [Delete API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/delete). Similar to the process of creation (or update), this is also an asynchronous operation with four steps:
-1. You request a search RP to delete the shared private link resource.
+1. You request a search service to delete the shared private link resource.
-1. Search RP validates that the resource exists and is in a state valid for deletion. If so, it initiates an Azure Resource Manager delete operation to remove the resource.
+1. The search service validates that the resource exists and is in a state valid for deletion. If so, it initiates an Azure Resource Manager delete operation to remove the resource.
1. Search queries for the completion of the operation (which usually takes a few minutes). At this point, the shared private link resource would have a provisioning state of "Deleting".
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
Title: 'Tutorial: create a custom analyzer'
description: Learn how to build a custom analyzer to improve the quality of search results in Azure Cognitive Search. --++ Previously updated : 01/29/2021 Last updated : 01/05/2023 # Tutorial: Create a custom analyzer for phone numbers
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
Title: 'C# tutorial optimize indexing with the push API'
description: Learn how to efficiently index data using Azure Cognitive Search's push API. This tutorial and sample code are in C#. --++ Previously updated : 1/29/2021 Last updated : 1/05/2023
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are generally available for Customer Lockbox:
- Azure Data Explorer - Azure Data Factory - Azure Database for MySQL
+- Azure Database for MySQL Flexible Server
- Azure Database for PostgreSQL - Azure Databricks - Azure Edge Zone Platform Storage
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following table displays the current Microsoft Defender for IoT feature avai
| [Manual and automatic threat intelligence updates](../../defender-for-iot/how-to-work-with-threat-intelligence-packages.md) | GA | GA | | **Unify IT, and OT security with SIEM, SOAR and XDR** | | | | [Active Directory](../../defender-for-iot/organizations/integrate-with-active-directory.md) | GA | GA |
-| [ArcSight](../../defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md#accelerate-incident-workflows-by-using-alert-groups) | GA | GA |
+| [ArcSight](../../defender-for-iot/organizations/integrate-overview.md#micro-focus-arcsight) | GA | GA |
| [ClearPass (Alerts & Inventory)](../../defender-for-iot/organizations/tutorial-clearpass.md) | GA | GA | | [CyberArk PSM](../../defender-for-iot/organizations/tutorial-cyberark.md) | GA | GA | | [Email](../../defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md#email-address-action) | GA | GA |
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
Azure provides a wide array of options to configure and customize security to me
Identity Protection uses adaptive machine learning algorithms and heuristics to detect anomalies and risk detections that might indicate that an identity has been compromised. Using this data, Identity Protection generates reports and alerts so that you can investigate these risk detections and take appropriate remediation or mitigation action.
-Azure Active Directory Identity Protection is more than a monitoring and reporting tool. Based on risk detections, Identity Protection calculates a user risk level for each user, so that you can configure risk-based policies to automatically protect the identities of your organization.
-
-These risk-based policies, in addition to other [Conditional Access controls](../../active-directory/conditional-access/overview.md) that are provided by Azure Active Directory and [EMS](../../active-directory/conditional-access/overview.md), can automatically block or offer adaptive remediation actions that include password resets and multi-factor authentication enforcement.
- ### Identity Protection capabilities
-Azure Active Directory Identity Protection is more than a monitoring and reporting tool. To protect your organization's identities, you can configure risk-based policies that automatically respond to detected issues when a specified risk level has been reached. These policies, in addition to other Conditional Access controls provided by Azure Active Directory and EMS, can either automatically block or initiate adaptive remediation actions including password resets and multi-factor authentication enforcement.
+Azure Active Directory Identity Protection is more than a monitoring and reporting tool. To protect your organization's identities, you can configure risk-based policies that automatically respond to detected issues when a specified risk level has been reached. These policies, in addition to other [Conditional Access controls](../../active-directory/conditional-access/overview.md) provided by Azure Active Directory and [EMS](../../active-directory/conditional-access/overview.md), can either automatically block or initiate adaptive remediation actions including password resets and multi-factor authentication enforcement.
Examples of some of the ways that Azure Identity Protection can help secure your accounts and identities include:
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
This article provides specific details and differences for Microsoft Sentinel.
## Gap analysis between agents The following tables show gap analyses for the log types that currently rely on agent-based data collection for Microsoft Sentinel. This will be updated as support for AMA grows towards parity with the Log Analytics agent.
-> [!IMPORTANT]
-> The AMA currently has a limit of 5,000 Events Per Second (EPS). Verify whether this limit works for your organization, especially if you are using your servers as log forwarders, such as for Windows forwarded events or Syslog events.
- ### Windows logs |Log type / Support |Azure Monitor agent support |Log Analytics agent support |
The following tables show gap analyses for the log types that currently rely on
|**Sysmon** | Collection only | Collection only | |**DNS logs** | [Windows DNS servers via AMA connector](connect-dns-ama.md) (Public preview) | [Windows DNS Server connector](data-connectors-reference.md#windows-dns-server-preview) (Public preview) |
+> [!IMPORTANT]
+> The AMA **for Windows** currently has a limit of 5,000 Events Per Second (EPS). Verify whether this limit works for your organization, especially if you are using your servers as log forwarders for Windows security events or forwarded events.
+ ### Linux logs |Log type / Support |Azure Monitor agent support |Log Analytics agent support |
sentinel Audit Sentinel Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-sentinel-data.md
Title: Audit Microsoft Sentinel queries and activities | Microsoft Docs
description: This article describes how to audit queries and activities performed in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/09/2023
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
description: Learn what log sources might be appropriate for Basic Log ingestion
Previously updated : 04/25/2022 Last updated : 01/05/2023 # Log sources to use for Basic Logs ingestion
sentinel Best Practices Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-data.md
description: Learn about best practices to employ when connecting data sources t
Previously updated : 11/09/2021- Last updated : 01/09/2023 # Data collection best practices
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
description: Learn about best practices for designing your Microsoft Sentinel wo
Previously updated : 11/09/2021- Last updated : 01/09/2023 # Microsoft Sentinel workspace architecture best practices
sentinel Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices.md
description: Learn about best practices to employ when managing your Microsoft S
Previously updated : 11/09/2021- Last updated : 01/09/2023 # Best practices for Microsoft Sentinel
Schedule the following Microsoft Sentinel activities regularly to ensure continu
## Integrate with Microsoft security services
-Microsoft Sentinel is empowered by the components that send data to your workspace, and is made stronger through integrations with other Microsoft services. Any logs ingested into products such as Microsoft Defender for Cloud Apps, Microsoft Defender for Endpoint, and Microsoft Defender for Identity allow these services to create detections, and in turn provide those detections to Microsoft Sentinel. Logs can also be ingested directly into Microsoft Sentinel to provide a fuller picture of events and incidents.
+Microsoft Sentinel is empowered by the components that send data to your workspace, and is made stronger through integrations with other Microsoft services. Any logs ingested into products such as Microsoft Defender for Cloud Apps, Microsoft Defender for Endpoint, and Microsoft Defender for Identity allow these services to create detections, and in turn provide those detections to Microsoft Sentinel. Logs can also be ingested directly into Microsoft Sentinel to provide a fuller picture for events and incidents.
For example, the following image shows how Microsoft Sentinel ingests data from other Microsoft services and multi-cloud and partner platforms to provide coverage for your environment:
Entity behavior in Microsoft Sentinel allows users to review and investigate act
- [Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](enable-entity-behavior-analytics.md) - [Investigate incidents with UEBA data](investigate-with-ueba.md)-- [Microsoft Sentinel UEBA reference](ueba-reference.md)
+- [Microsoft Sentinel UEBA enrichments reference](ueba-reference.md)
### Handle incidents with watchlists and threat intelligence
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
Previously updated : 02/22/2022 Last updated : 01/05/2023 # Manage and monitor costs for Microsoft Sentinel
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Previously updated : 07/14/2022 Last updated : 10/04/2022 #Customer intent: As a SOC manager, plan Microsoft Sentinel costs so I can understand and optimize the costs of my SIEM.
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
For more information, see the [Azure DevOps documentation](/azure/devops/pipelin
> In both GitHub and Azure DevOps, make sure that you keep the trigger path and deployment path directories consistent. >
+## Scale your deployments with parameter files
+
+Rather than passing parameters as inline values in your content files, you can [use a JSON file that contains the parameter values](../azure-resource-manager/templates/parameter-files.md). You can then map those parameter JSON files to their associated Sentinel content files to better scale your deployments across different workspaces. There are a number of ways to map parameter files to Sentinel files, and the repositories deployment pipeline considers them in the following order:
+
+
+1. Is there a mapping in the sentinel-deployment.config? See [Customize your connection configuration](ci-cd-custom-deploy.md#customize-your-connection-configuration) to learn more.
+1. Is there a workspace-mapped parameter file? This would be a parameter file in the same directory as the content files that ends with *.parameters-\<WorkspaceID\>.json*.
+1. Is there a default parameter file? This would be any parameter file in the same directory as the content files that ends with *.parameters.json*.
+
+We encourage you to map your parameter files through the configuration file or by specifying the workspace ID in the file name to avoid clashes in scenarios with multiple deployments. For example, a content file named *MyRule.json* could be paired with *MyRule.parameters-\<WorkspaceID\>.json* for one workspace and fall back to *MyRule.parameters.json* everywhere else.
+
+> [!IMPORTANT]
+> Once a parameter file match is determined based on the above mapping precedence, the pipeline will ignore any remaining mappings.
+>
+
+Modifying the mapped parameter file listed in the sentinel-deployment.config will trigger the deployment of its paired content file. Adding or modifying a *.parameters-\<workspaceID\>.json* file or *.parameters.json* file will also trigger a deployment of the paired content file(s) along with the newly modified parameters, unless a higher-precedence parameter mapping is in place. Other content files won't be deployed as long as the smart deployments feature is still enabled in the workflow/pipeline definition file.
## Customize your connection configuration
Here's an example of the entire contents of a valid *sentinel-deployment.config*
Add full path names to the `"prioritizedcontentfiles":` section. Wildcard matching is not supported at this time. -- **To exclude content files**, modify the `"excludecontentfiles":` section with full path names of individual .json deployment files.
+- **To exclude content files**, modify the `"excludecontentfiles":` section with full path names of individual .json content files.
- **To map parameters**:
- The deployment script will accept three methods to map parameters. The precedence is determined for each included .json deployment file in your repository as follows:
-
- :::image type="content" source="media/ci-cd-custom-deploy/deploy-parameter-file-precedence.svg" alt-text="A diagram showing the precedence of parameter file mappings.":::
-
- 1. Is there a mapping in the sentinel-deployment.config?
- 1. Is there a workspace parameter file?
- 1. Is there a default parameter file?
-
-Modifying the mapped parameter file listed in the sentinel-deployment.config will trigger the deployment of its paired content file. Adding or modifying a *.parameters-\<workspaceID\>.json* file or *.parameters.json* file triggers a deployment of that corresponding content file along with the newly modified parameters, unless a higher precedence parameter mappings is in place. Other content files won't be deployed if the smart deployments feature is still enabled.
+ The deployment script will accept three methods of mapping parameters as described in [Scale your deployments with parameter files](ci-cd-custom-deploy.md#scale-your-deployments-with-parameter-files). Mapping parameters through the sentinel-deployment.config takes the highest precedence and will guarantee that a given parameter file will be mapped to its associated content files. Simply modify the `"parameterfilemappings":` section with your target connection's workspace ID and full path names of individual .json files.
## Next steps
For more information, see:
- [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent) - [Create Resource Manager parameter file](../azure-resource-manager/templates/parameter-files.md)-- [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md)
+- [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md)
sentinel Configure Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-retention.md
Previously updated : 10/03/2022 Last updated : 01/05/2023 #Customer intent: As an Azure account administrator, I want to archive older but less used data to save retention costs.
No resources were created but you might want to restore the data retention setti
## Next steps > [!div class="nextstepaction"]
-> [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md?tabs=portal-1%2cportal-2)
+> [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md?tabs=portal-1%2cportal-2)
sentinel Connect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-virtual-desktop.md
Title: Connect Azure Virtual Desktop to Microsoft Sentinel | Microsoft Docs
description: Learn to connect your Azure Virtual Desktop data to Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/09/2023 - # Connect Azure Virtual Desktop data to Microsoft Sentinel
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Title: Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentine
description: Learn how to deploy a log forwarder, consisting of a Syslog daemon and the Log Analytics agent, as part of the process of ingesting Syslog and CEF logs to Microsoft Sentinel. Previously updated : 12/23/2021 Last updated : 01/09/2023 - # Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel
sentinel Connect Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-purview.md
+
+ Title: Stream data from Microsoft Purview Information Protection to Microsoft Sentinel
+description: Stream data from Microsoft Purview Information Protection (formerly Microsoft Information Protection) to Microsoft Sentinel so you can analyze and report on data from the Microsoft Purview labeling clients and scanners.
++ Last updated : 01/02/2023+
+#Customer intent: As a security operator, I want to get specific labeling data from Microsoft Purview, so I can track, analyze, report on the data and use it for compliance purposes.
++
+# Stream data from Microsoft Purview Information Protection to Microsoft Sentinel
+
+This article describes how to stream data from Microsoft Purview Information Protection (formerly Microsoft Information Protection or MIP) to Microsoft Sentinel. You can use the data ingested from the Microsoft Purview labeling clients and scanners to track, analyze, report on the data, and use it for compliance purposes.
+
+> [!IMPORTANT]
+> The Microsoft Purview Information Protection connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+Auditing and reporting are an important part of organizations' security and compliance strategy. With the continued expansion of the technology landscape that has an ever-increasing number of systems, endpoints, operations, and regulations, it becomes even more important to have a comprehensive logging and reporting solution in place.
+
+With the Microsoft Purview Information Protection connector, you stream auditing events generated from unified labeling clients and scanners. The data is then emitted to the Microsoft 365 audit log for central reporting in Microsoft Sentinel.
+
+With the connector, you can:
+
+- Track adoption of labels, explore, query, and detect events.
+- Monitor labeled and protected documents and emails.
+- Monitor user access to labeled documents and emails, while tracking classification changes.
+- Gain visibility into activities performed on labels, policies, configurations, files and documents. This visibility helps security teams identify security breaches, and risk and compliance violations.
+- Use the connector data during an audit, to prove that the organization is compliant.
+
+### Azure Information Protection connector vs. Microsoft Purview Information Protection connector
+
+This connector replaces the Azure Information Protection (AIP) data connector. The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature. As of **March 31, 2023**, the AIP analytics and audit logs public preview will be retired; moving forward, the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview) will be used instead.
+
+For more information:
+- See [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).
+- Learn how to [disconnect the AIP connector](#disconnect-the-azure-information-protection-connector).
+
+When you enable the Microsoft Purview Information Protection connector, audit logs stream into the standardized
+`MicrosoftPurviewInformationProtection` table. Data is gathered through the [Office Management API](/office/office-365-management-api/office-365-management-activity-api-schema), which uses a structured schema. The new standardized schema is adjusted to enhance the deprecated schema used by AIP, with more fields and easier access to parameters.
+
+Review the list of supported [audit log record types and activities](microsoft-purview-record-types-activities.md).
+
+## Prerequisites
+
+Before you begin, verify that you have:
+
+- The Microsoft Sentinel solution enabled.
+- A defined Microsoft Sentinel workspace.
+- A valid license to [Microsoft Purview Information Protection](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance).
+- [Enabled Sensitivity labels for Office](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide#use-the-microsoft-purview-compliance-portal-to-enable-support-for-sensitivity-labels&preserve-view=true) and [enabled auditing](/microsoft-365/compliance/turn-audit-log-search-on-or-off?view=o365-worldwide#use-the-compliance-center-to-turn-on-auditing&preserve-view=true).
+- The Global Administrator or Security Administrator role on the workspace.
+
+## Set up the connector
+
+> [!NOTE]
+> If you set the connector on a workspace located in a different region than your Office 365 location, data might be streamed across regions.
+
+1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
+1. In the **Data connectors** blade, in the search bar, type *Purview*.
+1. Select the **Microsoft Purview Information Protection (Preview)** connector.
+1. Below the connector description, select **Open connector page**.
+1. Under **Configuration**, select **Connect**.
+
+ When a connection is established, the **Connect** button changes to **Disconnect**. You're now connected to Microsoft Purview Information Protection.
+
+Review the list of supported [audit log record types and activities](microsoft-purview-record-types-activities.md).
+
+## Disconnect the Azure Information Protection connector
+
+We recommend using the Azure Information Protection connector and the Microsoft Purview Information Protection connector simultaneously (both enabled) for a short testing period. After the testing period, we recommend that you disconnect the Azure Information Protection connector to avoid data duplication and redundant costs.
+
+To disconnect the Azure Information Protection connector:
+
+1. In the **Data connectors** blade, in the search bar, type *Azure Information Protection*.
+1. Select **Azure Information Protection**.
+1. Below the connector description, select **Open connector page**.
+1. Under **Configuration**, select **Disconnect**.
+
+## Known issues and limitations
+
+- For label downgrade events, the Office Management API doesn't return the names of the labels before and after the downgrade. To retrieve this information, extract the `labelId` of each label and enrich the results.
+
+ Here's an example KQL query:
+
+ ```kusto
+ let labelsMap = parse_json('{'
+ '"566a334c-ea55-4a20-a1f2-cef81bfaxxxx": "MyLabel1",'
+ '"aa1c4270-0694-4fe6-b220-8c7904b0xxxx": "MyLabel2",'
+ '"MySensitivityLabelId": "MyLabel3"'
+ '}');
+ MicrosoftPurviewInformationProtection
+ | extend SensitivityLabelName = iif(isnotempty(SensitivityLabelId),
+ tostring(labelsMap[tostring(SensitivityLabelId)]), "")
+ | extend OldSensitivityLabelName = iif(isnotempty(OldSensitivityLabelId),
+ tostring(labelsMap[tostring(OldSensitivityLabelId)]), "")
+ ```
+
+- The `MicrosoftPurviewInformationProtection` table and the `OfficeActivity` table might include some duplicated events.
+
+## Next steps
+
+In this article, you learned how to set up the Microsoft Purview Information Protection connector to track, analyze, report on the data, and use it for compliance purposes. To learn more about Microsoft Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
Title: Resources for creating Microsoft Sentinel custom connectors | Microsoft D
description: Learn about available resources for creating custom connectors for Microsoft Sentinel. Methods include the Log Analytics agent and API, Logstash, Logic Apps, PowerShell, and Azure Functions. - Previously updated : 11/21/2021 Last updated : 01/09/2023
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
## Azure Information Protection (Preview)
+> [!NOTE]
+> The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature. As of **March 31, 2023**, the AIP analytics and audit logs public preview will be retired; moving forward, the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview) will be used instead.
+>
+> For more information, see [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).
+>
+
+See the [Microsoft Purview Information Protection](#microsoft-purview-information-protection-preview) connector, which will replace this connector.
+ | Connector attribute | Description | | | | | **Data ingestion method** | [**Azure service-to-service integration**](connect-azure-windows-microsoft-services.md) |
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| **DCR support** | Not currently supported | | **Supported by** | Microsoft | -
-> [!NOTE]
-> The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature. As of **March 18, 2022**, we are sunsetting the AIP analytics and audit logs public preview, and moving forward will be using the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview). Full retirement is scheduled for **September 30, 2022**.
->
-> For more information, see [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).
->
- ## Azure Key Vault | Connector attribute | Description |
You will only see the storage types that you actually have defined resources for
| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-workspace-transformations-portal.md) | | **Kusto function alias:** | CGFWFirewallActivity |
-| **Kusto function URL:** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Barracuda%20CloudGen%20Firewall/Parsers/CGFWFirewallActivity |
+| **Kusto function URL:** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Barracuda%20CloudGen%20Firewall/Parsers/CGFWFirewallActivity.txt |
| **Vendor documentation/<br>installation instructions** | https://aka.ms/Sentinel-barracudacloudfirewall-connector | | **Supported by** | [Barracuda](https://www.barracuda.com/support) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | ProjectActivity | | **Supported by** | Microsoft |
+## Microsoft Purview Information Protection (Preview)
+| Connector attribute | Description |
+|---|---|
+| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-microsoft-purview.md)** |
+| **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply. |
+| **Log Analytics table(s)** | MicrosoftPurviewInformationProtection |
+| **Supported by** | Microsoft |
+ ## Microsoft Sysmon for Linux (Preview)
If a longer timeout duration is required, consider upgrading to an [App Service
| **Vendor documentation/<br>installation instructions** | [Salesforce REST API Developer Guide](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm)<br>Under **Set up authorization**, use **Session ID** method instead of OAuth. | | **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPY) | | **Kusto function alias** | SalesforceServiceCloud |
-| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-SalesforceServiceCloud-parser |
+| **Kusto function URL/<br>Parser config instructions** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Salesforce%20Service%20Cloud/Parsers/SalesforceServiceCloud.txt |
| **Application settings** | <li>SalesforceUser<li>SalesforcePass<li>SalesforceSecurityToken<li>SalesforceConsumerKey<li>SalesforceConsumerSecret<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
description: Use a decision tree to understand how you might want to design your
Previously updated : 11/09/2021- Last updated : 01/09/2023 # Design your Microsoft Sentinel workspace architecture
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/false-positives.md
Title: Handle false positives in Microsoft Sentinel description: Learn how to resolve false positives in Microsoft Sentinel by creating automation rules or modifying analytics rules to specify exceptions.--++ Previously updated : 11/09/2021- Last updated : 01/09/2023 # Handle false positives in Microsoft Sentinel
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
Previously updated : 08/18/2022 Last updated : 01/05/2023 #Customer intent: As a security-engineer, I want to get syslog data into Microsoft Sentinel so that I can use the data with other data to do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get syslog data into my Log Analytics workspace to monitor my linux-based devices.
Evaluate whether you still need the resources you created like the virtual machi
## Next steps > [!div class="nextstepaction"]
-> [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
+> [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
sentinel Geolocation Data Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geolocation-data-api.md
Title: Enrich entities with geolocation data in Microsoft Sentinel using REST API | Microsoft Docs
+ Title: Enrich entities with geolocation data in Microsoft Sentinel using REST API
description: This article describes how you can enrich entities in Microsoft Sentinel with geolocation data via REST API.-+ - Previously updated : 11/09/2021- Last updated : 01/09/2023+ # Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)
sentinel Ingestion Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ingestion-delay.md
Title: Handle ingestion delay in Microsoft Sentinel | Microsoft Docs
description: Handle ingestion delay in Microsoft Sentinel scheduled analytics rules. Previously updated : 04/25/2021 Last updated : 01/09/2023
sentinel Microsoft Purview Record Types Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-purview-record-types-activities.md
+
+ Title: Microsoft Purview Information Protection connector reference - audit log record types and activities support in Microsoft Sentinel
+description: This article lists supported audit log record types and activities when using the Microsoft Purview Information Protection connector with Microsoft Sentinel.
+++ Last updated : 01/02/2023++
+# Microsoft Purview Information Protection connector reference - audit log record types and activities support
+
+This article lists supported audit log record types and activities when using the Microsoft Purview Information Protection connector with Microsoft Sentinel.
+
+When you use the [Microsoft Purview Information Protection connector](connect-microsoft-purview.md), you stream audit logs into the
+`MicrosoftPurviewInformationProtection` standardized table. Data is
+gathered through the [Office Management API](/office/office-365-management-api/office-365-management-activity-api-schema), which uses a structured schema.
+
+## Supported audit log record types
++
+|Value |Member |Name |Description |Operations |
+|---|---|---|---|---|
+|93 |`AipDiscover` |Microsoft Purview scanner events. |Describes the type of access. | |
+|94 |`AipSensitivityLabelAction` |Microsoft Purview sensitivity label event. |The operation type for the audit log. Use the name of the user or admin activity for a description of the most common operations: <ul><li>`SensitivityLabelApplied`</li><li>`SensitivityLabelUpdated`</li><li>`SensitivityLabelRemoved`</li><li>`SensitivityLabelPolicyMatched`</li><li>`SensitivityLabeledFileOpened`</li></ul> | |
+|95 |`AipProtectionAction` |Microsoft Purview protection events. |Contains information related to Microsoft Purview protection events. | |
+|96 |`AipFileDeleted` |Microsoft Purview file deletion event. |Contains information related to Microsoft Purview file deletion events. | |
+|97 |`AipHeartBeat` |Microsoft Purview heartbeat event. |The operation type for the audit log. Use the name of the user or admin activity for a description of the most common operations or activities: <ul><li>`SensitivityLabelApplied`</li><li>`SensitivityLabelUpdated`</li><li>`SensitivityLabelRemoved`</li><li>`SensitivityLabelPolicyMatched`</li><li>`SensitivityLabeledFileOpened`</li></ul> | |
+|43 |`MipLabel` |Events detected in the transport pipeline of email messages that are tagged (manually or automatically) with sensitivity labels. | | |
+|82 |`SensitivityLabelPolicyMatch` |Events generated when a file labeled with a sensitivity label is opened or renamed. | | |
+|83 |`SensitivityLabelAction` |Events generated when sensitivity labels are applied, updated, or removed. | | |
+|84 |`SensitivityLabeledFileAction` |Events generated when a file labeled with a sensitivity label is opened or renamed. | | |
+|71 |`MipAutoLabelSharePointItem` |Auto-labeling events in SharePoint. | | |
+|72 |`MipAutoLabelSharePointPolicyLocation` |Auto-labeling policy events in SharePoint. | | |
+|75 |`MipAutoLabelExchangeItem` |Auto-labeling events in Microsoft Exchange. | | |
++
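+For example, the following KQL query, a minimal sketch, summarizes recent events by record type and operation. It assumes that the `MicrosoftPurviewInformationProtection` table exposes the `RecordType` and `Operation` columns from the Office Management API schema.
+
+```kusto
+// Summarize recent Microsoft Purview Information Protection events by record type and operation.
+// RecordType and Operation are assumed to follow the Office Management API schema.
+MicrosoftPurviewInformationProtection
+| where TimeGenerated > ago(24h)
+| summarize Events = count() by RecordType, Operation
+| order by Events desc
+```
+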
+## Supported activities
+
+|Friendly name |Operation |Description |
+|---|---|---|
+|Applied sensitivity label to file |`FileSensitivityLabelApplied` |A sensitivity label was applied to a document via Microsoft 365 apps, Office on the web, or an auto-labeling policy. |
+|Changed sensitivity label applied to file |`FileSensitivityLabelChanged` |A different sensitivity label was applied to a document via Office on the web or an auto-labeling policy. |
+|Removed sensitivity label from file |`FileSensitivityLabelRemoved` |A sensitivity label was removed from a document via Microsoft 365 apps, Office on the web, an auto-labeling policy, or the [Unlock-SPOSensitivityLabelEncryptedFile](/powershell/module/sharepoint-online/unlock-sposensitivitylabelencryptedFile) cmdlet. |
+|Applied sensitivity label to site |`SensitivityLabelApplied` | A sensitivity label was applied to a SharePoint or Teams site. |
+|Changed sensitivity label applied to file |`SensitivityLabelUpdated` |A different sensitivity label was applied to a document. |
+|Removed sensitivity label from site |`SensitivityLabelRemoved` |A sensitivity label was removed from a SharePoint or Teams site. |
+| |`SiteSensitivityLabelApplied` |A sensitivity label was applied to a SharePoint or Teams site. |
+|Changed sensitivity label on a site |`SensitivityLabelChanged` |A different sensitivity label was applied to a SharePoint or Teams site. |
+|Removed sensitivity label from site |`SiteSensitivityLabelRemoved` |A sensitivity label was removed from a SharePoint or Teams site. |
+|Document |`DocumentSensitivityMismatchDetected` |Non-auditable activity. Signals to Substrate that the item was removed from the SharedWithMe view. This is the same as the `RemovedFromSharedWithMe` operation, but without audit. |
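+
+For example, the following KQL query, a minimal sketch, lists recent label-removal activity. The `Operation`, `UserId`, and `ObjectId` column names are assumed to follow the Office Management API schema; adjust them to the columns available in your workspace.
+
+```kusto
+// List recent label-removal operations from the Microsoft Purview Information Protection table.
+// Operation, UserId, and ObjectId are assumed to follow the Office Management API schema.
+MicrosoftPurviewInformationProtection
+| where TimeGenerated > ago(7d)
+| where Operation in ("FileSensitivityLabelRemoved", "SensitivityLabelRemoved", "SiteSensitivityLabelRemoved")
+| project TimeGenerated, Operation, UserId, ObjectId
+```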
+
+## Next steps
+
+In this article, you learned about the audit log record types and activities supported when you use the Microsoft Purview Information Protection connector. To learn more about Microsoft Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
This article describes a set of different tools used to transfer your historical
## Azure Monitor Basic Logs/Archive
-Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md#view-a-tables-log-data-plan). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
+Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
### Azure Monitor custom log ingestion tool
sentinel Monitor Key Vault Honeytokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-key-vault-honeytokens.md
Title: Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel description: Plant Azure Key Vault honeytoken keys and secrets, and monitor them with Microsoft Sentinel.-+ Previously updated : 11/09/2021- Last updated : 01/09/2023+ # Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)
sentinel Mssp Protect Intellectual Property https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mssp-protect-intellectual-property.md
Title: Protecting managed security service provider (MSSPs) intellectual property in Microsoft Sentinel | Microsoft Docs
+ Title: Protecting managed security service provider (MSSPs) intellectual property in Microsoft Sentinel
description: Learn about how managed security service providers (MSSPs) can protect the intellectual property they've created in Microsoft Sentinel.-+ - Previously updated : 11/09/2021- Last updated : 01/09/2023+ # Protecting MSSP intellectual property in Microsoft Sentinel
sentinel Normalization Schema Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-audit.md
Fields that appear in the table are common to all ASIM schemas. Any of guideline
||--||--| | <a name="actoruserid"></a>**ActorUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the Actor. For more information, and for alternative fields for other IDs, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12-1-4141952679-1282074057-627758481-2916039507` | | **ActorScope** | Optional | String | The scope, such as Azure AD Domain Name, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. or more information and list of allowed values, see [UserScope](normalization-about-schemas.md#userscope) in the [Schema Overview article](normalization-about-schemas.md).|
-| **ActorScopeId** | Optional | String | The scope ID, such as Azure AD Directory ID, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. or more information and list of allowed values, see [UserScopeId](normalization-about-schemas.md#userscopeid) in the [Schema Overview article](normalization-about-schemas.md).|
+| **ActorScopeId** | Optional | String | The scope ID, such as Azure AD Directory ID, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. For more information and a list of allowed values, see [UserScopeId](normalization-about-schemas.md#userscopeid) in the [Schema Overview article](normalization-about-schemas.md).|
| **ActorUserIdType**| Optional | UserIdType | The type of the ID stored in the [ActorUserId](#actoruserid) field. For more information and list of allowed values, see [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md).| | <a name="actorusername"></a>**ActorUsername** | Recommended | Username | The ActorΓÇÖs username, including domain information when available. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` | | **User** | Alias | | Alias to [ActorUsername](#actorusername) |
sentinel Normalization Schema Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-dns.md
The following list mentions fields that have specific guidelines for DNS events:
| **Field** | **Class** | **Type** | **Description** | | | | | |
-| **EventType** | Mandatory | Enumerated | Indicates the operation reported by the record. <br><br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `lookup`|
+| **EventType** | Mandatory | Enumerated | Indicates the operation reported by the record. <br><br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `Query`|
| **EventSubType** | Optional | Enumerated | Either `request` or `response`. <br><br>For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. | | <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Enumerated | For DNS events, this field provides the [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Notes**:<br>- IANA doesn't define the case for the values, so analytics must normalize the case.<br> - If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br>- If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.6**. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.7**. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. | | **Dvc** fields| - | - | For DNS events, device fields refer to the system that reports the DNS event. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` | | **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` | | **SrcRiskLevel** | Optional | Integer | The risk level associated with the source. The value should be adjusted to a range of `0` to `100`, with `0` for benign and `100` for a high risk.<br><br>Example: `90` |
+| **SrcOriginalRiskLevel** | Optional | String | The risk level associated with the source, as reported by the reporting device. <br><br>Example: `Suspicious` |
| <a name="srchostname"></a>**SrcHostname** | Recommended | String | The source device hostname, excluding domain information.<br><br>Example: `DESKTOP-1282V4D` | | **Hostname** | Alias | | Alias to [SrcHostname](#srchostname) | |<a name="srcdomain"></a> **SrcDomain** | Recommended | String | The domain of the source device.<br><br>Example: `Contoso` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcdvcscope"></a>**SrcDvcScope** | Optional | String | The cloud platform scope the device belongs to. **SrcDvcScope** map to a subscription ID on Azure and to an account ID on AWS. | | **SrcDvcIdType** | Optional | Enumerated | The type of [SrcDvcId](#srcdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list, and store the others in the **SrcDvcAzureResourceId** and **SrcDvcMDEid**, respectively.<br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. | | **SrcDeviceType** | Optional | Enumerated | The type of the source device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
+| <a name = "srcdescription"></a>**SrcDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
### Source user fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
|-|-||-| | <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. For more information, and for alternative fields for additional IDs, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12-1-4141952679-1282074057-627758481-2916039507` | | **SrcUserScope** | Optional | String | The scope, such as Azure AD tenant, in which [SrcUserId](#srcuserid) and [SrcUsername](#srcusername) are defined. or more information and list of allowed values, see [UserScope](normalization-about-schemas.md#userscope) in the [Schema Overview article](normalization-about-schemas.md).|
+| **SrcUserScopeId** | Optional | String | The scope ID, such as Azure AD Directory ID, in which [SrcUserId](#srcuserid) and [SrcUsername](#srcusername) are defined. For more information and a list of allowed values, see [UserScopeId](normalization-about-schemas.md#userscopeid) in the [Schema Overview article](normalization-about-schemas.md).|
| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | UserIdType | The type of the ID stored in the [SrcUserId](#srcuserid) field. For more information and list of allowed values, see [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md).| | <a name="srcusername"></a>**SrcUsername** | Optional | Username | The source username, including domain information when available. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` | | <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the user name stored in the [SrcUsername](#srcusername) field. For more information, and list of allowed values, see [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Example: `Windows` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **DstGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` | | **DstGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` | | **DstRiskLevel** | Optional | Integer | The risk level associated with the destination. The value should be adjusted to a range of 0 to 100, with 0 being benign and 100 being a high risk.<br><br>Example: `90` |
+| **DstOriginalRiskLevel** | Optional | String | The risk level associated with the destination, as reported by the reporting device. <br><br>Example: `Malicious` |
| **DstPortNumber** | Optional | Integer | Destination Port number.<br><br>Example: `53` | | <a name="dsthostname"></a>**DstHostname** | Optional | String | The destination device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D`<br><br>**Note**: This value is mandatory if [DstIpAddr](#dstipaddr) is specified. | | <a name="dstdomain"></a>**DstDomain** | Optional | String | The domain of the destination device.<br><br>Example: `Contoso` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dstdvcscope"></a>**DstDvcScope** | Optional | String | The cloud platform scope the device belongs to. **DstDvcScope** map to a subscription ID on Azure and to an account ID on AWS. | | **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEidIf`<br><br>If multiple IDs are available, use the first one from the list above, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.| | **DstDeviceType** | Optional | Enumerated | The type of the destination device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
+| <a name = "dstdescription"></a>**DstDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
### DNS specific fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name=responsecodename></a>**DnsResponseCodeName** | Alias | | Alias to [EventResultDetails](#eventresultdetails) | | **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`| | <a name="transactionidhex"></a>**TransactionIdHex** | Recommended | String | The DNS query unique ID as assigned by the DNS client, in hexadecimal format. Note that this value is part of the DNS protocol and different from [DnsSessionId](#dnssessionid), the network layer session ID, typically assigned by the reporting device. |
-| **NetworkProtocol** | Optional | Enumerated | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. <br><br>Example: `UDP`|
+| <a name="networkprotocol"></a>**NetworkProtocol** | Optional | Enumerated | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. <br><br>Example: `UDP`|
+| **NetworkProtocolVersion** | Optional | Enumerated | The version of [NetworkProtocol](#networkprotocol). When using it to distinguish between IP version, use the values `IPv4` and `IPv6`. |
| **DnsQueryClass** | Optional | Integer | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, and therefore this field is less valuable.| | **DnsQueryClassName** | Optional | String | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, and therefore this field is less valuable.<br><br>Example: `IN`|
-| <a name=flags></a>**DnsFlags** | Optional | List of strings | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing, and normalization aren't required. Microsoft Sentinel can use an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response). <br><br>Example: `["DR"]`|
+| <a name=flags></a>**DnsFlags** | Optional | String | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing, and normalization aren't required. Microsoft Sentinel can use an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response). <br><br>Example: `["DR"]`|
| <a name="dnsnetworkduration"></a>**DnsNetworkDuration** | Optional | Integer | The amount of time, in milliseconds, for the completion of DNS request.<br><br>Example: `1500` | | **Duration** | Alias | | Alias to [DnsNetworkDuration](#dnsnetworkduration) | | **DnsFlagsAuthenticated** | Optional | Boolean | The DNS `AD` flag, which is related to DNSSEC, indicates in a response that all data included in the answer and authority sections of the response have been verified by the server according to the policies of that server. For more information, see [RFC 3655 Section 6.1](https://tools.ietf.org/html/rfc3655#section-6.1) for more information. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **DnsFlagsTruncated** | Optional | Boolean | The DNS `TC` flag indicates that a response was truncated as it exceeded the maximum response size. | | **DnsFlagsZ** | Optional | Boolean | The DNS `Z` flag is a deprecated DNS flag, which might be reported by older DNS systems. | |<a name="dnssessionid"></a>**DnsSessionId** | Optional | string | The DNS session identifier as reported by the reporting device. This value is different from [TransactionIdHex](#transactionidhex), the DNS query unique ID as assigned by the DNS client.<br><br>Example: `EB4BFA28-2EAD-4EF7-BC8A-51DF4FDF5B55` |
-| **SessionId** | Alias | String | Alias to [DnsSessionId](#dnssessionid) |
+| **SessionId** | Alias | | Alias to [DnsSessionId](#dnssessionid) |
| **DnsResponseIpCountry** | Optional | Country | The country associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `USA` | | **DnsResponseIpRegion** | Optional | Region | The region, or state, within a country associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Vermont` | | **DnsResponseIpCity** | Optional | City | The city associated with one of the IP addresses in the DNS response. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` |
The following fields are used to represent an inspection, which a DNS security d
| Field | Class | Type | Description | |-|-||-| | <a name=UrlCategory></a>**UrlCategory** | Optional | String | A DNS event source may also look up the category of the requested Domains. The field is called **UrlCategory** to align with the Microsoft Sentinel network schema. <br><br>**DomainCategory** is added as an alias that's fitting to DNS. <br><br>Example: `Educational \\ Phishing` |
-| **DomainCategory** | Optional | Alias | Alias to [UrlCategory](#UrlCategory). |
+| **DomainCategory** | Alias | | Alias to [UrlCategory](#UrlCategory). |
+| <a name="networkrulename"></a>**NetworkRuleName** | Optional | String | The name or ID of the rule which identified the threat.<br><br> Example: `AnyAnyDrop` |
+| <a name="networkrulenumber"></a>**NetworkRuleNumber** | Optional | Integer | The number of the rule which identified the threat.<br><br>Example: `23` |
+| **Rule** | Mandatory | String | Either the value of [NetworkRuleName](#networkrulename) or the value of [NetworkRuleNumber](#networkrulenumber). If the value of [NetworkRuleNumber](#networkrulenumber) is used, the type should be converted to string. |
+| **ThreatId** | Optional | String | The ID of the threat or malware identified in the network session.<br><br>Example: `Tr.124` |
| **ThreatCategory** | Optional | String | If a DNS event source also provides DNS security, it may also evaluate the DNS event. For example, it can search for the IP address or domain in a threat intelligence database, and assign the domain or IP address with a Threat Category. | | **ThreatIpAddr** | Optional | IP Address | An IP address for which a threat was identified. The field [ThreatField](#threatfield) contains the name of the field **ThreatIpAddr** represents. If a threat is identified in the [Domain](#domain) field, this field should be empty. | | <a name="threatfield"></a>**ThreatField** | Optional | Enumerated | The field for which a threat was identified. The value is either `SrcIpAddr`, `DstIpAddr`, `Domain`, or `DnsResponseName`. |
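
For example, the following KQL query, a minimal sketch, surfaces DNS events for which a security device reported a threat. It assumes that the ASIM DNS unifying parser (`_Im_Dns`) is available in your workspace and that the source populates the inspection fields.

```kusto
// List DNS events that a DNS security device flagged with a threat category.
// Assumes the ASIM DNS unifying parser (_Im_Dns) is deployed and the inspection fields are populated.
_Im_Dns(starttime=ago(1d), endtime=now())
| where isnotempty(ThreatCategory)
| project TimeGenerated, SrcIpAddr, DnsQuery, ThreatCategory, ThreatField, EventResultDetails
```
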
The changes in version 0.1.5 of the schema are:
The changes in version 0.1.6 of the schema are: - Added the fields `DnsResponseIpCountry`, `DnsResponseIpRegion`, `DnsResponseIpCity`, `DnsResponseIpLatitude`, and `DnsResponseIpLongitude`.
+The changes in version 0.1.7 of the schema are:
+- Added the fields `SrcDescription`, `SrcOriginalRiskLevel`, `DstDescription`, `DstOriginalRiskLevel`, `SrcUserScopeId`, `NetworkProtocolVersion`, `Rule`, `NetworkRuleName`, `NetworkRuleNumber`, and `ThreatId`.
+ ## Source-specific discrepancies
sentinel Notebook Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebook-get-started.md
Title: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel description: Walk through the Getting Started Guide For Microsoft Sentinel ML Notebooks to learn the basics of Microsoft Sentinel notebooks with MSTICPy and queries.--++ Previously updated : 11/09/2021 Last updated : 01/09/2023 # Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel
sentinel Notebooks Hunt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-hunt.md
Previously updated : 04/04/2022 Last updated : 01/05/2023 #Customer intent: As a security analyst, I want to deploy and launch a Jupyter notebook to hunt for security threats.
To create your workspace, select one of the following tabs, depending on whether
|**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.| |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.| |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
- | | |
-1. On the **Networking** tab, select **Public endpoint (all networks)**.
+1. On the **Networking** tab, select **Enable public access from all networks**.
Define any relevant settings in the **Advanced** or **Tags** tabs, and then select **Review + create**.
The steps in this procedure reference specific articles in the Azure Machine Lea
|**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.| |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.| |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
- | | |
-1. On the **Networking** tab, select **Private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
+1. On the **Networking** tab, select **Disable public access and use private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
:::image type="content" source="media/notebooks/create-private-endpoint.png" alt-text="Screenshot of the Create private endpoint page in Microsoft Sentinel." lightbox="media/notebooks/create-private-endpoint.png":::
If you have multiple notebooks, make sure to select a default AML workspace to u
After you've created an AML workspace, start launching your notebooks in your Azure ML workspace, from Microsoft Sentinel.
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Notebooks**, where you can see notebooks that Microsoft Sentinel provides.
+1. From the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Notebooks** > **Templates**, where you can see notebooks that Microsoft Sentinel provides.
1. Select a notebook to view its description, required data types, and data sources.
- When you've found the notebook you want to use, select **Save notebook** to clone it into your own workspace.
+ When you've found the notebook you want to use, select **Create from template** and **Save** to clone it into your own workspace.
Edit the name as needed. If the notebook already exists in your workspace, you can overwrite the existing notebook or create a new one.
sentinel Notebooks Msticpy Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-msticpy-advanced.md
Title: Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel | Microsoft Docs
+ Title: Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel
description: Learn about advanced configurations available for Jupyter notebooks with MSTICPy when working in Microsoft Sentinel.--++ Previously updated : 11/09/2021- Last updated : 01/09/2023 # Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks.md
description: Learn about Jupyter notebooks with the Microsoft Sentinel hunting c
- Previously updated : 04/04/2022 Last updated : 01/05/2023 # Use Jupyter notebooks to hunt for security threats
sentinel Partner Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/partner-integrations.md
Title: Partner integrations with Microsoft Sentinel description: This article describes best practices for creating your own integrations with Microsoft Sentinel.--++ Previously updated : 11/15/2021 Last updated : 01/09/2023 # Best practices for partners integrating with Microsoft Sentinel
sentinel Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/powerbi.md
Title: Create a Power BI report from Microsoft Sentinel data description: Learn how to create a Power BI report using an exported query from Microsoft Sentinel Log Analytics. Share your report with others in the Power BI service and a Teams channel.--++ Previously updated : 11/09/2021- Last updated : 01/09/2023 # Tutorial: Create a Power BI report from Microsoft Sentinel data
sentinel Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prerequisites.md
Title: Prerequisites for deploying Microsoft Sentinel description: Learn about pre-deployment activities and prerequisites for deploying Microsoft Sentinel.--++ Previously updated : 11/09/2021- Last updated : 01/09/2023 # Pre-deployment activities and prerequisites for deploying Microsoft Sentinel
sentinel Purview Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/purview-solution.md
Title: Integrate Microsoft Sentinel and Microsoft Purview | Microsoft Docs
+ Title: Integrate Microsoft Sentinel and Microsoft Purview
description: This tutorial describes how to use the **Microsoft Sentinel** data connector and solution for **Microsoft Purview** to enable data sensitivity insights, create rules to monitor when classifications have been detected, and get an overview about data found by Microsoft Purview, and where sensitive data resides in your organization.-+ Previously updated : 02/08/2022- Last updated : 01/09/2023+ # Tutorial: Integrate Microsoft Sentinel and Microsoft Purview (Public Preview)
sentinel Resource Context Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resource-context-rbac.md
Title: Manage access to Microsoft Sentinel data by resource | Microsoft Docs
+ Title: Manage access to Microsoft Sentinel data by resource
description: This article explains you can manage access to Microsoft Sentinel data by the resources a user can access. Managing access by resource enables you to provide access to specific data only, without the entire Microsoft Sentinel experience. This method is also known as resource-context RBAC.-+ Previously updated : 11/09/2021-- Last updated : 01/09/2023+ # Manage access to Microsoft Sentinel data by resource
sentinel Sample Workspace Designs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sample-workspace-designs.md
Title: Sample Microsoft Sentinel workspace designs | Microsoft Docs
+ Title: Sample Microsoft Sentinel workspace designs
description: Learn from samples of Microsoft Sentinel architecture designs with multiple tenants, clouds or regions.--++ Previously updated : 11/09/2021- Last updated : 01/09/2023 # Microsoft Sentinel sample workspace designs
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Track your SAP solution deployment journey through this series of articles:
1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure data connector to use SNC](configure-snc.md)-
+ - [Select SAP ingestion profiles](select-ingestion-profiles.md)
## Deploy SAP security content
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Follow your deployment journey through this series of articles, in which you'll
| **4. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) | | **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md) | **6. Microsoft Sentinel Solution for SAP** | [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md) |
-| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)
+| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)<br>- [Select SAP ingestion profiles](select-ingestion-profiles.md) |
## Next steps
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
Track your SAP solution deployment journey through this series of articles:
1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure data connector to use SNC](configure-snc.md)
+ - [Select SAP ingestion profiles](select-ingestion-profiles.md)
## Configure watchlists
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
This article discusses the installation of the following CRs:
|CR |Required/optional |Description | ||||
-|NPLK900271 |Required |This CR creates and configures a role. Alternatively, you can can load the authorizations directly from a file. [Review how to create and configure a role](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required). |
+|NPLK900271 |Required |This CR creates and configures a role. Alternatively, you can load the authorizations directly from a file. [Review how to create and configure a role](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required). |
|NPLK900201 or NPLK900202 |Optional |[Retrieves additional information from SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#retrieve-additional-information-from-sap-optional). You select one of these CRs according to your SAP version. | ## Prerequisites
Track your SAP solution deployment journey through this series of articles:
1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure data connector to use SNC](configure-snc.md)
+ - [Select SAP ingestion profiles](select-ingestion-profiles.md)
To deploy the CRs, follow the steps outlined below. The steps below may differ according to the version of the SAP system and should be considered for demonstration purposes only.
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
Track your SAP solution deployment journey through this series of articles:
1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure data connector to use SNC](configure-snc.md)
+ - [Select SAP ingestion profiles](select-ingestion-profiles.md)
## Table of prerequisites
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
description: Learn how to troubleshoot specific issues that may occur in your Mi
- Previously updated : 11/09/2021 Last updated : 01/09/2023 # Troubleshooting your Microsoft Sentinel Solution for SAP deployment
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it in using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control | | **SAP - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration | | **SAP - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| **SAP - (Preview) File Downloaded From a Malicious IP Address** | Identifies download of a file from an SAP system using an IP address known to be malicious. Malicious IP addresses are obtained from [threat intelligence services](../understand-threat-intelligence.md). | Download a file from a malicious IP address. <br><br>**Data sources**: SAP Security Audit Log, Threat Intelligence | Exfiltration |
+| **SAP - (Preview) Data Exported from a Production System using a Transport** | Identifies data export from a production system using a transport. Transports are used in development systems and are similar to pull requests. This alert rule triggers incidents with medium severity when a transport that includes data from any table is released from a production system. The rule creates a high severity incident when the export includes data from a sensitive table. | Release a transport from a production system. <br><br>**Data sources**: SAP CR log, [SAP - Sensitive Tables](#tables) | Exfiltration |
+| **SAP - (Preview) Sensitive Data Saved into a USB Drive** | Identifies export of SAP data via files. The rule checks for data saved into a recently mounted USB drive in proximity to an execution of a sensitive transaction, a sensitive program, or direct access to a sensitive table. | Export SAP data via files and save into a USB drive. <br><br>**Data sources**: SAP Security Audit Log, DeviceFileEvents (Microsoft Defender for Endpoint), [SAP - Sensitive Tables](#tables), [SAP - Sensitive Transactions](#transactions), [SAP - Sensitive Programs](#programs) | Exfiltration |
+| **SAP - (Preview) Printing of Potentially Sensitive data** | Identifies a request or actual printing of potentially sensitive data. Data is considered sensitive if the user obtains the data as part of a sensitive transaction, execution of a sensitive program, or direct access to a sensitive table. | Print or request to print sensitive data. <br><br>**Data sources**: SAP Security Audit Log, SAP Spool logs, [SAP - Sensitive Tables](#tables), [SAP - Sensitive Programs](#programs) | Exfiltration |
+| **SAP - (Preview) High Volume of Potentially Sensitive Data Exported** | Identifies export of a high volume of data via files in proximity to an execution of a sensitive transaction, a sensitive program, or direct access to sensitive table. | Export high volume of data via files. <br><br>**Data sources**: SAP Security Audit Log, [SAP - Sensitive Tables](#tables), [SAP - Sensitive Transactions](#transactions), [SAP - Sensitive Programs](#programs) | Exfiltration |
### Built-in SAP analytics rules for persistency
sentinel Select Ingestion Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/select-ingestion-profiles.md
+
+ Title: Select the SAP ingestion profile for your Microsoft Sentinel for SAP solution
+description: This article shows you how to select the profile for your Microsoft Sentinel for SAP solution.
+++ Last updated : 01/03/2023++
+# Select SAP ingestion profile
+
+This article explains how to select the profile for your SAP solution. We recommend that you select an ingestion profile that maximizes your security coverage while meeting your budget requirements.
+
+Because SAP is a business application, and business processes tend to be seasonal, it may be difficult to predict the overall volume of logs over time. To address this issue, we recommend that you keep all logs on for two weeks, and learn from the observed activity. Revisit this assessment later, for example during business activity peaks or major landscape transformations.
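+
+To learn from the observed activity, you can review the ingestion volume per table over the learning period. The following KQL query is a minimal sketch that uses the Azure Monitor `Usage` table; the table-name filters are illustrative and should be adjusted to the custom log tables that the data connector agent creates in your workspace.
+
+```kusto
+// Estimate daily billable ingestion (MB) for SAP-related tables over a two-week learning period.
+// The DataType filters are illustrative; adjust them to the SAP tables in your workspace.
+Usage
+| where TimeGenerated > ago(14d)
+| where IsBillable == true
+| where DataType startswith "ABAP" or DataType startswith "SAP"
+| summarize VolumeMB = sum(Quantity) by DataType, bin(TimeGenerated, 1d)
+| order by TimeGenerated asc
+```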
+
+The following sections show typical customer configuration profiles for SAP log ingestion.
+
+## Default profile (recommended)
+
+This profile includes complete coverage for:
+
+- Built-in analytics
+- The SAP user authorization master data tables, with users and privilege information
+- The ability to track changes and activities on the SAP landscape. This profile provides more logging information to allow for post-breach investigations and extended hunting abilities.
+
+### systemconfig.ini file
+
+```
+[Logs Activation Status]
+# ABAP RFC Logs - Retrieved by using RFC interface
+ABAPAuditLog = True
+ABAPJobLog = True
+ABAPSpoolLog = True
+ABAPSpoolOutputLog = True
+ABAPChangeDocsLog = True
+ABAPAppLog = True
+ABAPWorkflowLog = True
+ABAPCRLog = True
+ABAPTableDataLog = False
+# ABAP SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+ABAPFilesLogs = False
+SysLog = False
+ICM = False
+WP = False
+GW = False
+# Java SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+JAVAFilesLogs = False
+[ABAP Table Selector]
+AGR_TCODES_FULL = True
+USR01_FULL = True
+USR02_FULL = True
+USR02_INCREMENTAL = True
+AGR_1251_FULL = True
+AGR_USERS_FULL = True
+AGR_USERS_INCREMENTAL = True
+AGR_PROF_FULL = True
+UST04_FULL = True
+USR21_FULL = True
+ADR6_FULL = True
+ADCP_FULL = True
+USR05_FULL = True
+USGRP_USER_FULL = True
+USER_ADDR_FULL = True
+DEVACCESS_FULL = True
+AGR_DEFINE_FULL = True
+AGR_DEFINE_INCREMENTAL = True
+PAHI_FULL = True
+AGR_AGRS_FULL = True
+USRSTAMP_FULL = True
+USRSTAMP_INCREMENTAL = True
+AGR_FLAGS_FULL = True
+AGR_FLAGS_INCREMENTAL = True
+SNCSYSACL_FULL = False
+USRACL_FULL = False
+```
+
+## Detection focused profile
+
+This profile includes the core security logs of the SAP landscape that are required for most of the analytics rules to perform well. Post-breach investigations and hunting capabilities are limited.
+
+### systemconfig.ini file
+
+```
+[Logs Activation Status]
+# ABAP RFC Logs - Retrieved by using RFC interface
+ABAPAuditLog = True
+ABAPJobLog = False
+ABAPSpoolLog = False
+ABAPSpoolOutputLog = False
+ABAPChangeDocsLog = True
+ABAPAppLog = False
+ABAPWorkflowLog = False
+ABAPCRLog = True
+ABAPTableDataLog = False
+# ABAP SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+ABAPFilesLogs = False
+SysLog = False
+ICM = False
+WP = False
+GW = False
+# Java SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+JAVAFilesLogs = False
+[ABAP Table Selector]
+AGR_TCODES_FULL = True
+USR01_FULL = True
+USR02_FULL = True
+USR02_INCREMENTAL = True
+AGR_1251_FULL = True
+AGR_USERS_FULL = True
+AGR_USERS_INCREMENTAL = True
+AGR_PROF_FULL = True
+UST04_FULL = True
+USR21_FULL = True
+ADR6_FULL = True
+ADCP_FULL = True
+USR05_FULL = True
+USGRP_USER_FULL = True
+USER_ADDR_FULL = True
+DEVACCESS_FULL = True
+AGR_DEFINE_FULL = True
+AGR_DEFINE_INCREMENTAL = True
+PAHI_FULL = False
+AGR_AGRS_FULL = True
+USRSTAMP_FULL = True
+USRSTAMP_INCREMENTAL = True
+AGR_FLAGS_FULL = True
+AGR_FLAGS_INCREMENTAL = True
+SNCSYSACL_FULL = False
+USRACL_FULL = False
+```
+## Minimal profile
+
+The SAP Security Audit Log is the most important source of data the Microsoft Sentinel Solution for SAP uses to analyze activities on the SAP landscape. Enabling this log is the minimal requirement to provide any security coverage.
+
+### systemconfig.ini file
+
+```
+[Logs Activation Status]
+# ABAP RFC Logs - Retrieved by using RFC interface
+ABAPAuditLog = True
+ABAPJobLog = False
+ABAPSpoolLog = False
+ABAPSpoolOutputLog = False
+ABAPChangeDocsLog = False
+ABAPAppLog = False
+ABAPWorkflowLog = False
+ABAPCRLog = False
+ABAPTableDataLog = False
+# ABAP SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+ABAPFilesLogs = False
+SysLog = False
+ICM = False
+WP = False
+GW = False
+# Java SAP Control Logs - Retrieved by using SAP Control interface and OS Login
+JAVAFilesLogs = False
+[ABAP Table Selector]
+AGR_TCODES_FULL = False
+USR01_FULL = False
+USR02_FULL = False
+USR02_INCREMENTAL = False
+AGR_1251_FULL = False
+AGR_USERS_FULL = False
+AGR_USERS_INCREMENTAL = False
+AGR_PROF_FULL = False
+UST04_FULL = False
+USR21_FULL = False
+ADR6_FULL = False
+ADCP_FULL = False
+USR05_FULL = False
+USGRP_USER_FULL = False
+USER_ADDR_FULL = False
+DEVACCESS_FULL = False
+AGR_DEFINE_FULL = False
+AGR_DEFINE_INCREMENTAL = False
+PAHI_FULL = False
+AGR_AGRS_FULL = False
+USRSTAMP_FULL = False
+USRSTAMP_INCREMENTAL = False
+AGR_FLAGS_FULL = False
+AGR_FLAGS_INCREMENTAL = False
+SNCSYSACL_FULL = False
+USRACL_FULL = False
+```
+## Next steps
+
+Learn more about the Microsoft Sentinel Solution for SAP:
+
+- [Deploy Microsoft Sentinel Solution for SAP](deployment-overview.md)
+- [Prerequisites for deploying Microsoft Sentinel Solution for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel Solution for SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel Solution for SAP data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Solution for SAP: security content reference](sap-solution-security-content.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
+
+For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Sentinel Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solution.md
+
+ Title: Build and monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
+description: Install and learn how to use the Microsoft Sentinel Zero Trust (TIC3.0) solution for an automated visualization of Zero Trust principles, cross-walked to the Trusted Internet Connections framework.
Last updated : 01/09/2023+++++
+ - zerotrust-services
++
+# Build and monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
+
+The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** enables governance and compliance teams to design, build, monitor, and respond to Zero Trust (TIC 3.0) requirements. This solution includes a workbook, analytics rules, and a playbook, which provide an automated visualization of Zero Trust principles, cross-walked to the Trusted Internet Connections framework, helping organizations to monitor configurations over time.
+
+This article describes how to install and use the Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** in your Microsoft Sentinel workspace.
+
+While only Microsoft Sentinel is required to get started, the solution is enhanced by integrations with other Microsoft Services, such as:
+
+- [Microsoft 365 Defender](https://www.microsoft.com/microsoft-365/security/microsoft-365-defender)
+- [Microsoft Information Protection](https://azure.microsoft.com/services/information-protection/)
+- [Azure Active Directory](https://azure.microsoft.com/services/active-directory/)
+- [Microsoft Defender for Cloud](https://azure.microsoft.com/products/defender-for-cloud/)
+- [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender)
+- [Microsoft Defender for Identity](https://www.microsoft.com/microsoft-365/security/identity-defender)
+- [Microsoft Defender for Cloud Apps](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security)
+- [Microsoft Defender for Office 365](https://www.microsoft.com/microsoft-365/security/office-365-defender)
+
+For more information, see [Guiding principles of Zero Trust](/azure/security/integrated/zero-trust-overview#guiding-principles-of-zero-trust).
+
+> [!NOTE]
+> Microsoft Sentinel solutions are sets of bundled content, pre-configured for a specific set of data. For more information, see [Microsoft Sentinel solutions documentation](sentinel-solutions.md).
+>
+
+## The Zero Trust solution and the TIC 3.0 framework
+
+Zero Trust and TIC 3.0 are not the same, but they share many common themes and together provide a common story. The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** offers detailed crosswalks between Microsoft Sentinel and the Zero Trust model with the TIC 3.0 framework. These crosswalks help users to better understand the overlaps between the two.
+
+While the Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** provides best practice guidance, Microsoft does not guarantee nor imply compliance. All Trusted Internet Connection (TIC) requirements, validations, and controls are governed by the [Cybersecurity & Infrastructure Security Agency](https://www.cisa.gov/trusted-internet-connections).
+
+The **Zero Trust (TIC 3.0)** solution provides visibility and situational awareness for control requirements delivered with Microsoft technologies in predominantly cloud-based environments. Customer experience will vary by user, and some panes may require additional configurations and query modification for operation.
+
+Recommendations do not imply coverage of the respective controls; they are often only one of several possible courses of action for approaching requirements, and the right course of action is unique to each customer. Recommendations should be considered a starting point for planning full or partial coverage of respective control requirements.
+
+The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** is useful for any of the following users and use cases:
+
+- **Security governance, risk, and compliance professionals**, for compliance posture assessment and reporting
+- **Engineers and architects**, who need to design Zero Trust and TIC 3.0-aligned workloads
+- **Security analysts**, for alert and automation building
+- **Managed security service providers (MSSPs)** for consulting services
+- **Security managers**, who need to review requirements, analyze reporting, and evaluate capabilities
+
+## Prerequisites
+
+Before installing the **Zero Trust (TIC 3.0)** solution, make sure you have the following prerequisites:
+
+- **Onboard Microsoft services**: Make sure that you have both [Microsoft Sentinel](quickstart-onboard.md) and [Microsoft Defender for Cloud](/azure/defender-for-cloud/get-started) enabled in your Azure subscription.
+
+- **Microsoft Defender for Cloud requirements**: In Microsoft Defender for Cloud:
+
+ - Add required regulatory standards to your dashboard. Make sure to add both the *Azure Security Benchmark* and *NIST SP 800-53 R5 Assessments* to your Microsoft Defender for Cloud dashboard. For more information, see [add a regulatory standard to your dashboard](/azure/security-center/update-regulatory-compliance-packages?WT.mc_id=Portal-fx#add-a-regulatory-standard-to-your-dashboard) in the Microsoft Defender for Cloud documentation.
+
+ - Continuously export Microsoft Defender for Cloud data to your Log Analytics workspace. For more information, see [Continuously export Microsoft Defender for Cloud data](/azure/defender-for-cloud/continuous-export?tabs=azure-portal).
+
+- **Required user permissions**. To install the **Zero Trust (TIC 3.0)** solution, you must have access to your Microsoft Sentinel workspace with [Security Reader](/azure/active-directory/roles/permissions-reference#security-reader) permissions.
+
+## Install the Zero Trust (TIC 3.0) solution
+
+**To deploy the *Zero Trust (TIC 3.0)* solution from the Azure portal**:
+
+1. In Microsoft Sentinel, select **Content hub** and locate the **Zero Trust (TIC 3.0)** solution.
+
+1. At the bottom-right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the related security content that will be deployed.
+
+ When you're done, select **Review + Create** to install the solution.
+
+For more information, see [Deploy out-of-the-box content and solutions](sentinel-solutions-deploy.md).
+
+## Sample usage scenario
+
+The following sections show how a security operations analyst could use the resources deployed with the **Zero Trust (TIC 3.0)** solution to review requirements, explore queries, configure alerts, and implement automation.
+
+After [installing](#install-the-zero-trust-tic-30-solution) the **Zero Trust (TIC 3.0)** solution, use the workbook, analytics rules, and playbook deployed to your Microsoft Sentinel workspace to manage Zero Trust in your network.
+
+### Visualize Zero Trust data
+
+1. Navigate to the Microsoft Sentinel **Workbooks** > **Zero Trust (TIC 3.0)** workbook, and select **View saved workbook**.
+
+ In the **Zero Trust (TIC 3.0)** workbook page, select the TIC 3.0 capabilities you want to view. For this procedure, select **Intrusion Detection**.
+
+ > [!TIP]
+ > Use the **Guide** toggle at the top of the page to display or hide recommendations and guide panes. Make sure that the correct details are selected in the **Subscription**, **Workspace**, and **TimeRange** options so that you can view the specific data you want to find.
+ >
+
+1. **Review the control cards displayed**. For example, scroll down to view the **Adaptive Access Control** card:
+
+ :::image type="content" source="media/sentinel-workbook/review-query-output-sample.png" alt-text="Screenshot of the Adaptive Access Control card.":::
+
+ > [!TIP]
+ > Use the **Guides** toggle at the top left to view or hide recommendations and guide panes. For example, these may be helpful when you first access the workbook, but unnecessary once you've understood the relevant concepts.
+ >
+
+1. **Explore queries**. For example, at the top right of the **Adaptive Access Control** card, select the **:** *More* button, and then select the :::image type="icon" source="media/sentinel-workbook/icon-open-in-logs.png" border="false"::: **Open the last run query in the Logs view** option.
+
+ The query is opened in the Microsoft Sentinel **Logs** page:
+
+ :::image type="content" source="media/sentinel-workbook/explore-query-logs.png" alt-text="Screenshot of the selected query in the Microsoft Sentinel Logs page.":::
+
+### Configure Zero Trust-related alerts
+
+In Microsoft Sentinel, navigate to the **Analytics** area. View out-of-the-box analytics rules deployed with the **Zero Trust (TIC 3.0)** solution by searching for **TIC3.0**.
+
+By default, the **Zero Trust (TIC 3.0)** solution installs a set of analytics rules that are configured to monitor Zero Trust (TIC3.0) posture by control family, and you can customize thresholds for alerting compliance teams to changes in posture.
+
+For example, if your workload's resiliency posture falls below a specified percentage in a week, Microsoft Sentinel generates an alert that details the respective policy status (pass/fail), the assets identified, and the last assessment time, and provides deep links to Microsoft Defender for Cloud for remediation actions.
+
+ Update the rules as needed or configure a new one:
++
+For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
+
+### Respond with SOAR
+
+In Microsoft Sentinel, navigate to the **Automation** > **Active playbooks** tab, and locate the **Notify-GovernanceComplianceTeam** playbook.
+
+Use this playbook to automatically monitor CMMC alerts, and notify the governance compliance team with relevant details via both email and Microsoft Teams messages. Modify the playbook as needed:
++
+For more information, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md).
+
+## Frequently asked questions
+
+### Are custom views and reports supported?
+
+Yes. You can customize your **Zero Trust (TIC 3.0)** workbook to view data by subscription, workspace, time, control family, or maturity level parameters, and you can export and print your workbook.
+
+For more information, see [Use Azure Monitor workbooks to visualize and monitor your data](monitor-your-data.md).
+
+### Are additional products required?
+
+Both Microsoft Sentinel and Microsoft Defender for Cloud are [required](#prerequisites).
+
+Aside from these services, each control card is based on data from multiple services, depending on the types of data and visualizations being shown in the card. Over 25 Microsoft services provide enrichment for the **Zero Trust (TIC 3.0)** solution.
+
+### What should I do with panels with no data?
+
+Panels with no data provide a starting point for addressing Zero Trust and TIC 3.0 control requirements, including recommendations for addressing respective controls.
+
+### Are multiple subscriptions, clouds, and tenants supported?
+
+Yes. You can use workbook parameters, Azure Lighthouse, and Azure Arc to leverage the **Zero Trust (TIC 3.0)** solution across all of your subscriptions, clouds, and tenants.
+
+For more information, see [Use Azure Monitor workbooks to visualize and monitor your data](monitor-your-data.md) and [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md).
+
+### Is partner integration supported?
+
+Yes. Both workbooks and analytics rules are customizable for integrations with partner services.
+
+For more information, see [Use Azure Monitor workbooks to visualize and monitor your data](monitor-your-data.md) and [Surface custom event details in alerts](surface-custom-details-in-alerts.md).
+
+### Is this available in government regions?
+
+Yes. The **Zero Trust (TIC 3.0)** solution is in Public Preview and deployable to Commercial/Government regions. For more information, see [Cloud feature availability for commercial and US Government customers](/azure/security/fundamentals/feature-availability).
+
+### Which permissions are required to use this content?
+
+- [Microsoft Sentinel Contributor](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-contributor) users can create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
+
+- [Microsoft Sentinel Reader](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-reader) users can view data, incidents, workbooks, and other Microsoft Sentinel resources.
+
+For more information, see [Permissions in Microsoft Sentinel](roles.md).
+
+## Next steps
+
+For more information, see:
+
+- [Get Started with Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/)
+- [Visualize and monitor your data with workbooks](monitor-your-data.md)
+- [Microsoft Zero Trust Model](https://www.microsoft.com/security/business/zero-trust)
+- [Zero Trust Deployment Center](/security/zero-trust/?WT.mc_id=Portal-fx)
+
+Watch our videos:
+
+- [Demo: Microsoft Sentinel Zero Trust (TIC 3.0) Solution](https://www.youtube.com/watch?v=OVGgRIzAvCI)
+- [Microsoft Sentinel: Zero Trust (TIC 3.0) Workbook Demo](https://www.youtube.com/watch?v=RpDas8fXzdU)
+
+Read our blogs!
+
+- [Announcing the Microsoft Sentinel: Zero Trust (TIC3.0) Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-zero-trust-tic3-0-solution/ba-p/3031685)
+- [Building and monitoring Zero Trust (TIC 3.0) workloads for federal information systems with Microsoft Sentinel](https://devblogs.microsoft.com/azuregov/building-and-monitoring-zero-trust-tic-3-0-workloads-for-federal-information-systems-with-microsoft-sentinel/)
+- [Zero Trust: 7 adoption strategies from security leaders](https://www.microsoft.com/security/blog/2021/03/31/zero-trust-7-adoption-strategies-from-security-leaders/)
+- [Implementing Zero Trust with Microsoft Azure: Identity and Access Management (6 Part Series)](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)
sentinel Store Logs In Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/store-logs-in-azure-data-explorer.md
Title: Integrate Azure Data Explorer for long-term log retention | Microsoft Docs
+ Title: Integrate Azure Data Explorer for long-term log retention
description: Send Microsoft Sentinel logs to Azure Data Explorer for long-term retention to reduce data storage costs.-+ Previously updated : 11/09/2021-- Last updated : 01/09/2023+ # Integrate Azure Data Explorer for long-term log retention
sentinel Top Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/top-workbooks.md
Title: Commonly used Microsoft Sentinel workbooks | Microsoft Docs
+ Title: Commonly used Microsoft Sentinel workbooks
description: Learn about the most commonly used workbooks to use popular, out-of-the-box Microsoft Sentinel resources.-+ Previously updated : 11/09/2021-- Last updated : 01/09/2023+ # Commonly used Microsoft Sentinel workbooks
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
Title: Troubleshoot a connection between Microsoft Sentinel and a CEF or Syslog
description: Learn how to troubleshoot issues with your Microsoft Sentinel CEF or Syslog data connector. Previously updated : 11/09/2021 Last updated : 01/09/2023
sentinel Watchlists Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-queries.md
description: Use watchlists in searches or detection rules for Microsoft Sentine
Previously updated : 1/04/2022 Last updated : 01/05/2023 # Build queries or detection rules with watchlists in Microsoft Sentinel
To use a watchlist in search query, write a Kusto query that uses the _GetWatchl
1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace. 1. Under **Configuration**, select **Watchlist**. 1. Select the watchlist you want to use.
-1. Select **View in Log Analytics**.
+1. Select **View in Logs**.
:::image type="content" source="./media/watchlists-queries/sentinel-watchlist-queries-list.png" alt-text="Screenshot that shows how to use watchlists in queries." lightbox="./media/watchlists-queries/sentinel-watchlist-queries-list.png" :::
To use watchlists in analytics rules, create a rule using the _GetWatchlist('wat
1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace. 1. Under **Configuration**, select **Analytics**. 1. Select **Create** and the type of rule you want to create.
-1. On the **General**, enter the appropriate information.
+1. On the **General** tab, enter the appropriate information.
1. On the **Set rule logic** tab, under **Rule query**, use the `_GetWatchlist('<watchlist>')` function in the query. For example, let's say you have a watchlist named "ipwatchlist" that you created from a CSV file with the following values:
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists.md
description: Learn what watchlists are in Microsoft and when to use them.
- Previously updated : 02/07/2022 Last updated : 01/05/2023 # Use watchlists in Microsoft Sentinel
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## January 2023
+
+### Microsoft Purview Information Protection connector (Preview)
+
+With the new [Microsoft Purview Information Protection connector](connect-microsoft-purview.md), you can stream data from Microsoft Purview Information Protection (formerly Microsoft Information Protection or MIP) to Microsoft Sentinel. You can use the data ingested from the Microsoft Purview labeling clients and scanners to track, analyze, report on the data, and use it for compliance purposes.
+
+This connector replaces the Azure Information Protection (AIP) data connector, aligned with the retirement of the AIP analytics and audit logs public preview as of **March 31, 2023**.
+
+The new connector streams audit logs into the standardized
+`MicrosoftPurviewInformationProtection` table, which enhances the deprecated schema used by AIP with more fields and easier access to parameters. Data is gathered through the [Office Management API](/office/office-365-management-api/office-365-management-activity-api-schema), which uses a structured schema. Review the list of supported [audit log record types and activities](microsoft-purview-record-types-activities.md).
+ ## December 2022 - [Create and run playbooks on entities on-demand (Preview)](#create-and-run-playbooks-on-entities-on-demand-preview)
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
description: Learn about Service Connector internals, the architecture, the conn
-+ Previously updated : 05/03/2022 Last updated : 12/08/2022 # Service Connector internals
-Service Connector is an extension resource provider that provides an easy way to create and manage connections between services.
-- Support major databases, storage, real-time services, state, and secret stores that are used together with your cloud native application (the list is actively expanding).-- Configure network settings, authentication, and manage connection environment variables or properties by creating a service connection with just a single command or a few steps.-- Validate connections and find corresponding suggestions to fix a service connection.
+Service Connector is an Azure extension resource provider designed to provide a simple way to create and manage connections between Azure services.
+
+Service Connector offers the following features:
+
+- Lets you connect Azure services together with a single Azure CLI command or in a few steps using the Azure portal.
+- Supports an increasing number of databases, storage, real-time services, state, and secret stores that are used with your cloud native application.
+- Configures network settings, authentication, and manages connection environment variables or properties for you.
+- Validates connections and provides suggestions to fix faulty connections.
## Service connection overview
-Service connection is the key concept in the resource model of Service Connector. In Service Connector, a service connection represents an abstraction of the link between two services. The following properties are defined on service connection.
+The concept of *service connection* is a key concept in the resource model of Service Connector. A service connection represents an abstraction of the link between two services. Service connections have the following properties:
-| Property | Description |
-|--|--|
-| Connection Name | The unique name of the service connection. |
-| Source Service Type | Source services are usually Azure compute services. Service Connector functionalities can be found in supported compute services by extending these Azure compute service providers. |
+| Property | Description |
+||-|
+| Connection Name | The unique name of the service connection. |
+| Source Service Type | Source services are usually Azure compute services. These are the services you can connect to target services. Source services include Azure App Service, Azure Container Apps and Azure Spring Apps. |
| Target Service Type | Target services are backing services or dependency services that your compute services connect to. Service Connector supports various target service types including major databases, storage, real-time services, state, and secret stores. |
-| Client Type | Client type refers to your compute runtime stack, development framework, or specific client library type, which accepts the specific format of the connection environment variables or properties. |
-| Authentication Type | The authentication type used of service connection. It could be pure secret/connection string, Managed Identity, or Service Principal. |
+| Client Type | Client type refers to your compute runtime stack, development framework, or specific type of client library that accepts the specific format of the connection environment variables or properties. |
+| Authentication Type | The authentication type used for the service connection. It could be a secret/connection string, a managed identity, or a service principal. |
-You can create multiple service connections from one source service instance if your instance needs to connect multiple target resources. And the same target resource can be connected from multiple source instances. Service Connector will manage all connections in the properties of their source instance. It means that you can create, get, update, and delete the connections in the Azure portal or using CLI commands of the source service instance.
+Source services and target services support multiple simultaneous service connections, which means that you can connect each resource to multiple resources.
-Connections can be made across subscriptions or tenants. Source and target services can belong to different subscriptions or tenants. When you create a new service connection, the connection resource is in the same region as your compute service instance by default.
+Service Connector manages connections in the properties of the source instance. Creating, getting, updating, and deleting connections is done directly by opening the source service instance in the Azure portal or by using the CLI commands of the source service.
-## Create or update a service connection
+Connections can be made across subscriptions or tenants, meaning that source and target services can belong to different subscriptions or tenants. When you create a new service connection, the connection resource is created in the same region as your compute service instance by default.
-Service Connector will run multiple tasks while creating or updating a connection, including:
+## Service connection creation and update
-- Configure target resource network and firewall settings, making sure source and target services can talk to each other on the network level.-- Configure connection information on source resource-- Configure authentication information on source and target if needed-- Create or update connection support rollback in case of failure.
+Service Connector runs multiple tasks while creating or updating service connections, including:
-Creating and updating a connection contains multiple steps. If a step fails, Service Connector will roll back all previous steps to keep the initial settings in the source and target instances.
+- Configuring the network and firewall settings
+- Configuring connection information
+- Configuring authentication information
+- Creating or updating connection rollback in case of failure
+
+If a step fails during this process, Service Connector rolls back all previous steps to keep the initial settings in the source and target instances.
## Connection configurations
-Once a service connection is created, the connection configuration will be set to the source service.
+Connection configurations are set in the source service.
+
+In the Azure portal, open a source service and navigate to **Service Connector**. Expand each connection and view the connection configurations.
-In the Azure portal, navigate to **Service Connector**. You can expand each connection and view the connection configurations.
+In the CLI, use the `list-configuration` command to get the connection configurations.
-In the CLI, you can use the `list-configuration` command to view the connection configuration.
+```azurecli
+az webapp connection list-configuration --resource-group <source-service-resource-group> --name <source-service-name> --connection <connection-name>
+```
```azurecli
-az webapp connection list-configuration -g {webapp_rg} -n {webapp_name} --connection {service_connection_name}
+az spring connection list-configuration --resource-group <source-service-resource-group> --name <source-service-name> --connection <connection-name>
``` ```azurecli
-az spring-cloud connection list-configuration -g {spring_cloud_rg} -n {spring_cloud_name} --connection {service_connection_name}
+az containerapp connection list-configuration --resource-group <source-service-resource-group> --name <source-service-name> --connection <connection-name>
``` ## Configuration naming convention
-Service Connector sets the configuration (environment variables or Spring Boot configurations) when creating a connection. The environment variable key-value pair(s) are determined by your client type and authentication type. For example, using the Azure SDK with managed identity requires a client ID, client secret, etc. Using JDBC driver requires a database connection string. Follow this convention to name the configuration:
+Service Connector sets the connection configuration when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with a managed identity requires a client ID, client secret, etc. Using a JDBC driver requires a database connection string. Follow the conventions below to name the configurations:
+
+- Spring Boot client: the Spring Boot library for each target service has its own naming convention. For example, MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
+
+- Other clients:
+ - The key name of the first connection configuration uses the format `<Cloud>_<Type>_<Name>`. For example, `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_BOOTSTRAPSERVER`.
+ - For the same type of target resource, the key name of the second connection configuration uses the format `<Cloud>_<Type>_<Connection Name>_<Name>`. For example, `AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_CONN2_BOOTSTRAPSERVER`.
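+
+For illustration, the following is a minimal, hedged sketch of creating a connection whose configuration follows the convention above. The resource names, the `--client-type dotnet` choice, and the use of a system-assigned managed identity are assumptions for the example, not requirements.
+
+```azurecli
+# Hypothetical names; replace them with your own. Connecting a web app to a Blob
+# Storage account with client type "dotnet" produces configuration keys such as
+# AZURE_STORAGEBLOB_RESOURCEENDPOINT on the web app.
+az webapp connection create storage-blob \
+    --resource-group my-app-rg \
+    --name my-web-app \
+    --target-resource-group my-storage-rg \
+    --account mystorageaccount \
+    --system-identity \
+    --client-type dotnet
+```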
-If you're using **Spring Boot** as the client type:
+## Service connection validation
-* Spring Boot library for each target service has its own naming convention. For example, MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
+When validating a connection, Service Connector checks the following elements (a CLI example follows the list):
-If you're using **other client types**, except for Spring Boot:
+- The source and target resources exist.
+- Source: correct connection information is registered.
+- Target: correct network and firewall settings are registered.
+- Source and target resources: correct authentication information is registered.
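+
+A hedged example of running these checks from the CLI for a web app connection; the names are placeholders, and the command assumes the `validate` subcommand of the source service's connection commands.
+
+```azurecli
+# Run the validation checks described above against an existing connection.
+az webapp connection validate \
+    --resource-group my-app-rg \
+    --name my-web-app \
+    --connection my_blob_connection
+```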
-* When connect to a target service, the key name of the first connection configuration is in format as `{Cloud}_{Type}_{Name}`. For example, `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_BOOTSTRAPSERVER`.
-* For the same type of target resource, the key name of the second connection configuration will be format as `{Cloud}_{Type}_{Connection Name}_{Name}`. For example, `AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_CONN2_BOOTSTRAPSERVER`.
+## Connection deletion
-## Validate a service connection
-The following items will be checked while validating the connection:
+When a service connection is deleted, the connection information is also deleted.
-* Validate whether source and target resources exist
-* Validate target resource network and firewall settings
-* Validate connection information on source resource
-* Validate authentication information on source and target if needed
+## Next steps
-## Delete connection
+Go to the concept article below to learn more about Service Connector.
-The connection information on source resource will be deleted when deleting connection.
+> [!div class="nextstepaction"]
+> [High availability](./concept-availability.md)
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Once a service connection is created, developers can validate and check the heal
**Compute * Azure App Service
-* Azure Spring Cloud
+* Azure Spring Apps
* Azure Container Apps **Target
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 11/04/2022 Last updated : 01/05/2023 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
service-fabric Service Fabric Cluster Resource Manager Cluster Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-cluster-description.md
Service Fabric expects that in some cases, particular workloads might need to ru
* A workload must be run on specific hardware for performance, scale, or security isolation reasons. * A workload should be isolated from other workloads for policy or resource consumption reasons.
-To support these sorts of configurations, Service Fabric includes tags that you can apply to nodes. These tags are called *node properties*. *Placement constraints* are the statements attached to individual services that you select for one or more node properties. Placement constraints define where services should run. The set of constraints is extensible. Any key/value pair can work. Starting with Service Fabric 8.1, node properties can be updated dynamically, with no disruption to running workloads.
+To support these sorts of configurations, Service Fabric includes tags that you can apply to nodes. These tags are called *node properties*. *Placement constraints* are the statements attached to individual services that you select for one or more node properties. Placement constraints define where services should run. The set of constraints is extensible. Any key/value pair can work.
![Different workloads for a cluster layout][Image5]
Just like for placement constraints and node properties, Service Fabric Cluster
## Capacity
-If you turned off all resource *balancing*, Service Fabric Cluster Resource Manager would still ensure that no node goes over its capacity. Managing capacity overruns is possible unless the cluster is too full or the workload is larger than any node. Capacity is another *constraint* that Cluster Resource Manager uses to understand how much of a resource a node has. Remaining capacity is also tracked for the cluster as a whole. Starting with Service Fabric 8.1, node capacities can be updated dynamically, with no disruption to running workloads.
+If you turned off all resource *balancing*, Service Fabric Cluster Resource Manager would still ensure that no node goes over its capacity. Managing capacity overruns is possible unless the cluster is too full or the workload is larger than any node. Capacity is another *constraint* that Cluster Resource Manager uses to understand how much of a resource a node has. Remaining capacity is also tracked for the cluster as a whole.
Both the capacity and the consumption at the service level are expressed in terms of metrics. For example, the metric might be "ClientConnections" and a node might have a capacity for "ClientConnections" of 32,768. Other nodes can have other limits. A service running on that node can say it's currently consuming 32,256 of the metric "ClientConnections."
service-fabric Service Fabric Tutorial Deploy Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md
To add a front-end API operation, fill out the values:
[Microsoft.ApiManagement/service/apis/policies](/azure/templates/microsoft.apimanagement/service/apis/policies) creates a backend policy, which ties everything together. This is where you configure the backend Service Fabric service to which requests are routed. You can apply this policy to any API operation. For more information, see [Policies overview](../api-management/api-management-howto-policies.md).
-The [backend configuration for Service Fabric](../api-management/api-management-transformation-policies.md#SetBackendService) provides the following request routing controls:
+The [backend configuration for Service Fabric](../api-management/set-backend-service-policy.md) provides the following request routing controls:
* Service instance selection by specifying a Service Fabric service instance name, either hardcoded (for example, `"fabric:/myapp/myservice"`) or generated from the HTTP request (for example, `"fabric:/myapp/users/" + context.Request.MatchedParameters["name"]`). * Partition resolution by generating a partition key using any Service Fabric partitioning scheme.
The [backend configuration for Service Fabric](../api-management/api-management-
</policies> ```
-For a full set of Service Fabric back-end policy attributes, refer to the [API Management back-end documentation](../api-management/api-management-transformation-policies.md#SetBackendService)
+For a full set of Service Fabric back-end policy attributes, refer to the [API Management back-end documentation](../api-management/set-backend-service-policy.md)
## Set parameters and deploy API Management
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
-| 8.2 CU8<br>8.2.1723.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU7<br>8.2.1692.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
| 8.2 CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU3<br>8.2.1620.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
-| 8.2 CU8<br>8.2.1521.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
The following table lists the version names of Service Fabric and their correspo
| 9.0 CU4 | 9.0.1121.9590 | 9.0.1114.1 | | 9.0 CU3 | 9.0.1107.9590 | 9.0.1103.1 | | 9.0 CU2.1 | Not applicable | 9.0.1086.1 |
-| 8.2 CU8 | 8.2.1723.9590 | 8.2.1521.1 |
+| 8.2 CU7 | 8.2.1692.9590 | Not applicable |
| 8.2 CU6 | 8.2.1686.9590 | 8.2.1485.1 | | 8.2 CU5.1 | Not applicable | 8.2.1483.1 | | 9.0 CU2 | 9.0.1048.9590 | 9.0.1056.1 |
The following table lists the version names of Service Fabric and their correspo
| 5.3 CU3 | 5.3.311.9590 | Not applicable| | 5.3 CU2 | 5.3.301.9590 | Not applicable| | 5.3 CU1 | 5.3.204.9494 | Not applicable|
-| 5.3 RTO | 5.3.121.9494 | Not applicable|
+| 5.3 RTO | 5.3.121.9494 | Not applicable|
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 10/19/2022 Last updated : 01/04/2023 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable.
Your Azure account needs permissions to create a Recovery Services vault, and to
- If you just created a free Azure subscription, you're the account admin, and no further action is needed. - If you aren't the admin, work with the admin to get the permissions you need.
+ - **Azure Active Directory**: Application owner and application developer roles to enable replication.
- **Create a vault**: Admin or owner permissions on the subscription. - **Manage Site Recovery operations in the vault**: The *Site Recovery Contributor* built-in Azure role. - **Create Azure VMs in the target region**: Either the built-in *Virtual Machine Contributor* role, or specific permissions to:
site-recovery How To Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md
Last updated 07/15/2022
# How to move from classic to modernized VMware disaster recoveryΓÇ»
-This article provides information about how you can move/migrate your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about how you can move/migrate your VMware or Physical machine replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note]
-> - Movement of physical servers to modernized architecture is not yet supported.  
-> - Movement of machines replicated in a Private Endpoint enabled Recovery Services vault is not supported yet.
> - Recovery plans will not be migrated and will need to be created again in the modernized Recovery Services vault. ## PrerequisitesΓÇ»
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
to private IPs.
1. Continue to the **Review \+ create** tab to review and create the DNS zone.
+ 1. If you're using the modernized architecture for protecting VMware or physical machines, also create another private DNS zone for **privatelink.prod.migration.windowsazure.com**. Site Recovery uses this endpoint to perform discovery of the on-premises environment.
+ 1. Link the private DNS zone to your virtual network. You now need to link the private DNS zone that you created to the bypass.
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
# Move from classic to modernized VMware disaster recovery  
-This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware or Physical machine replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note]
-> - Movement of physical servers to modernized architecture is not yet supported.  
-> - Movement of machines replicated in a Private Endpoint enabled Recovery Services vault is not supported yet.   
-> - Recovery plans won't be migrated and will need to be created again in the modernized Recovery Services vault.
+> Recovery plans won't be migrated and will need to be created again in the modernized Recovery Services vault.
## ArchitectureΓÇ»
Ensure the following before you move from classic architecture to modernized arc
### Prepare classic Recovery Services vault   Ensure the following for the replicated items you are planning to move: --- The Recovery Services vault does not have MSI enabled on it. -- The replicated item is a VMware machine replicating via a configuration server.
+
+- The replicated item is a VMware or physical machine replicating via a configuration server.
- Replication is not happening to an un-managed storage account but rather to managed disk. - Replication is happening from on-premises to Azure and the replicated item is not in a failed-over or in failed-back state. - The replicated item is not replicating the data from Azure to on-premises.ΓÇ»
Ensure the following for the replicated items you are planning to move:
- The configuration server’s version is 9.50 or later and its health is in a non-critical state.  - The configuration server has a healthy heartbeat.  - The mobility service agent’s version, installed on the source machine, is 9.50 or later.  -- The replicated item does not use Private Endpoint.  
+- The Recovery Services vaults with MSI enabled are supported.
+- The Recovery Services vaults with Private Endpoints enabled are supported.  
- The replicated item’s health is in a non-critical state, or its recovery points are being created successfully.  ### Prepare modernized Recovery Services vault  
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
The following table describes the default resource usage:
| VMware Spring Cloud Gateway | 2 | 1 core | 2Gi | | VMware Spring Cloud Gateway operator | 2 | 1 core | 2Gi |
+## Configure application performance monitoring
+
+Several types of application performance monitoring (APM) Java agents are available to monitor a Spring Cloud Gateway managed by Azure Spring Apps.
+
+### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to set up APM using the Azure portal:
+
+1. Open the **Spring Cloud Gateway** page and select the **Configuration** tab.
+
+1. Choose the APM type in the **APM** list to monitor a gateway.
+
+1. Fill in the key-value pairs for the APM environment variables in the **Properties** or **Secrets** sections. You can put variables with sensitive information in **Secrets**.
+
+1. When you've provided all the configurations, select **Save** to save your changes.
+
+Updating the configuration can take a few minutes. You should get a notification when the configuration is complete.
+
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to set up APM using Azure CLI:
+
+```azurecli
+az spring gateway update \
+ --apm-types <APM-type> \
+ --properties <key=value> \
+ --secrets <key=value>
+```
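+
+For example, a minimal sketch for the Application Insights agent. The connection string value is a placeholder, and the `APPLICATIONINSIGHTS_CONNECTION_STRING` variable name is an assumption based on the standard Application Insights Java agent setting.
+
+```azurecli
+# Enable the Application Insights Java agent for the gateway; the connection
+# string is sensitive, so it's passed as a secret rather than a property.
+az spring gateway update \
+    --resource-group my-resource-group \
+    --service my-spring-apps-instance \
+    --apm-types ApplicationInsights \
+    --secrets APPLICATIONINSIGHTS_CONNECTION_STRING=<your-connection-string>
+```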
+++
+The supported APM types are `ApplicationInsights`, `AppDynamics`, `Dynatrace`, `NewRelic`, and `ElasticAPM`. For more information about the functions provided and which environment variables are exposed, see the public documentation for the APM Java agent you're using. Azure Spring Apps upgrades the APM agent at the same cadence as deployed apps to keep the agent versions used by Spring Cloud Gateway and by your apps compatible.
+
+> [!NOTE]
+> By default, Azure Spring Apps prints the logs of the APM Java agent to `STDOUT`. These logs are mixed with the Spring Cloud Gateway logs. You can check the version of the APM agent used in the logs. You can query these logs in Log Analytics to troubleshoot.
+> To make the APM agents work correctly, increase the CPU and memory of Spring Cloud Gateway.
+ ## Next steps - [How to Use Spring Cloud Gateway](how-to-use-enterprise-spring-cloud-gateway.md)
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
To learn how to use more Azure Spring capabilities, advance to the quickstart se
> [!div class="nextstepaction"] > [Introduction to the sample app](./quickstart-sample-app-introduction.md)
+For a packaged app template with Azure Spring Apps infrastructure provisioned using Bicep, see [Spring Boot PetClinic Microservices Application Deployed to Azure Spring Apps](https://github.com/Azure-Samples/apptemplates-microservices-spring-app-on-AzureSpringApps).
+ More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 12/21/2022 Last updated : 01/05/2023
The run conditions are based on age. Current versions use the last modified time
| Action run condition | Condition value | Description | |--|--|--| | daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for actions on a current version of a blob |
-| daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for actions on a previous version of a blob or a blob snapshot |
+| daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for actions on the current version or previous version of a blob or a blob snapshot |
| daysAfterLastAccessTimeGreaterThan<sup>1</sup> | Integer value indicating the age in days | The condition for a current version of a blob when access tracking is enabled | | daysAfterLastTierChangeGreaterThan | Integer value indicating the age in days after last blob tier change time | This condition applies only to `tierToArchive` actions and can be used only with the `daysAfterModificationGreaterThan` condition. | <sup>1</sup> If [last access time tracking](#move-data-based-on-last-accessed-time) is not enabled for a blob, **daysAfterLastAccessTimeGreaterThan** uses the date the lifecycle policy was enabled instead of the `LastAccessTime` property of the blob.
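For example, a hedged sketch of a policy that uses the `daysAfterModificationGreaterThan` condition, set with the Azure CLI (the account and resource group names are placeholders):

```azurecli
# Tier blobs to cool after 30 days, archive after 90 days, and delete after
# 365 days without modification.
az storage account management-policy create \
    --account-name mystorageaccount \
    --resource-group my-resource-group \
    --policy '{
      "rules": [
        {
          "enabled": true,
          "name": "age-based-tiering",
          "type": "Lifecycle",
          "definition": {
            "actions": {
              "baseBlob": {
                "tierToCool": { "daysAfterModificationGreaterThan": 30 },
                "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
                "delete": { "daysAfterModificationGreaterThan": 365 }
              }
            },
            "filters": { "blobTypes": [ "blockBlob" ] }
          }
        }
      ]
    }'
```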
+## Lifecycle policy completed event
+
+The `LifecyclePolicyCompleted` event is generated when the actions defined by a lifecycle management policy are performed. The following json shows an example `LifecyclePolicyCompleted` event.
+
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/contosoresourcegroup/providers/Microsoft.Storage/storageAccounts/contosostorageaccount",
+ "subject": "BlobDataManagement/LifeCycleManagement/SummaryReport",
+ "eventType": "Microsoft.Storage.LifecyclePolicyCompleted",
+ "eventTime": "2022-05-26T00:00:40.1880331",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleTime": "2022/05/24 22:57:29.3260160",
+ "deleteSummary": {
+ "totalObjectsCount": 16,
+ "successCount": 14,
+ "errorList": ""
+ },
+ "tierToCoolSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
+ "tierToArchiveSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}
+```
+
+The following table describes the schema of the `LifecyclePolicyCompleted` event.
+
+|Field|Type|Description|
+||||
+|scheduleTime|string|The time that the lifecycle policy was scheduled|
+|deleteSummary|vector\<byte\>|The results summary of blobs scheduled for delete operation|
+|tierToCoolSummary|vector\<byte\>|The results summary of blobs scheduled for tier-to-cool operation|
+|tierToArchiveSummary|vector\<byte\>|The results summary of blobs scheduled for tier-to-archive operation|
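+
+A hedged sketch of routing this event with Azure Event Grid from the CLI; the resource IDs and the webhook endpoint are placeholders.
+
+```azurecli
+# Subscribe to LifecyclePolicyCompleted events emitted by a storage account and
+# deliver them to a webhook endpoint.
+az eventgrid event-subscription create \
+    --name lifecycle-policy-completed \
+    --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
+    --endpoint "https://contoso.example.com/api/lifecycle-events" \
+    --included-event-types Microsoft.Storage.LifecyclePolicyCompleted
+```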
+ ## Examples of lifecycle policies The following examples demonstrate how to address common scenarios with lifecycle policy rules.
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication asynchronously copies block blobs in a container according to
Object replication requires that blob versioning is enabled on both the source and destination accounts. When a replicated blob in the source account is modified, a new version of the blob is created in the source account that reflects the previous state of the blob, before modification. The current version in the source account reflects the most recent updates. Both the current version and any previous versions are replicated to the destination account. For more information about how write operations affect blob versions, see [Versioning on write operations](versioning-overview.md#versioning-on-write-operations).
-When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how to delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
+### Deleting a blob in the source account
+
+When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
### Snapshots
You can use Azure Policy to audit a set of storage accounts to ensure that the *
You can check the replication status for a blob in the source account. For more information, see [Check the replication status of a blob](object-replication-configure.md#check-the-replication-status-of-a-blob).
+> [!NOTE]
+> While replication is in progress, there's no way to determine the percentage of data that has been replicated.
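+
+A hedged sketch of checking a source blob's status from the CLI; the names are placeholders, and the `objectReplicationSourceProperties` path is an assumption about the `az storage blob show` output, so inspect the full output if it differs.
+
+```azurecli
+# Show the blob's properties and filter to the per-rule replication status
+# (the --query path is an assumption; run without it to see the full output).
+az storage blob show \
+    --account-name <source-account> \
+    --container-name <container> \
+    --name <blob-name> \
+    --auth-mode login \
+    --query "objectReplicationSourceProperties[].rules[].status"
+```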
+ If the replication status for a blob in the source account indicates failure, then investigate the following possible causes: - Make sure that the object replication policy is configured on the destination account.
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
The following example creates a new `BlobContainerClient` object with the contai
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerCreate.java" id="Snippet_CreateRootContainer":::
-## See also
+## Resources
+
+To learn more about creating a container using the Azure Blob Storage client library for Java, see the following resources.
+
+### REST API operations
+
+The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for creating a container use the following REST API operation:
-- [View code sample in GitHub](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerCreate.java)-- [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md) - [Create Container](/rest/api/storageservices/create-container) (REST API)-- [Delete Container](/rest/api/storageservices/delete-container) (REST API)+
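For quick verification outside of Java, the same Create Container operation can also be exercised with Azure CLI; a minimal sketch, with placeholder account and container names:

```azurecli
# Create a container by invoking the same underlying Create Container REST operation.
az storage container create \
    --account-name <storage-account> \
    --name <container-name> \
    --auth-mode login
```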
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerCreate.java)
+
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | | [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
-| [Azure DNS Zone endpoints (preview)](/common/storage-account-overview.md#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; | | [Blob index tags](storage-manage-find-blobs.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Blob snapshots](snapshots-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
The following table describes whether a feature is supported in a premium block
| [Access tier - cool](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Access tier - hot](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
-| [Azure DNS Zone endpoints (preview)](/common/storage-account-overview.md#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; | | [Blob index tags](storage-manage-find-blobs.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Blob snapshots](snapshots-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
The following table describes the fields on the **Networking** tab.
| Section | Field | Required or optional | Description | |--|--|--|--| | Network connectivity | Network access | Required | By default, incoming network traffic is routed to the public endpoint for your storage account. You can specify that traffic must be routed to the public endpoint through an Azure virtual network. You can also configure private endpoints for your storage account. For more information, see [Use private endpoints for Azure Storage](storage-private-endpoints.md). |
-| Network connectivity | Endpoint type | Required | Azure Storage supports two types of endpoints: standard endpoints (the default) and Azure DNS zone endpoints (preview). Within a given subscription, you can create up to 250 accounts with standard endpoints per region, and up to 5000 accounts with Azure DNS zone endpoints per region. To learn how to view the service endpoints for an existing storage account, see [Get service endpoints for the storage account](storage-account-get-info.md#get-service-endpoints-for-the-storage-account). |
+| Network connectivity | Endpoint type | Required | Azure Storage supports two types of endpoints: [standard endpoints](storage-account-overview.md#standard-endpoints) (the default) and [Azure DNS zone endpoints](storage-account-overview.md#azure-dns-zone-endpoints-preview) (preview). Within a given subscription, you can create up to 250 accounts with standard endpoints per region, and up to 5000 accounts with Azure DNS zone endpoints per region, for a total of 5250 storage accounts. To register for the preview, see [About the preview](storage-account-overview.md#about-the-preview). |
| Network routing | Routing preference | Required | The network routing preference specifies how network traffic is routed to the public endpoint of your storage account from clients over the internet. By default, a new storage account uses Microsoft network routing. You can also choose to route network traffic through the POP closest to the storage account, which may lower networking costs. For more information, see [Network routing preference for Azure Storage](network-routing-preference.md). | The following image shows a standard configuration of the networking properties for a new storage account.
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
A storage account provides a unique namespace in Azure for your data. Every obje
There are two types of service endpoints available for a storage account: -- Standard endpoints (recommended). You can create up to 250 storage accounts per region with standard endpoints in a given subscription.-- Azure DNS zone endpoints (preview). You can create up to 5000 storage accounts per region with Azure DNS zone endpoints in a given subscription.
+- [Standard endpoints](#standard-endpoints) (recommended). You can create up to 250 storage accounts per region with standard endpoints in a given subscription.
+- [Azure DNS zone endpoints](#azure-dns-zone-endpoints-preview) (preview). You can create up to 5000 storage accounts per region with Azure DNS zone endpoints in a given subscription.
Within a single subscription, you can create accounts with either standard or Azure DNS Zone endpoints, for a maximum of 5250 accounts per subscription.
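A minimal Azure CLI sketch for creating an account with an Azure DNS zone endpoint follows, assuming a CLI version that includes the preview `--dns-endpoint-type` parameter; the names and region are placeholders:

```azurecli
# Create a general-purpose v2 account whose endpoints are allocated in an
# Azure DNS zone (preview) rather than the standard endpoint namespace.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <region> \
    --sku Standard_LRS \
    --kind StorageV2 \
    --dns-endpoint-type AzureDnsZone
```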
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP
### Azure Storage management API and library references - [Storage Resource Provider REST API](/rest/api/storagerp/)-- [Storage Resource Provider Client Library for .NET](/dotnet/api/overview/azure/storage/management)
+- [Storage Resource Provider Client Library for .NET](/dotnet/api/overview/azure/resourcemanager.storage-readme)
- [Storage Service Management REST API (Classic)](/previous-versions/azure/reference/ee460790(v=azure.100)) - [Azure NetApp Files REST API](../../azure-netapp-files/azure-netapp-files-develop-with-rest-api.md)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You can use the same technique for an account that has the hierarchical namespac
| Service | Resource Provider Name | Purpose | | :-- | :- | :-- |
-| Azure API Management | Microsoft.ApiManagement/service | Enables Api Management service access to storage accounts behind firewall using policies. [Learn more](../../api-management/api-management-authentication-policies.md#use-managed-identity-in-send-request-policy). |
+| Azure API Management | Microsoft.ApiManagement/service | Enables API Management service access to storage accounts behind a firewall using policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). |
| Azure Cache for Redis | Microsoft.Cache/Redis | Allows access to storage accounts through Azure Cache for Redis. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md)| | Azure Cognitive Search | Microsoft.Search/searchServices | Enables Cognitive Search services to access storage accounts for indexing, processing and querying. | | Azure Cognitive Services | Microsoft.CognitiveService/accounts | Enables Cognitive Services to access storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).|
storage File Sync Storsimple Cost Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md
Azure File Sync has the following pricing components you should consider in the
### Translating quantities from StorSimple If you are trying to estimate the costs of Azure File Sync based on the expenses you see in StorSimple, be careful with the following items: -- **Azure Files bills on logical size (standard file shares).** Unlike StorSimple, which encodes your data in the StorSimple proprietary format before storing it to Azure Blob storage, Azure Files stores the data from Azure File Sync in the same form as you see it on your Windows File Server. This means that if you are trying to figure out how much storage you will consume in Azure Files, you should look at the logical size of the data from StorSimple, rather than the amount stored in Azure Blob storage. Although this may look like it will cause you to pay more when using Azure File Sync, you need to do the complete analysis including all aspects of StorSimple costs to see the true comparison. Additionally, Azure Files offers capacity reservations that enable you to buy storage at an up-to 36% discount over the list price. See [Capacity reservations in Azure Files](../files/understanding-billing.md#reserve-capacity).
+- **Azure Files bills on logical size (standard file shares).** Unlike StorSimple, which encodes your data in the StorSimple proprietary format before storing it to Azure Blob storage, Azure Files stores the data from Azure File Sync in the same form as you see it on your Windows File Server. This means that if you are trying to figure out how much storage you will consume in Azure Files, you should look at the logical size of the data from StorSimple, rather than the amount stored in Azure Blob storage. Although this may look like it will cause you to pay more when using Azure File Sync, you need to do the complete analysis including all aspects of StorSimple costs to see the true comparison. Additionally, Azure Files offers reservations that enable you to buy storage at an up-to 36% discount over the list price. See [Reservations in Azure Files](../files/understanding-billing.md#reservations).
- **Don't assume a 1:1 ratio between transactions on StorSimple and transactions in Azure File Sync.** It might be tempting to look at the number of transactions done by StorSimple in Azure Blob storage and assume that number will be similar to the number of transactions that Azure File Sync will do on Azure Files. This number may overstate or understate the number of transactions Azure File Sync will do, so it's not a good way to estimate transaction costs. The best way to estimate transaction costs is to do a small proof-of-concept in Azure File Sync with a live file share similar to the file shares stored in StorSimple.
storage Files Reserve Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-reserve-capacity.md
Title: Optimize costs for Azure Files with reserved capacity
+ Title: Optimize costs for Azure Files with Reservations
-description: Learn how to save costs on Azure file share deployments by using Azure Files reserved capacity.
+description: Learn how to save costs on Azure file share deployments by using Azure Files Reservations.
-# Optimize costs for Azure Files with reserved capacity
-You can save money on the storage costs for Azure file shares with capacity reservations. Azure Files reserved capacity offers you a discount on capacity for storage costs when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation.
+# Optimize costs for Azure Files with Reservations
+You can save money on the storage costs for Azure file shares with Azure Files Reservations. Azure Files Reservations (also referred to as *reserved instances*) offer you a discount on capacity for storage costs when you commit to a Reservation for either one year or three years. A Reservation provides a fixed amount of storage capacity for the term of the Reservation.
-Azure Files reserved capacity can significantly reduce your capacity costs for storing data in your Azure file shares. How much you save will depend on the duration of your reservation, the total capacity you choose to reserve, and the tier and redundancy settings that you've chosen for you Azure file shares. Reserved capacity provides a billing discount and doesn't affect the state of your Azure file shares.
+Azure Files reservations can significantly reduce your capacity costs for storing data in your Azure file shares. How much you save will depend on the duration of your Reservation, the total storage capacity you choose to reserve, and the tier and redundancy settings that you've chosen for your Azure file shares. Reservations provide a billing discount and don't affect the state of your Azure file shares.
-For pricing information about reservation capacity for Azure Files, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/).
+For pricing information about Azure Files Reservations, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/).
## Applies to | File share type | SMB | NFS |
For pricing information about reservation capacity for Azure Files, see [Azure F
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## Reservation terms for Azure Files
-The following sections describe the terms of an Azure Files capacity reservation.
+The following sections describe the terms of an Azure Files Reservation.
-### Reservation capacity
-You can purchase Azure Files reserved capacity in units of 10 TiB and 100 TiB per month for a one-year or three-year term.
+### Reservation units and terms
+You can purchase Azure Files Reservations in units of 10 TiB and 100 TiB per month for a one-year or three-year term.
### Reservation scope
-Azure Files reserved capacity is available for a single subscription, multiple subscriptions (shared scope), and management groups. When scoped to a single subscription, the reservation discount is applied to the selected subscription only. When scoped to multiple subscriptions, the reservation discount is shared across those subscriptions within the customer's billing context. When scoped to a management group, the reservation discount is applied to subscriptions that are a part of both the management group and billing scope. A reservation applies to your usage within the purchased scope and cannot be limited to a specific storage account, container, or object within the subscription.
+Azure Files Reservations are available for a single subscription, multiple subscriptions (shared scope), and management groups. When scoped to a single subscription, the Reservation discount is applied to the selected subscription only. When scoped to multiple subscriptions, the Reservation discount is shared across those subscriptions within the customer's billing context. When scoped to a management group, the Reservation discount is applied to subscriptions that are a part of both the management group and billing scope. A Reservation applies to your usage within the purchased scope and can't be limited to a specific storage account, container, or object within the subscription.
-A capacity reservation for Azure Files covers only the amount of data that is stored in a subscription or shared resource group. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of the pay-as-you go rates. For more information on Azure reservations, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md).
+An Azure Files Reservation covers only the amount of data that is stored in a subscription or shared resource group. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the Reservation. As soon as you buy a Reservation, the capacity charges that match the Reservation attributes are charged at the discount rates instead of the pay-as-you-go rates. For more information, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md).
-### Reserved capacity and snapshots
-If you're taking snapshots of Azure file shares, there are differences in how capacity reservations work for standard versus premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the reserved capacity and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the capacity reservation. For more information, see [Snapshots](understanding-billing.md#snapshots).
+### Reservations and snapshots
+If you're taking snapshots of Azure file shares, there are differences in how Reservations work for standard versus premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the Reservation. For more information, see [Snapshots](understanding-billing.md#snapshots).
### Supported tiers and redundancy options
-Azure Files reserved capacity is available for premium, hot, and cool file shares. Reserved capacity isn't available for Azure file shares in the transaction optimized tier. All storage redundancies support reservations. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy).
+Azure Files Reservations are available for premium, hot, and cool file shares. Reservations aren't available for Azure file shares in the transaction optimized tier. All storage redundancies support Reservations. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy).
### Security requirements for purchase
-To purchase reserved capacity:
+To purchase a Reservation:
- You must be in the **Owner** role for at least one Enterprise or individual subscription with pay-as-you-go rates. - For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the EA portal. Or, if that setting is disabled, you must be an EA Admin on the subscription.-- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure Files reserved capacity.
+- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure Files Reservations.
## Determine required capacity before purchase
-When you purchase an Azure Files reservation, you must choose the region, tier, and redundancy option for the reservation. Your reservation is valid only for data stored in that region, tier, and redundancy level. For example, suppose you purchase a reservation for data in US West for the hot tier using zone-redundant storage (ZRS). That reservation will not apply to data in US East, data in the cool tier, or data in geo-redundant storage (GRS). However, you can purchase another reservation for your additional needs.
+When you purchase an Azure Files Reservation, you must choose the region, tier, and redundancy option for the Reservation. Your Reservation is valid only for data stored in that region, tier, and redundancy level. For example, suppose you purchase a Reservation for data in West US for the hot tier using zone-redundant storage (ZRS). That Reservation will not apply to data in East US, data in the cool tier, or data in geo-redundant storage (GRS). However, you can purchase another Reservation for your additional needs.
-Reservations are available for 10 TiB or 100 TiB blocks, with higher discounts for 100 TiB blocks. When you purchase a reservation in the Azure portal, Microsoft may provide you with recommendations based on your previous usage to help determine which reservation you should purchase.
+Reservations are available for 10 TiB or 100 TiB blocks, with higher discounts for 100 TiB blocks. When you purchase a Reservation in the Azure portal, Microsoft may provide you with recommendations based on your previous usage to help determine which Reservation you should purchase.
-## Purchase Azure Files reserved capacity
-You can purchase Azure Files reserved capacity through the [Azure portal](https://portal.azure.com). Pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md).
+## Purchase Azure Files Reservations
+You can purchase Azure Files Reservations through the [Azure portal](https://portal.azure.com). Pay for the Reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure Reservations with up front or monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md).
-For help identifying the reservation terms that are right for your scenario, see [Understand the Azure Storage reserved capacity discount](../../cost-management-billing/reservations/understand-storage-charges.md).
+For help identifying the Reservation terms that are right for your scenario, see [Understand Azure Storage Reservation discounts](../../cost-management-billing/reservations/understand-storage-charges.md).
-Follow these steps to purchase reserved capacity:
+Follow these steps to purchase a Reservation:
-1. Navigate to the [Purchase reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/Browse_AddCommand) blade in the Azure portal.
-1. Select **Azure Files** to buy a new reservation.
+1. Navigate to the [Purchase Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/Browse_AddCommand) blade in the Azure portal.
+1. Select **Azure Files** to buy a new Reservation.
1. Fill in the required fields as described in the following table:
- ![Screenshot showing how to purchase reserved capacity](./media/files-reserve-capacity/select-reserved-capacity.png)
+ ![Screenshot showing how to purchase Reservations.](./media/files-reserve-capacity/select-reserved-capacity.png)
|Field |Description | |||
- |**Scope** | Indicates how many subscriptions can use the billing benefit associated with the reservation. It also controls how the reservation is applied to specific subscriptions. <br/><br/> If you select **Shared**, the reservation discount is applied to Azure Files capacity in any subscription within your billing context. The billing context is based on how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope includes all individual subscriptions with pay-as-you-go rates created by the account administrator. <br/><br/> If you select **Single subscription**, the reservation discount is applied to Azure Files capacity in the selected subscription. <br/><br/> If you select **Single resource group**, the reservation discount is applied to Azure Files capacity in the selected subscription and the selected resource group within that subscription. <br/><br/> You can change the reservation scope after you purchase the reservation. |
- |**Subscription** | The subscription that's used to pay for the Azure Files reservation. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
- | **Region** | The region where the reservation is in effect. |
- | **Tier** | The tier where the for which the reservation is in effect. Options include *Premium*, *Hot*, and *Cool*. |
- | **Redundancy** | The redundancy option for the reservation. Options include *LRS*, *ZRS*, *GRS*, and *GZRS*. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy). |
- | **Billing frequency** | Indicates how often the account is billed for the reservation. Options include *Monthly* or *Upfront*. |
+ |**Scope** | Indicates how many subscriptions can use the billing benefit associated with the Reservation. It also controls how the Reservation is applied to specific subscriptions. <br/><br/> If you select **Shared**, the Reservation discount is applied to Azure Files capacity in any subscription within your billing context. The billing context is based on how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope includes all individual subscriptions with pay-as-you-go rates created by the account administrator. <br/><br/> If you select **Single subscription**, the Reservation discount is applied to Azure Files capacity in the selected subscription. <br/><br/> If you select **Single resource group**, the Reservation discount is applied to Azure Files capacity in the selected subscription and the selected resource group within that subscription. <br/><br/> You can change the Reservation scope after you purchase the Reservation. |
+ |**Subscription** | The subscription that's used to pay for the Azure Files Reservation. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
+ | **Region** | The region where the Reservation is in effect. |
+ | **Tier** | The tier for which the Reservation is in effect. Options include *Premium*, *Hot*, and *Cool*. |
+ | **Redundancy** | The redundancy option for the Reservation. Options include *LRS*, *ZRS*, *GRS*, and *GZRS*. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy). |
+ | **Billing frequency** | Indicates how often the account is billed for the Reservation. Options include *Monthly* or *Upfront*. |
| **Size** | The amount of capacity to reserve. | |**Term** | One year or three years. |
-1. After you select the parameters for your reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing.
+1. After you select the parameters for your Reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing.
-1. In the **Purchase reservations** blade, review the total cost of the reservation. You can also provide a name for the reservation.
+1. In the **Purchase Reservations** blade, review the total cost of the Reservation. You can also provide a name for the Reservation.
-After you purchase a reservation, it is automatically applied to any existing Azure file shares that match the terms of the reservation. If you haven't created any Azure file shares yet, the reservation will apply whenever you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase.
+After you purchase a Reservation, it is automatically applied to any existing Azure file shares that match the terms of the Reservation. If you haven't created any Azure file shares yet, the Reservation will apply whenever you create a resource that matches the terms of the Reservation. In either case, the term of the Reservation begins immediately after a successful purchase.
-## Exchange or refund a reservation
-You can exchange or refund a reservation, with certain limitations. These limitations are described in the following sections.
+## Exchange or refund a Reservation
+You can exchange or refund a Reservation, with certain limitations. These limitations are described in the following sections.
-To exchange or refund a reservation, navigate to the reservation details in the Azure portal. Select **Exchange** or **Refund**, and follow the instructions to submit a support request. When the request has been processed, Microsoft will send you an email to confirm completion of the request.
+To exchange or refund a Reservation, navigate to the Reservation details in the Azure portal. Select **Exchange** or **Refund**, and follow the instructions to submit a support request. When the request has been processed, Microsoft will send you an email to confirm completion of the request.
For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-### Exchange a reservation
-Exchanging a reservation enables you to receive a prorated refund based on the unused portion of the reservation. You can then apply the refund to the purchase price of a new Azure Files reservation.
+### Exchange a Reservation
+Exchanging a Reservation enables you to receive a prorated refund based on the unused portion of the Reservation. You can then apply the refund to the purchase price of a new Azure Files Reservation.
-There's no limit on the number of exchanges you can make. Additionally, there's no fee associated with an exchange. The new reservation that you purchase must be of equal or greater value than the prorated credit from the original reservation. An Azure Files reservation can be exchanged only for another Azure Files reservation, and not for a reservation for any other Azure service.
+There's no limit on the number of exchanges you can make. Additionally, there's no fee associated with an exchange. The new Reservation that you purchase must be of equal or greater value than the prorated credit from the original Reservation. An Azure Files Reservation can be exchanged only for another Azure Files Reservation, and not for a Reservation for any other Azure service.
-### Refund a reservation
-You may cancel an Azure Files reservation at any time. When you cancel, you'll receive a prorated refund based on the remaining term of the reservation, minus a 12 percent early termination fee. The maximum refund per year is $50,000.
+### Refund a Reservation
+You may cancel an Azure Files Reservation at any time. When you cancel, you'll receive a prorated refund based on the remaining term of the Reservation, minus a 12 percent early termination fee. The maximum refund per year is $50,000.
-Cancelling a reservation immediately terminates the reservation and returns the remaining months to Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase.
+Cancelling a Reservation immediately terminates the Reservation and returns the remaining months to Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase.
-## Expiration of a reservation
-When a reservation expires, any Azure Files capacity that you are using under that reservation is billed at the pay-as-you go rate. Reservations don't renew automatically.
+## Expiration of a Reservation
+When a Reservation expires, any Azure Files capacity that you are using under that Reservation is billed at the pay-as-you-go rate. Reservations don't renew automatically.
-You will receive an email notification 30 days prior to the expiration of the reservation, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date.
+You will receive an email notification 30 days prior to the expiration of the Reservation, and again on the expiration date. To continue taking advantage of the cost savings that a Reservation provides, renew it no later than the expiration date.
## Need help? Contact us If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
For more information, see:
- [Overview of SMB features in the Windows Server documentation](/windows-server/storage/file-server/file-server-smb-overview) ### 2021 quarter 2 (April, May, June)
-#### Premium, hot, and cool storage capacity reservations
-Azure Files supports storage capacity reservations (also referred to as *reserve instances*). Storage capacity reservations allow you to achieve a discount on storage by pre-committing to storage utilization. Azure Files supports capacity reservations on the premium, hot, and cool tiers. Capacity reservations are sold in units of 10 TiB or 100 TiB, for terms of either one year or three years.
+#### Premium, hot, and cool storage reservations
+Azure Files supports storage reservations (also referred to as *reserved instances*). Azure Files Reservations allow you to achieve a discount on storage by pre-committing to storage utilization. Azure Files supports Reservations on the premium, hot, and cool tiers. Reservations are sold in units of 10 TiB or 100 TiB, for terms of either one year or three years.
For more information, see: - [Understanding Azure Files billing](understanding-billing.md)-- [Optimized costs for Azure Files with reserved capacity](files-reserve-capacity.md)
+- [Optimize costs for Azure Files with Reservations](files-reserve-capacity.md)
- [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) #### Improved portal experience for domain joining to Active Directory
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
Last updated 12/07/2022
ms.devlang: azurecli
+recommendations: false
# Assign share-level permissions
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Last updated 12/19/2022
+recommendations: false
# Configure directory and file-level permissions over SMB
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Last updated 11/29/2022
+recommendations: false
# Enable AD DS authentication for Azure file shares
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Last updated 11/09/2022
+recommendations: false
# Mount a file share from a domain-joined VM
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Last updated 11/17/2022
+recommendations: false
# Update the password of your storage account identity in AD DS
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 12/12/2022 Last updated : 01/03/2023
+recommendations: false
# Enable Azure Active Directory Domain Services authentication on Azure Files
To enable Azure AD DS authentication over SMB with the [Azure portal](https://po
:::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png" alt-text="Screenshot of the File shares pane in your storage account, Active directory is highlighted." lightbox="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png":::
-1. Select **Azure Active Directory Domain Services** then switch the toggle to **Enabled**.
+1. Select **Azure Active Directory Domain Services**, then enable the feature by selecting the checkbox.
1. Select **Save**.
- :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-highlight.png" alt-text="Screenshot of the Active Directory pane, Azure Active Directory Domain Services is enabled." lightbox="media/storage-files-active-directory-enable/files-azure-ad-highlight.png":::
+ :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png" alt-text="Screenshot of the Active Directory pane, Azure Active Directory Domain Services is enabled." lightbox="media/storage-files-active-directory-enable/files-azure-ad-ds-highlight.png":::
# [PowerShell](#tab/azure-powershell)
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Last updated 11/28/2022
+recommendations: false
# Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
Last updated 12/05/2022
+recommendations: false
# Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
Last updated 12/15/2022
+recommendations: false
# Use DataBox to migrate from Network Attached Storage (NAS) to Azure file shares
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md
Last updated 12/16/2022
+recommendations: false
# Use RoboCopy to migrate to Azure file shares
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
New-AzStorageDirectory `
To create a new directory named **myDirectory** at the root of your Azure file share, use the [`az storage directory create`](/cli/azure/storage/directory) command:
+> [!NOTE]
+> If you don't provide credentials with your commands, Azure CLI will query for your storage account key. You can also provide your storage account key with the command by using a variable such as `--account-key $storageAccountKey` or in plain text such as `--account-key "your-storage-account-key-here"`.
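As a minimal sketch of one way to supply that credential, you can load the key into the variable that the examples use, or export it so the CLI picks it up automatically; the resource group name below is a placeholder:

```azurecli
# Retrieve the first account key and expose it to subsequent az storage commands.
storageAccountKey=$(az storage account keys list \
    --resource-group <resource-group> \
    --account-name $storageAccountName \
    --query "[0].value" \
    --output tsv)

# Alternatively, export it; az storage data-plane commands read AZURE_STORAGE_KEY.
export AZURE_STORAGE_KEY=$storageAccountKey
```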
+ ```azurecli-interactive az storage directory create \ --account-name $storageAccountName \
- --account-key $storageAccountKey \
--share-name $shareName \ --name "myDirectory" \ --output none
date > SampleUpload.txt
az storage file upload \ --account-name $storageAccountName \
- --account-key $storageAccountKey \
--share-name $shareName \ --source "SampleUpload.txt" \ --path "myDirectory/SampleUpload.txt"
After you upload the file, you can use the [`az storage file list`](/cli/azure/s
```azurecli-interactive az storage file list \ --account-name $storageAccountName \
- --account-key $storageAccountKey \
--share-name $shareName \ --path "myDirectory" \ --output table
rm -f SampleDownload.txt
az storage file download \ --account-name $storageAccountName \
- --account-key $storageAccountKey \
--share-name $shareName \ --path "myDirectory/SampleUpload.txt" \
- --dest "SampleDownload.txt" \
+ --dest "./SampleDownload.txt" \
--output none ```
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
If you're migrating to Azure Files from on-premises or comparing Azure Files to
- **How do you pay for storage, IOPS, and bandwidth?** With Azure Files, the billing model you use depends on whether you're deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage, such as price determinism and simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning. -- **Are there any methods to optimize storage costs?** With Azure Files, you can use [capacity reservations](#reserve-capacity) to achieve an up to 36% discount on storage. Other solutions may employ strategies like deduplication or compression to optionally optimize storage efficiency. However, these storage optimization strategies often have non-monetary costs, such as reducing performance. Azure Files capacity reservations have no side effects on performance.
+- **Are there any methods to optimize storage costs?** You can use [Azure Files Reservations](#reservations) to achieve a discount of up to 36% on storage. Other solutions may employ strategies such as deduplication or compression to optimize storage efficiency. However, these storage optimization strategies often have non-monetary costs, such as reduced performance. Reservations have no side effects on performance.
- **How do you achieve storage resiliency and redundancy?** With Azure Files, storage resiliency and redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built in or something you must assemble yourself.
If you're migrating to Azure Files from on-premises or comparing Azure Files to
- **What are the costs of value-added products, like backup, security, etc.?** Azure Files supports integrations with multiple first- and third-party [value-added services](#value-added-services). Value-added services such as Azure Backup, Azure File Sync, and Azure Defender provide backup, replication and caching, and security functionality for Azure Files. Value-added solutions, whether on-premises or in the cloud, have their own licensing and product costs, but are often considered part of the total cost of ownership for file storage.
-## Reserve capacity
-Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase reserved capacity, your reservation must specify the following dimensions:
+## Reservations
+Azure Files supports reservations (also referred to as *reserved instances*), which enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase a Reservation, you must specify the following dimensions:
-- **Capacity size**: Capacity reservations can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity reservation. You can purchase multiple reservations, including reservations of different capacity sizes to meet your workload requirements. For example, if your production deployment has 120 TiB of file shares, you could purchase one 100 TiB reservation and two 10 TiB reservations to meet the total capacity requirements.-- **Term**: Reservations can be purchased for either a one-year or three-year term, with more significant discounts for purchasing a longer reservation term.-- **Tier**: The tier of Azure Files for the capacity reservation. Reservations for Azure Files currently are available for the premium, hot, and cool tiers.-- **Location**: The Azure region for the capacity reservation. Capacity reservations are available in a subset of Azure regions.-- **Redundancy**: The storage redundancy for the capacity reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
+- **Capacity size**: Reservations can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity Reservation. You can purchase multiple Reservations, including Reservations of different capacity sizes to meet your workload requirements. For example, if your production deployment has 120 TiB of file shares, you could purchase one 100 TiB Reservation and two 10 TiB Reservations to meet the total storage capacity requirements.
+- **Term**: Reservations can be purchased for either a one-year or three-year term, with more significant discounts for purchasing a longer Reservation term.
+- **Tier**: The tier of Azure Files for the Reservation. Reservations currently are available for the premium, hot, and cool tiers.
+- **Location**: The Azure region for the Reservation. Reservations are available in a subset of Azure regions.
+- **Redundancy**: The storage redundancy for the Reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
-Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the reservation.
+Once you purchase a Reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the Reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the Reservation.
-There are differences in how capacity reservations work with Azure file share snapshots for standard and premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the reserved capacity and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the capacity reservation. For more information, see [Snapshots](#snapshots).
+There are differences in how Reservations work with Azure file share snapshots for standard and premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the Reservation. For more information, see [Snapshots](#snapshots).
-For more information on how to purchase storage reservations, see [Optimize costs for Azure Files with reserved capacity](files-reserve-capacity.md).
+For more information on how to purchase Reservations, see [Optimize costs for Azure Files with Reservations](files-reserve-capacity.md).
## Provisioned model Azure Files uses a provisioned model for premium file shares. In a provisioned billing model, you proactively specify to the Azure Files service what your storage requirements are, rather than being billed based on what you use. A provisioned model for storage is similar to buying an on-premises storage solution because when you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares can be dynamically scaled up or down depending on your storage and IO performance characteristics.
Snapshots are always billed based on the differential storage utilization of eac
- In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price over the provisioned storage price. This means that you'll see a separate line item on your bill representing snapshots for premium file shares for each FileStorage storage account on your bill. -- In standard file shares, snapshots are billed as part of the normal used storage meter, although you're still only billed for the differential cost of the snapshot. This means that you won't see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against capacity reservations that are purchased for standard file shares.
+- In standard file shares, snapshots are billed as part of the normal used storage meter, although you're still only billed for the differential cost of the snapshot. This means that you won't see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against Reservations that are purchased for standard file shares.
Value-added services for Azure Files may use snapshots as part of their value proposition. See [value-added services for Azure Files](#value-added-services) for more information on how snapshots are used.
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
This article compares several ISV solutions that provide files services in Azure
- White glove deployment and professional services **NetApp**-- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-ontap-cloud?tab=Overview)
+- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.cloud-manager)
- De-duplication savings passed on to customer via reduced infrastructure consumption **Panzura**
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
synapse-analytics Cognitive Services With Synapseml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cognitive-services-with-synapseml-overview.md
+
+ Title: Cognitive Services in Azure Synapse Analytics
+description: Learn how to use Azure Cognitive Services in Azure Synapse Analytics with SynapseML.
++ Last updated : 01/09/2023+++
+# Cognitive Services
+
+[Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) is a suite of APIs, SDKs, and services that lets developers add cognitive features to their applications without requiring direct AI or data science skills. The goal of Azure Cognitive Services is to help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars: Vision, Speech, Language, Web Search, and Decision.
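Using these services from SynapseML requires a Cognitive Services resource and its key. The following is a minimal Azure CLI sketch for provisioning a multi-service resource and retrieving its keys; the resource names, region, and SKU are placeholder assumptions:

```azurecli
# Create a multi-service Cognitive Services resource and accept the terms prompt.
az cognitiveservices account create \
    --name <cognitive-services-resource> \
    --resource-group <resource-group> \
    --kind CognitiveServices \
    --sku S0 \
    --location <region> \
    --yes

# List the keys to pass to the SynapseML transformers described below.
az cognitiveservices account keys list \
    --name <cognitive-services-resource> \
    --resource-group <resource-group>
```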
+
+## Usage
+
+### Vision
+[**Computer Vision**](https://azure.microsoft.com/services/cognitive-services/computer-vision/)
+- Describe: provides description of an image in human readable language ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DescribeImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DescribeImage))
+- Analyze (color, image type, face, adult/racy content): analyzes visual features of an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeImage))
+- OCR: reads text from an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/OCR.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.OCR))
+- Recognize Text: reads text from an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/RecognizeText.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.RecognizeText))
+- Thumbnail: generates a thumbnail of user-specified size from the image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/GenerateThumbnails.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.GenerateThumbnails))
+- Recognize domain-specific content: recognizes domain-specific content (celebrity, landmark) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/RecognizeDomainSpecificContent.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.RecognizeDomainSpecificContent))
+- Tag: identifies list of words that are relevant to the input image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/TagImage.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.TagImage))
+
+[**Face**](https://azure.microsoft.com/services/cognitive-services/face/)
+- Detect: detects human faces in an image ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectFace.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectFace))
+- Verify: verifies whether two faces belong to a same person, or a face belongs to a person ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/VerifyFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.VerifyFaces))
+- Identify: finds the closest matches of the specific query person face from a person group ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/IdentifyFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.IdentifyFaces))
+- Find similar: finds similar faces to the query face in a face list ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/FindSimilarFace.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.FindSimilarFace))
+- Group: divides a group of faces into disjoint groups based on similarity ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/GroupFaces.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.GroupFaces))
+
+### Speech
+[**Speech Services**](https://azure.microsoft.com/services/cognitive-services/speech-services/)
+- Speech-to-text: transcribes audio streams ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/SpeechToText.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.SpeechToText))
+- Conversation Transcription: transcribes audio streams into live transcripts with identified speakers. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/ConversationTranscription.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.ConversationTranscription))
+- Text to Speech: Converts text to realistic audio ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/TextToSpeech.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.TextToSpeech))
++
+### Language
+[**Text Analytics**](https://azure.microsoft.com/services/cognitive-services/text-analytics/)
+- Language detection: detects language of the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/LanguageDetector.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.LanguageDetector))
+- Key phrase extraction: identifies the key talking points in the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/KeyPhraseExtractor.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.KeyPhraseExtractor))
+- Named entity recognition: identifies known entities and general named entities in the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/NER.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.NER))
+- Sentiment analysis: returns a score between 0 and 1 indicating the sentiment in the input text ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/TextSentiment.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.TextSentiment))
+- Healthcare Entity Extraction: Extracts medical entities and relationships from text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeHealthText.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeHealthText))
++
+### Translation
+[**Translator**](https://azure.microsoft.com/services/cognitive-services/translator/)
+- Translate: Translates text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Translate.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Translate))
+- Transliterate: Converts text in one language from one script to another script. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Transliterate.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Transliterate))
+- Detect: Identifies the language of a piece of text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/Detect.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.Detect))
+- BreakSentence: Identifies the positioning of sentence boundaries in a piece of text. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BreakSentence.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BreakSentence))
+- Dictionary Lookup: Provides alternative translations for a word and a small number of idiomatic phrases. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DictionaryLookup.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DictionaryLookup))
+- Dictionary Examples: Provides examples that show how terms in the dictionary are used in context. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DictionaryExamples.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DictionaryExamples))
+- Document Translation: Translates documents across all supported languages and dialects while preserving document structure and data format. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DocumentTranslator.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DocumentTranslator))
+
+### Form Recognizer
+[**Form Recognizer**](https://azure.microsoft.com/services/form-recognizer/)
+- Analyze Layout: Extract text and layout information from a given document. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeLayout.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeLayout))
+- Analyze Receipts: Detects and extracts data from receipts using optical character recognition (OCR) and our receipt model, enabling you to easily extract structured data from receipts such as merchant name, merchant phone number, transaction date, transaction total, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeReceipts.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeReceipts))
+- Analyze Business Cards: Detects and extracts data from business cards using optical character recognition (OCR) and our business card model, enabling you to easily extract structured data from business cards such as contact names, company names, phone numbers, emails, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeBusinessCards.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeBusinessCards))
+- Analyze Invoices: Detects and extracts data from invoices using optical character recognition (OCR) and our invoice understanding deep learning models, enabling you to easily extract structured data from invoices such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeInvoices.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeInvoices))
+- Analyze ID Documents: Detects and extracts data from identification documents using optical character recognition (OCR) and our ID document model, enabling you to easily extract structured data from ID documents such as first name, last name, date of birth, document number, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeIDDocuments.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeIDDocuments))
+- Analyze Custom Form: Extracts information from forms (PDFs and images) into structured data based on a model created from a set of representative training forms. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeCustomModel.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeCustomModel))
+- Get Custom Model: Get detailed information about a custom model. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/GetCustomModel.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.GetCustomModel))
+- List Custom Models: Get information about all custom models. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/ListCustomModels.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.ListCustomModels))
+
+### Decision
+[**Anomaly Detector**](https://azure.microsoft.com/services/cognitive-services/anomaly-detector/)
+- Anomaly status of latest point: generates a model using preceding points and determines whether the latest point is anomalous ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectLastAnomaly.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectLastAnomaly))
+- Find anomalies: generates a model using an entire series and finds anomalies in the series ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DetectAnomalies.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DetectAnomalies))
+
+### Search
+- [Bing Image search](https://azure.microsoft.com/services/cognitive-services/bing-image-search-api/) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))
+- [Azure Cognitive search](https://docs.microsoft.com/azure/search/search-what-is-azure-search) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
+
+## Prerequisites
+
+1. Follow the steps in [Getting started](https://docs.microsoft.com/azure/cognitive-services/big-data/getting-started) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install SynapseML and how to create your Spark cluster in Databricks.
+1. After you create a new notebook in Azure Databricks, copy the **Shared code** below and paste it into a new cell in your notebook.
+1. Choose a service sample below, and copy and paste it into a second new cell in your notebook.
+1. Replace any of the service subscription key placeholders with your own key.
+1. Choose the run button (triangle icon) in the upper right corner of the cell, then select **Run Cell**.
+1. View results in a table below the cell.
+
+## Shared code
+
+To get started, we'll need to add this code to the project:
++
+```python
+from pyspark.sql.functions import udf, col
+from synapse.ml.io.http import HTTPTransformer, http_udf
+from requests import Request
+from pyspark.sql.functions import lit
+from pyspark.ml import PipelineModel
+from pyspark.sql.functions import col
+import os
+```
++
+```python
+from pyspark.sql import SparkSession
+from synapse.ml.core.platform import *
+
+# Bootstrap Spark Session
+spark = SparkSession.builder.getOrCreate()
+
+from synapse.ml.core.platform import materializing_display as display
+```
++
+```python
+from synapse.ml.cognitive import *
+
+# A general Cognitive Services key for Text Analytics, Computer Vision and Form Recognizer (or use separate keys that belong to each service)
+service_key = find_secret("cognitive-api-key")
+service_loc = "eastus"
+
+# A Bing Search v7 subscription key
+bing_search_key = find_secret("bing-search-key")
+
+# An Anomaly Detector subscription key
+anomaly_key = find_secret("anomaly-api-key")
+anomaly_loc = "westus2"
+
+# A Translator subscription key
+translator_key = find_secret("translator-key")
+translator_loc = "eastus"
+
+# An Azure search key
+search_key = find_secret("azure-search-key")
+```
+
+## Text Analytics sample
+
+The [Text Analytics](https://azure.microsoft.com/services/cognitive-services/text-analytics/) service provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service returns a score between 0.0 and 1.0, where low scores indicate negative sentiment and high scores indicate positive sentiment. This sample uses three simple sentences and returns the sentiment for each.
++
+```python
+# Create a dataframe that's tied to its column names
+df = spark.createDataFrame(
+ [
+ ("I am so happy today, its sunny!", "en-US"),
+ ("I am frustrated by this rush hour traffic", "en-US"),
+ ("The cognitive services on spark aint bad", "en-US"),
+ ],
+ ["text", "language"],
+)
+
+# Run the Text Analytics service with options
+sentiment = (
+ TextSentiment()
+ .setTextCol("text")
+ .setLocation(service_loc)
+ .setSubscriptionKey(service_key)
+ .setOutputCol("sentiment")
+ .setErrorCol("error")
+ .setLanguageCol("language")
+)
+
+# Show the results of your text query in a table format
+display(
+ sentiment.transform(df).select(
+ "text", col("sentiment.document.sentiment").alias("sentiment")
+ )
+)
+```
+
+## Text Analytics for Health Sample
+
+The [Text Analytics for Health Service](https://docs.microsoft.com/azure/cognitive-services/language-service/text-analytics-for-health/overview?tabs=ner) extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
++
+```python
+df = spark.createDataFrame(
+ [
+ ("20mg of ibuprofen twice a day",),
+ ("1tsp of Tylenol every 4 hours",),
+ ("6-drops of Vitamin B-12 every evening",),
+ ],
+ ["text"],
+)
+
+healthcare = (
+ AnalyzeHealthText()
+ .setSubscriptionKey(service_key)
+ .setLocation(service_loc)
+ .setLanguage("en")
+ .setOutputCol("response")
+)
+
+display(healthcare.transform(df))
+```
+
+## Translator sample
+[Translator](https://azure.microsoft.com/services/cognitive-services/translator/) is a cloud-based machine translation service and is part of the Azure Cognitive Services family of cognitive APIs used to build intelligent apps. Translator is easy to integrate into your applications, websites, tools, and solutions. It lets you add multi-language user experiences in 90 languages and dialects and can be used for text translation with any operating system. In this sample, we do a simple text translation by providing the sentences you want to translate and the target languages you want to translate them to.
++
+```python
+from pyspark.sql.functions import col, flatten
+
+# Create a dataframe including sentences you want to translate
+df = spark.createDataFrame(
+ [(["Hello, what is your name?", "Bye"],)],
+ [
+ "text",
+ ],
+)
+
+# Run the Translator service with options
+translate = (
+ Translate()
+ .setSubscriptionKey(translator_key)
+ .setLocation(translator_loc)
+ .setTextCol("text")
+ .setToLanguage(["zh-Hans"])
+ .setOutputCol("translation")
+)
+
+# Show the results of the translation.
+display(
+ translate.transform(df)
+ .withColumn("translation", flatten(col("translation.translations")))
+ .withColumn("translation", col("translation.text"))
+ .select("translation")
+)
+```
+
+## Form Recognizer sample
+[Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) is a part of Azure Applied AI Services that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. In this sample, we analyze a business card image and extract its information into structured data.
++
+```python
+from pyspark.sql.functions import col, explode
+
+# Create a dataframe containing the source files
+imageDf = spark.createDataFrame(
+ [
+ (
+ "https://mmlspark.blob.core.windows.net/datasets/FormRecognizer/business_card.jpg",
+ )
+ ],
+ [
+ "source",
+ ],
+)
+
+# Run the Form Recognizer service
+analyzeBusinessCards = (
+ AnalyzeBusinessCards()
+ .setSubscriptionKey(service_key)
+ .setLocation(service_loc)
+ .setImageUrlCol("source")
+ .setOutputCol("businessCards")
+)
+
+# Show the results of recognition.
+display(
+ analyzeBusinessCards.transform(imageDf)
+ .withColumn(
+ "documents", explode(col("businessCards.analyzeResult.documentResults.fields"))
+ )
+ .select("source", "documents")
+)
+```
+
+## Computer Vision sample
+
+[Computer Vision](https://azure.microsoft.com/services/cognitive-services/computer-vision/) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag a list of images. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
++
+```python
+# Create a dataframe with the image URLs
+base_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/"
+df = spark.createDataFrame(
+ [
+ (base_url + "objects.jpg",),
+ (base_url + "dog.jpg",),
+ (base_url + "house.jpg",),
+ ],
+ [
+ "image",
+ ],
+)
+
+# Run the Computer Vision service. Analyze Image extracts information from/about the images.
+analysis = (
+ AnalyzeImage()
+ .setLocation(service_loc)
+ .setSubscriptionKey(service_key)
+ .setVisualFeatures(
+ ["Categories", "Color", "Description", "Faces", "Objects", "Tags"]
+ )
+ .setOutputCol("analysis_results")
+ .setImageUrlCol("image")
+ .setErrorCol("error")
+)
+
+# Show the results of what you wanted to pull out of the images.
+display(analysis.transform(df).select("image", "analysis_results.description.tags"))
+```
+
+## Bing Image Search sample
+
+[Bing Image Search](https://azure.microsoft.com/services/cognitive-services/bing-image-search-api/) searches the web to retrieve images related to a user's natural language query. In this sample, we use a text query that looks for images with quotes. It returns a list of image URLs that contain photos related to our query.
++
+```python
+# Number of images Bing will return per query
+imgsPerBatch = 10
+# A list of offsets, used to page into the search results
+offsets = [(i * imgsPerBatch,) for i in range(100)]
+# Since web content is our data, we create a dataframe with options on that data: offsets
+bingParameters = spark.createDataFrame(offsets, ["offset"])
+
+# Run the Bing Image Search service with our text query
+bingSearch = (
+ BingImageSearch()
+ .setSubscriptionKey(bing_search_key)
+ .setOffsetCol("offset")
+ .setQuery("Martin Luther King Jr. quotes")
+ .setCount(imgsPerBatch)
+ .setOutputCol("images")
+)
+
+# Transformer that extracts and flattens the richly structured output of Bing Image Search into a simple URL column
+getUrls = BingImageSearch.getUrlTransformer("images", "url")
+
+# This displays the full results returned, uncomment to use
+# display(bingSearch.transform(bingParameters))
+
+# Since we have two services, they are put into a pipeline
+pipeline = PipelineModel(stages=[bingSearch, getUrls])
+
+# Show the results of your search: image URLs
+display(pipeline.transform(bingParameters))
+```
+
+## Speech-to-Text sample
+The [Speech-to-text](https://azure.microsoft.com/services/cognitive-services/speech-services/) service converts streams or files of spoken audio to text. In this sample, we transcribe one audio file.
++
+```python
+# Create a dataframe with our audio URLs, tied to the column called "url"
+df = spark.createDataFrame(
+ [("https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav",)], ["url"]
+)
+
+# Run the Speech-to-text service to transcribe the audio into text
+speech_to_text = (
+ SpeechToTextSDK()
+ .setSubscriptionKey(service_key)
+ .setLocation(service_loc)
+ .setOutputCol("text")
+ .setAudioDataCol("url")
+ .setLanguage("en-US")
+ .setProfanity("Masked")
+)
+
+# Show the results of the transcription
+display(speech_to_text.transform(df).select("url", "text.DisplayText"))
+```
+
+## Text-to-Speech sample
+[Text to speech](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) is a service that lets you build apps and services that speak naturally, with a choice of more than 270 neural voices across 119 languages and variants.
++
+```python
+from synapse.ml.cognitive import TextToSpeech
+
+fs = ""
+if running_on_databricks():
+ fs = "dbfs:"
+elif running_on_synapse_internal():
+ fs = "Files"
+
+# Create a dataframe with text and an output file location
+df = spark.createDataFrame(
+ [
+ (
+ "Reading out loud is fun! Check out aka.ms/spark for more information",
+ fs + "/output.mp3",
+ )
+ ],
+ ["text", "output_file"],
+)
+
+tts = (
+ TextToSpeech()
+ .setSubscriptionKey(service_key)
+ .setTextCol("text")
+ .setLocation(service_loc)
+ .setVoiceName("en-US-JennyNeural")
+ .setOutputFileCol("output_file")
+)
+
+# Check to make sure there were no errors during audio creation
+display(tts.transform(df))
+```
+
+## Anomaly Detector sample
+
+[Anomaly Detector](https://azure.microsoft.com/services/cognitive-services/anomaly-detector/) is great for detecting irregularities in your time series data. In this sample, we use the service to find anomalies in the entire time series.
++
+```python
+# Create a dataframe with the point data that Anomaly Detector requires
+df = spark.createDataFrame(
+ [
+ ("1972-01-01T00:00:00Z", 826.0),
+ ("1972-02-01T00:00:00Z", 799.0),
+ ("1972-03-01T00:00:00Z", 890.0),
+ ("1972-04-01T00:00:00Z", 900.0),
+ ("1972-05-01T00:00:00Z", 766.0),
+ ("1972-06-01T00:00:00Z", 805.0),
+ ("1972-07-01T00:00:00Z", 821.0),
+ ("1972-08-01T00:00:00Z", 20000.0),
+ ("1972-09-01T00:00:00Z", 883.0),
+ ("1972-10-01T00:00:00Z", 898.0),
+ ("1972-11-01T00:00:00Z", 957.0),
+ ("1972-12-01T00:00:00Z", 924.0),
+ ("1973-01-01T00:00:00Z", 881.0),
+ ("1973-02-01T00:00:00Z", 837.0),
+ ("1973-03-01T00:00:00Z", 9000.0),
+ ],
+ ["timestamp", "value"],
+).withColumn("group", lit("series1"))
+
+# Run the Anomaly Detector service to look for irregular data
+anamoly_detector = (
+ SimpleDetectAnomalies()
+ .setSubscriptionKey(anomaly_key)
+ .setLocation(anomaly_loc)
+ .setTimestampCol("timestamp")
+ .setValueCol("value")
+ .setOutputCol("anomalies")
+ .setGroupbyCol("group")
+ .setGranularity("monthly")
+)
+
+# Show the full results of the analysis with the anomalies marked as "True"
+display(
+ anamoly_detector.transform(df).select("timestamp", "value", "anomalies.isAnomaly")
+)
+```
+
+## Arbitrary web APIs
+
+With HTTP on Spark, any web service can be used in your big data pipeline. In this example, we use the [World Bank API](http://api.worldbank.org/v2/country/) to get information about various countries around the world.
++
+```python
+# Use any requests from the python requests library
++
+def world_bank_request(country):
+ return Request(
+ "GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country)
+ )
++
+# Create a dataframe that specifies which countries we want data on
+df = spark.createDataFrame([("br",), ("usa",)], ["country"]).withColumn(
+ "request", http_udf(world_bank_request)(col("country"))
+)
+
+# Much faster for big data because of the concurrency :)
+client = (
+ HTTPTransformer().setConcurrency(3).setInputCol("request").setOutputCol("response")
+)
+
+# Get the body of the response
++
+def get_response_body(resp):
+ return resp.entity.content.decode()
++
+# Show the details of the country data returned
+display(
+ client.transform(df).select(
+ "country", udf(get_response_body)(col("response")).alias("response")
+ )
+)
+```
+
+## Azure Cognitive search sample
+
+In this example, we show how you can enrich data using Cognitive Skills and write to an Azure Search Index using SynapseML.
++
+```python
+search_service = "mmlspark-azure-search"
+search_index = "test-33467690"
+
+df = spark.createDataFrame(
+ [
+ (
+ "upload",
+ "0",
+ "https://mmlspark.blob.core.windows.net/datasets/DSIR/test1.jpg",
+ ),
+ (
+ "upload",
+ "1",
+ "https://mmlspark.blob.core.windows.net/datasets/DSIR/test2.jpg",
+ ),
+ ],
+ ["searchAction", "id", "url"],
+)
+
+tdf = (
+ AnalyzeImage()
+ .setSubscriptionKey(service_key)
+ .setLocation(service_loc)
+ .setImageUrlCol("url")
+ .setOutputCol("analyzed")
+ .setErrorCol("errors")
+ .setVisualFeatures(
+ ["Categories", "Tags", "Description", "Faces", "ImageType", "Color", "Adult"]
+ )
+ .transform(df)
+ .select("*", "analyzed.*")
+ .drop("errors", "analyzed")
+)
+
+tdf.writeToAzureSearch(
+ subscriptionKey=search_key,
+ actionCol="searchAction",
+ serviceName=search_service,
+ indexName=search_index,
+ keyCol="id",
+)
+```
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
Last updated 05/31/2022
[!INCLUDE [implementation-success-context](includes/implementation-success-context.md)]
-The first step when implementing Azure Synapse Analytics is to assessment your environment. An assessment provides you with the opportunity to gather all the available information about your existing environment, environmental requirements, project requirements, constraints, timelines, and pain points. This information will form the basis of later evaluations and checkpoint activities. It will prove invaluable when it comes time to validate and compare against the project solution as it's planned, designed, and developed. We recommend that you dedicate a good amount of time to gather all the information and be sure to have necessary discussions with relevant groups. Relevant groups can include project stakeholders, business users, solution designers, and subject matter experts (SMEs) of the existing solution and environment.
+The first step when implementing Azure Synapse Analytics is to conduct an assessment of your environment. An assessment provides you with the opportunity to gather all the available information about your existing environment, environmental requirements, project requirements, constraints, timelines, and pain points. This information will form the basis of later evaluations and checkpoint activities. It will prove invaluable when it comes time to validate and compare against the project solution as it's planned, designed, and developed. We recommend that you dedicate a good amount of time to gather all the information and be sure to have necessary discussions with relevant groups. Relevant groups can include project stakeholders, business users, solution designers, and subject matter experts (SMEs) of the existing solution and environment.
The assessment will become a guide to help you evaluate the solution design and make informed technology recommendations to implement Azure Synapse.
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
synapse-analytics Reservation Of Executors In Dynamic Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/reservation-of-executors-in-dynamic-allocation.md
+
+ Title: Reservation of Executors as part of Dynamic Allocation in Synapse Spark
+description: In this article, you learn how Dynamic Allocation of Executors works, and the conservative reservation that's applied to the executors to ensure jobs run with greater reliability.
+++++ Last updated : 11/07/2022++++
+# Reservation of Executors as part of Dynamic Allocation in Synapse Spark Pools
+
+Users create Spark pools in Azure Synapse Analytics and size them based on their analytics workload requirements. It's common for enterprise teams to use a Spark pool for multiple data engineering processes, and pool usage can vary with data ingestion rates, data volume, and other factors. A Spark pool might be used for compute-intensive data transformation as well as for data exploration. In these cases, users can enable the Autoscale option and specify a minimum and maximum number of nodes; the platform then scales the number of active nodes within those limits based on demand.
+
+At the application level, executor requirements are hard to tune because they vary widely across the stages of a Spark job and depend on the volume of data processed, which changes over time. To address this, users can enable the Dynamic Allocation of Executors option as part of the pool configuration, which automatically allocates executors to the Spark application based on the nodes available in the Spark pool.
+
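+In open-source Spark terms, the pool-level minimum and maximum executor limits correspond to the standard dynamic allocation properties shown in the following sketch. It's illustrative only: the values are assumptions for a 3-to-10 executor range, and in Synapse these limits are normally governed by the Spark pool or session configuration rather than set in code.
+
+```python
+from pyspark.sql import SparkSession
+
+# Illustrative only: the standard Spark dynamic allocation properties that map to the
+# minimum/maximum executor limits discussed above. The values here are assumptions.
+spark = (
+    SparkSession.builder
+    .config("spark.dynamicAllocation.enabled", "true")
+    .config("spark.dynamicAllocation.minExecutors", "3")
+    .config("spark.dynamicAllocation.maxExecutors", "10")
+    .getOrCreate()
+)
+```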
+When the Dynamic Allocation option is enabled, the system *reserves* executors for every submitted Spark application at job submission, based on the maximum nodes specified by the user, so that autoscale scenarios can complete successfully.
+
+> [!NOTE]
+> **This conservative approach lets the platform scale from, say, 3 to 10 nodes without running out of capacity, thereby providing greater reliability for job execution.**
+
+![Dynamic Allocation in Synapse Spark Pools](./media/reservation-of-executors-in-spark/dynamic-allocation-overview.png)
+
+## What does the reservation of executors mean?
+
+In scenarios where the Dynamic Allocation option is enabled in a Synapse Spark pool, the platform reserves executors for every submitted Spark application based on the maximum limit specified by the user. A new job is accepted only when the pool has at least as many available executors as the maximum number that the job would reserve.
+
+> [!IMPORTANT]
+> This reservation activity doesn't affect billing: users are billed only for the cores actually used, not for the cores held in the reserved state.
++
+## How does this dynamic allocation work when multiple jobs are submitted against a Spark Pool
+
+Let's look at an example scenario: a single user creates Spark Pool A with Autoscale enabled, with a minimum of 5 and a maximum of 50 nodes.
+Because the user isn't sure how much compute the Spark jobs will require, the user enables Dynamic Allocation to allow the executors to scale.
++ The user starts by submitting application App1, which begins with three executors and can scale from 3 to 10 executors.
++ The maximum number of nodes allocated to the Spark pool is 50. Because the submission of App1 reserves 10 executors, the number of available executors in the Spark pool drops to 40.
++ The user submits another Spark application, App2, with the same compute configuration as App1. It also starts with 3 executors, can scale up to 10, and therefore reserves 10 more executors from the pool.
++ The total number of available executors in the Spark pool drops to 30.
++ The user then submits App3, App4, and App5 with the same configuration. As App3 is accepted, the number of available executors drops to 20, then to 10 when App4 is accepted, and to 0 when App5 reserves its 10 executors. A sixth job would therefore be queued.
++ Because there are no available cores left, App6 waits in the queue until the other applications complete and the number of available executors in the pool rises from 0 back to 10.
+
+![Job Level Reservation of Executors in Spark Pool with Dynamic Allocation](./media/reservation-of-executors-in-spark/reservation-of-executors.png)
+
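+The acceptance rule in this scenario can be summarized with a short, illustrative sketch (plain Python arithmetic, not a Synapse API):
+
+```python
+# Illustrative sketch of the scenario above: Spark Pool A has capacity for 50 executors,
+# and each application reserves its maximum of 10 executors when it's submitted.
+pool_capacity = 50
+max_reservation_per_job = 10
+
+available = pool_capacity
+for app in ["App1", "App2", "App3", "App4", "App5", "App6"]:
+    if available >= max_reservation_per_job:
+        available -= max_reservation_per_job
+        print(f"{app}: accepted, {available} executors still available")
+    else:
+        print(f"{app}: queued until reserved executors are released")
+```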
+> [!NOTE]
+> + Even though executors are reserved, not all of them are used; they're held to support autoscale scenarios for these applications.
+> + If applications App1 through App5 all run at their minimum capacity, only 15 executors are consumed in total. The remaining 35 executors are reserved so that each application can still scale from 3 to 10 executors while it runs.
+> + Even with 35 executors reserved, **the user is billed only for the 15 executors used, not for the 35 executors in the reserved state.**
+> + **When Dynamic Allocation is disabled**: the reservation of executors is based on the minimum and maximum number of executors specified by the user.
+> + If, in the example above, the user specifies 5 executors per application, then 5 executors are reserved for every application submitted, and App6 can be submitted without being queued.
++
+## Scenario where concurrent jobs are submitted to Spark Pools in a Synapse Workspace
+
+Users can create multiple Spark pools in a Synapse Analytics workspace and size them based on their analytics workload requirements. For the Spark pools where Dynamic Allocation is enabled, the total available cores for the workspace at any point in time is:
+
+**Total Available Cores for the Workspace = Total Cores of all Spark Pools - Cores Reserved or Being Used for Active Jobs running in Spark Pools**
+
+Users will get a **workspace capacity exceeded error** for jobs submitted when Total Available Cores for the Workspace is 0.
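+As an illustration of this formula, the following sketch uses hypothetical pool sizes:
+
+```python
+# Hypothetical example: a workspace with two Spark pools.
+total_pool_cores = {"PoolA": 400, "PoolB": 200}    # total cores per pool (assumed sizes)
+reserved_or_in_use = {"PoolA": 400, "PoolB": 120}  # cores reserved or used by active jobs
+
+available_workspace_cores = sum(total_pool_cores.values()) - sum(reserved_or_in_use.values())
+print(available_workspace_cores)  # 80; when this reaches 0, new submissions fail with
+                                  # a "workspace capacity exceeded" error
+```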
+
+## Dynamic Allocation and Reservation of Cores in a Multiuser Scenario
+
+Consider a scenario where multiple users run Spark jobs in the same Synapse workspace. Suppose User1 submits jobs to a Spark pool that has Dynamic Allocation enabled and, in doing so, takes up all the cores available in the pool, some actively used by the running jobs and some reserved to support their execution. If User2 then submits jobs against the pool, there are no available cores, and User2 receives a **workspace capacity exceeded error**.
+
+> [!TIP]
+> Users can increase the number of cores in their Spark pools, thereby increasing the total available cores, to avoid **workspace capacity exceeded** errors.
++++
+## Next steps
+- [Quickstart: Create an Apache Spark pool in Azure Synapse Analytics using web tools](/azure/synapse-analytics/quickstart-create-apache-spark-pool-portal)
+- [What is Apache Spark in Azure Synapse Analytics](/azure/synapse-analytics/spark/apache-spark-overview)
+- [Automatically scale Azure Synapse Analytics Apache Spark pools](/azure/synapse-analytics/spark/apache-spark-autoscale)
+- [Azure Synapse Analytics](/azure/synapse-analytics)
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
The service levels range from DW100c to DW30000c.
The maximum service level is DW30000c, which has 60 Compute nodes and one distribution per Compute node. For example, a 600 TB data warehouse at DW30000c processes approximately 10 TB per Compute node.
+> [!NOTE]
+> Synapse Dedicated SQL pool is an evergreen platform service. Under [shared responsibility model in the cloud](/azure/security/fundamentals/shared-responsibility#division-of-responsibility), Microsoft continues to invest in advancements to underlying software and hardware which host dedicated SQL pool. As a result, the number of nodes or the type of computer hardware which underpins a given performance level (SLO) may change. The number of compute nodes listed here are provided as a reference, and shouldn't be used for sizing or performance purposes. Irrespective of number of nodes or underlying infrastructure, Microsoft's goal is to deliver performance in accordance with SLO; hence, we recommend that all sizing exercises must use cDWU as a guide. For more information on SLO and compute Data Warehouse Units, see [Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW)](what-is-a-data-warehouse-unit-dwu-cdwu.md#service-level-objective).
+ ## Concurrency maximums for workload groups With the introduction of [workload groups](sql-data-warehouse-workload-isolation.md), the concept of concurrency slots no longer applies. Resources per request are allocated on a percentage basis and specified in the workload group definition. However, even with the removal of concurrency slots, there are minimum amounts of resources needed per queries based on the service level. The below table defined the minimum amount of resources needed per query across service levels and the associated concurrency that can be achieved.
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool via the Azure portal'
-description: Use the Azure portal to pause compute for dedicated SQL pool to save costs. Resume compute when you are ready to use the data warehouse.
+ Title: "Quickstart: Pause and resume compute in dedicated SQL pool via the Azure portal"
+description: Use the Azure portal to pause compute for dedicated SQL pool to save costs. Resume compute when you're ready to use the data warehouse.
- Previously updated : 11/23/2020- Last updated : 01/05/2023 -++
+ - seo-lt-2019
+ - azure-synapse
+ - mode-ui
# Quickstart: Pause and resume compute in dedicated SQL pool via the Azure portal
-You can use the Azure portal to pause and resume the dedicated SQL pool compute resources.
+You can use the Azure portal to pause and resume the dedicated SQL pool compute resources.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+> [!NOTE]
+> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+ ## Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com/). ## Before you begin
-Use [Create and Connect - portal](../quickstart-create-sql-pool-portal.md) to create a dedicated SQL pool called **mySampleDataWarehouse**.
+Use [Create and Connect - portal](../quickstart-create-sql-pool-portal.md) to create a dedicated SQL pool called `mySampleDataWarehouse`.
## Pause compute To reduce costs, you can pause and resume compute resources on-demand. For example, if you won't be using the database during the night and on weekends, you can pause it during those times, and resume it during the day.
-
->[!NOTE]
->You won't be charged for compute resources while the database is paused. However, you will continue to be charged for storage.
+
+> [!NOTE]
+> You won't be charged for compute resources while the database is paused. However, you will continue to be charged for storage.
Follow these steps to pause a dedicated SQL pool: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Navigate to your the **Dedicated SQL pool** page to open the SQL pool.
-3. Notice **Status** is **Online**.
+1. Select **Azure Synapse Analytics** in the menu of the Azure portal, or search for **Azure Synapse Analytics** in the search bar.
+1. Navigate to your **Dedicated SQL pool** page to open the SQL pool.
+1. Notice **Status** is **Online**.
- ![Compute online](././media/pause-and-resume-compute-portal/compute-online.png)
+ :::image type="content" source="././media/pause-and-resume-compute-portal/compute-online.png" alt-text="Screenshot of the Azure portal indicating that the dedicated SQL pool compute is online.":::
-4. To pause the dedicated SQL pool, click the **Pause** button.
-5. A confirmation question appears asking if you want to continue. Click **Yes**.
-6. Wait a few moments, and then notice the **Status** is **Pausing**.
+1. To pause the dedicated SQL pool, select the **Pause** button.
+1. A confirmation question appears asking if you want to continue. Select **Yes**.
+1. Wait a few moments, and then notice the **Status** is **Pausing**.
- ![Screenshot shows the Azure portal for a sample data warehouse with a Status value of Pausing.](./media/pause-and-resume-compute-portal/pausing.png)
+ :::image type="content" source="./media/pause-and-resume-compute-portal/pausing.png" alt-text="Screenshot shows the Azure portal for a sample data warehouse with a Status value of Pausing.":::
-7. When the pause operation is complete, the status is **Paused** and the option button is **Resume**.
-8. The compute resources for the dedicated SQL pool are now offline. You won't be charged for compute until you resume the service.
-
- ![Compute offline](././media/pause-and-resume-compute-portal/compute-offline.png)
+1. When the pause operation is complete, the status is **Paused** and the option button is **Resume**.
+1. The compute resources for the dedicated SQL pool are now offline. You won't be charged for compute until you resume the service.
+ :::image type="content" source="././media/pause-and-resume-compute-portal/compute-offline.png" alt-text="Compute offline.":::
## Resume compute Follow these steps to resume a dedicated SQL pool.
-1. Navigate to your the **Dedicated SQL pool** page to open the SQL pool.
-3. On the **mySampleDataWarehouse** page, notice **Status** is **Paused**.
+1. Navigate to your **Dedicated SQL pool** to open the SQL pool.
+1. On the `mySampleDataWarehouse` page, notice **Status** is **Paused**.
- ![Compute offline](././media/pause-and-resume-compute-portal/compute-offline.png)
+ :::image type="content" source="././media/pause-and-resume-compute-portal/compute-offline.png" alt-text="Compute offline.":::
-1. To resume SQL pool, click **Resume**.
-1. A confirmation question appears asking if you want to start. Click **Yes**.
+1. To resume the SQL pool, select **Resume**.
+1. A confirmation question appears asking if you want to start. Select **Yes**.
1. Notice the **Status** is **Resuming**.
- ![Screenshot shows the Azure portal for a sample data warehouse with the Start button selected and a Status value of Resuming.](./media/pause-and-resume-compute-portal/resuming.png)
+ :::image type="content" source="./media/pause-and-resume-compute-portal/resuming.png" alt-text="Screenshot shows the Azure portal for a sample data warehouse with the Start button selected and a Status value of Resuming.":::
-1. When the SQL pool is back online, the status is **Online** and the option button is **Pause**.
+1. When the SQL pool is back online, the status is **Online**, and the option button is **Pause**.
1. The compute resources for SQL pool are now online and you can use the service. Charges for compute have resumed.
- ![Compute online](././media/pause-and-resume-compute-portal/compute-online.png)
+ :::image type="content" source="././media/pause-and-resume-compute-portal/compute-online.png" alt-text="Compute online.":::
## Clean up resources
-You are being charged for data warehouse units and the data stored in your dedicated SQL pool. These compute and storage resources are billed separately.
+You are being charged for data warehouse units and the data stored in your dedicated SQL pool. These compute and storage resources are billed separately.
- If you want to keep the data in storage, pause compute.-- If you want to remove future charges, you can delete the dedicated SQL pool.
+- If you want to remove future charges, you can delete the dedicated SQL pool.
Follow these steps to clean up resources as you desire. 1. Sign in to the [Azure portal](https://portal.azure.com), and select your dedicated SQL pool.
- ![Clean up resources](./media/pause-and-resume-compute-portal/clean-up-resources.png)
-
-1. To pause compute, click the **Pause** button.
+ :::image type="content" source="./media/pause-and-resume-compute-portal/clean-up-resources.png" alt-text="Clean up resources.":::
-1. To remove the dedicated SQL pool so you are not charged for compute or storage, click **Delete**.
+1. To pause compute, select the **Pause** button.
+1. To remove the dedicated SQL pool so you are not charged for compute or storage, select **Delete**.
## Next steps
-You have now paused and resumed compute for your dedicated SQL pool. Continue to the next article to learn more about how to [Load data into a dedicated SQL pool](./load-data-from-azure-blob-storage-using-copy.md). For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.
+- You have now paused and resumed compute for your dedicated SQL pool. Continue to the next article to learn more about how to [Load data into a dedicated SQL pool](./load-data-from-azure-blob-storage-using-copy.md). For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.
+
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell'
-description: You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW). compute resources.
+ Title: "Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell"
+description: You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW) compute resources.
- Previously updated : 03/20/2019- Last updated : 01/05/2023 -++
+ - devx-track-azurepowershell
+ - seo-lt-2019
+ - azure-synapse
+ - mode-api
# Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell
You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW) compute resources. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+> [!NOTE]
+> This article applies to dedicated SQL pools (formerly SQL DW) and not dedicated SQL pools created in Azure Synapse Workspaces. There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool in a Azure Synapse Workspace, see [Quickstart: Pause and resume compute in dedicated SQL pool in an Azure Synapse Workspace with Azure PowerShell](pause-and-resume-compute-workspace-powershell.md).
+> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+ ## Before you begin [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-This quickstart assumes you already have a dedicated SQL pool (formerly SQL DW) that you can pause and resume. If you need to create one, you can use [Create and Connect - portal](create-data-warehouse-portal.md) to create a dedicated SQL pool (formerly SQL DW) called **mySampleDataWarehouse**.
+This quickstart assumes you already have a dedicated SQL pool (formerly SQL DW) that you can pause and resume. If you need to create one, you can use [Create and Connect - portal](create-data-warehouse-portal.md) to create a dedicated SQL pool (formerly SQL DW) called `mySampleDataWarehouse`.
-## Log in to Azure
+## Sign in to Azure
-Log in to your Azure subscription using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) command and follow the on-screen directions.
+Sign in to your Azure subscription using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) command and follow the on-screen directions.
```powershell Connect-AzAccount
Locate the database name, server name, and resource group for the dedicated SQL
Follow these steps to find location information for your dedicated SQL pool (formerly SQL DW): 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Click **Azure Synapse Analytics (formerly SQL DW)** in the left page of the Azure portal.
-1. Select **mySampleDataWarehouse** from the **Azure Synapse Analytics (formerly SQL DW)** page. The SQL pool opens.
+1. Select **Dedicated SQL pool (formerly SQL DW)** in the menu of the Azure portal, or search for **Dedicated SQL pool (formerly SQL DW)** in the search bar.
+1. Select `mySampleDataWarehouse`. The SQL pool opens.
- ![Server name and resource group](./media/pause-and-resume-compute-powershell/locate-data-warehouse-information.png)
+ :::image type="content" source="./media/pause-and-resume-compute-powershell/locate-data-warehouse-information.png" alt-text="Screenshot of the Azure portal containing the dedicated SQL pool (formerly SQL DW) server name and resource group.":::
-1. Write down the dedicated SQL pool (formerly SQL DW) name, which is the database name. Also write down the server name, and the resource group.
-1. Use only the first part of the server name in the PowerShell cmdlets. In the preceding image, the full server name is sqlpoolservername.database.windows.net. We use **sqlpoolservername** as the server name in the PowerShell cmdlet.
+1. Remember the dedicated SQL pool (formerly SQL DW) name, which is the database name. Also write down the server name, and the resource group.
+1. Use only the first part of the server name in the PowerShell cmdlets. In the preceding image, the full server name is `sqlpoolservername.database.windows.net`. We use **sqlpoolservername** as the server name in the PowerShell cmdlet.
## Pause compute To save costs, you can pause and resume compute resources on-demand. For example, if you are not using the database during the night and on weekends, you can pause it during those times, and resume it during the day.
-> [!NOTE]
+> [!NOTE]
> There is no charge for compute resources while the database is paused. However, you continue to be charged for storage.
-To pause a database, use the [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example pauses a SQL pool named **mySampleDataWarehouse** hosted on a server named **sqlpoolservername**. The server is in an Azure resource group named **myResourceGroup**.
+To pause a database, use the [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example pauses a SQL pool named `mySampleDataWarehouse` hosted on a server named **sqlpoolservername**. The server is in an Azure resource group named **myResourceGroup**.
```powershell Suspend-AzSqlDatabase -ResourceGroupName "myResourceGroup" ` -ServerName "sqlpoolservername" -DatabaseName "mySampleDataWarehouse" ```
-The following example retrieves the database into the $database object. It then pipes the object to [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The results are stored in the object resultDatabase. The final command shows the results.
+The following example retrieves the database into the `$database` object. It then pipes the object to [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The results are stored in the object `$resultDatabase`. The final command shows the results.
```powershell $database = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
$resultDatabase
## Resume compute
-To start a database, use the [Resume-AzSqlDatabase](/powershell/module/az.sql/resume-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example starts a database named **mySampleDataWarehouse** hosted on a server named **sqlpoolservername**. The server is in an Azure resource group named **myResourceGroup**.
+To start a database, use the [Resume-AzSqlDatabase](/powershell/module/az.sql/resume-azsqldatabase?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example starts a database named `mySampleDataWarehouse` hosted on a server named **sqlpoolservername**. The server is in an Azure resource group named **myResourceGroup**.
```powershell Resume-AzSqlDatabase -ResourceGroupName "myResourceGroup" ` -ServerName "sqlpoolservername" -DatabaseName "mySampleDataWarehouse" ```
-The next example retrieves the database into the $database object. It then pipes the object to [Resume-AzSqlDatabase](/powershell/module/az.sql/resume-azsqldatabase?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and stores the results in $resultDatabase. The final command shows the results.
+The next example retrieves the database into the `$database` object. It then pipes the object to [Resume-AzSqlDatabase](/powershell/module/az.sql/resume-azsqldatabase?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and stores the results in `$resultDatabase`. The final command shows the results.
```powershell $database = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
You are being charged for data warehouse units and data stored your dedicated SQ
Follow these steps to clean up resources as you desire.
-1. Sign in to the [Azure portal](https://portal.azure.com), and click on your SQL pool.
+1. Sign in to the [Azure portal](https://portal.azure.com), and select your SQL pool.
- ![Clean up resources](./media/load-data-from-azure-blob-storage-using-polybase/clean-up-resources.png)
+ :::image type="content" source="./media/load-data-from-azure-blob-storage-using-polybase/clean-up-resources.png" alt-text="Clean up resources.":::
-2. To pause compute, click the **Pause** button. When the SQL pool is paused, you see a **Start** button. To resume compute, click **Start**.
+1. To pause compute, select the **Pause** button. When the SQL pool is paused, you see a **Resume** button. To resume compute, select **Resume**.
-3. To remove the SQL pool so you are not charged for compute or storage, click **Delete**.
+1. To remove the SQL pool so you are not charged for compute or storage, select **Delete**.
-4. To remove the SQL server you created, click **sqlpoolservername.database.windows.net**, and then click **Delete**. Be careful with this deletion, since deleting the server also deletes all databases assigned to the server.
+1. To remove the SQL server you created, select `sqlpoolservername.database.windows.net`, and then select **Delete**. Be careful with this deletion, since deleting the server also deletes all databases assigned to the server.
-5. To remove the resource group, click **myResourceGroup**, and then click **Delete resource group**.
+1. To remove the resource group, select **myResourceGroup**, and then select **Delete resource group**.
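The same cleanup can be scripted. The following is a rough PowerShell sketch using the quickstart's resource names; the delete operations are irreversible, so confirm the names before running them.

```powershell
# Pause compute if you only want to stop compute charges but keep the data.
Suspend-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "sqlpoolservername" -DatabaseName "mySampleDataWarehouse"

# Delete the dedicated SQL pool (formerly SQL DW) to stop compute and storage charges.
Remove-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "sqlpoolservername" -DatabaseName "mySampleDataWarehouse"

# Delete the logical server. This also deletes every database assigned to the server.
Remove-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "sqlpoolservername"

# Delete the entire resource group and all resources it contains.
Remove-AzResourceGroup -Name "myResourceGroup"
```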
## Next steps
-To learn more about SQL pool, continue to the [Load data into dedicated SQL pool (formerly SQL DW)](./load-data-from-azure-blob-storage-using-copy.md) article. For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.
+- To learn more about SQL pool, continue to the [Load data into dedicated SQL pool (formerly SQL DW)](./load-data-from-azure-blob-storage-using-copy.md) article. For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.
+
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
synapse-analytics Pause And Resume Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-workspace-powershell.md
+
+ Title: "Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse Workspace with Azure PowerShell"
+description: You can use Azure PowerShell to pause and resume dedicated SQL pool compute resources in an Azure Synapse Workspace.
+++ Last updated : 01/05/2023++++
+ - azure-synapse
++
+# Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse Workspace with Azure PowerShell
+
+You can use Azure PowerShell to pause and resume compute resources for a dedicated SQL pool in an Azure Synapse Workspace.
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+> [!NOTE]
+> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool (formerly SQL DW), see [Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell](pause-and-resume-compute-powershell.md).
+> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+
+## Before you begin
++
+This quickstart assumes you already have a dedicated SQL pool in a Synapse Workspace that you can pause and resume. If you need to create one, you can use [Create and Connect - portal](create-data-warehouse-portal.md) to create a dedicated SQL pool in a Synapse Workspace called `mySampleDataWarehouse`.
+
+## Sign in to Azure
+
+Sign in to your Azure subscription using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) command and follow the on-screen directions.
+
+```powershell
+Connect-AzAccount
+```
+
+To see which subscription you are using, run [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+
+```powershell
+Get-AzSubscription
+```
+
+If you need to use a different subscription than the default, run [Set-AzContext](/powershell/module/az.accounts/set-azcontext?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+
+```powershell
+Set-AzContext -SubscriptionName "MySubscription"
+```
+
+## Look up dedicated SQL pool information
+
+Locate the pool name, workspace name, and resource group for the dedicated SQL pool you plan to pause and resume.
+
+Follow these steps to find location information for your dedicated SQL pool in the Azure Synapse Workspace:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select **Azure Synapse Analytics** in the menu of the Azure portal, or search for **Azure Synapse Analytics** in the search bar.
+1. Select `mySampleDataWarehouse` from the **Azure Synapse Analytics** page. The SQL pool opens.
+
+ :::image type="content" source="././media/pause-and-resume-compute-portal/compute-online.png" alt-text="Screenshot of the Azure portal indicating that the dedicated SQL pool compute is online.":::
+
+1. Remember the resource group name, dedicated SQL pool name, and workspace name.
+
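+If you prefer to stay in PowerShell, you can look up the same information with the `Az.Synapse` cmdlets. The following is a minimal sketch that assumes the resource names used in this quickstart (`myResourceGroup`, `synapseworkspacename`, and `mySampleDataWarehouse`); substitute your own values.
+
+```powershell
+# List the Synapse workspaces in the resource group to find the workspace name.
+Get-AzSynapseWorkspace -ResourceGroupName "myResourceGroup" |
+    Select-Object Name, Location
+
+# Confirm the dedicated SQL pool and check its current status.
+Get-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+    -WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse" |
+    Select-Object Name, Status
+```
+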
+## Pause compute
+
+To save costs, you can pause and resume compute resources on-demand. For example, if you are not using the pool during the night and on weekends, you can pause it during those times, and resume it during the day.
+
+> [!NOTE]
+> There is no charge for compute resources while the pool is paused. However, you continue to be charged for storage.
+
+To pause a pool, use the [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example pauses a SQL pool named `mySampleDataWarehouse` hosted in a workspace named `synapseworkspacename`. The workspace is in an Azure resource group named **myResourceGroup**.
+
+```powershell
+Suspend-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+```
+
+The following example retrieves the pool into the `$pool` object. It then pipes the object to [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The results are stored in the object `$resultPool`. The final command shows the results.
+
+```powershell
+$pool = Get-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+$resultPool = $pool | Suspend-AzSynapseSqlPool
+$resultPool
+```
+
+The **Status** output of the resulting `$resultPool` object contains the new status of the pool, **Paused**.
+
+## Resume compute
+
+To start a pool, use the [Resume-AzSynapseSqlPool](/powershell/module/az.synapse/resume-azsynapsesqlpool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example starts a pool named `mySampleDataWarehouse` hosted in a workspace named `synapseworkspacename`. The workspace is in an Azure resource group named **myResourceGroup**.
+
+```powershell
+Resume-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+```
+
+The next example retrieves the pool into the `$pool` object. It then pipes the object to [Resume-AzSynapseSqlPool](/powershell/module/az.synapse/resume-azsynapsesqlpool?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and stores the results in `$resultPool`. The final command shows the results.
+
+```powershell
+$pool = Get-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+$resultPool = $pool | Resume-AzSynapseSqlPool
+$resultPool
+```
+
+The **Status** output of the resulting `$resultPool` object contains the new status of the pool, **Online**.
+
+## Clean up resources
+
+You are being charged for data warehouse units and data stored in your dedicated SQL pool. These compute and storage resources are billed separately.
+
+- If you want to keep the data in storage, pause compute.
+- If you want to remove future charges, you can delete the dedicated SQL pool.
+
+Follow these steps to clean up resources as you desire.
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and select your SQL pool.
+
+1. To pause compute, select the **Pause** button. When the SQL pool is paused, you see a **Resume** button. To resume compute, select **Resume**.
+
+1. To remove the dedicated SQL pool so you are not charged for compute or storage, select **Delete**.
+
+1. To remove the resource group, select **myResourceGroup**, and then select **Delete resource group**.
+
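+The same cleanup can be scripted. The following is a rough PowerShell sketch using this quickstart's resource names; the delete operations are irreversible, so confirm the names before running them.
+
+```powershell
+# Pause compute if you only want to stop compute charges but keep the data.
+Suspend-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+
+# Delete the dedicated SQL pool to stop compute and storage charges for the pool.
+Remove-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
+-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
+
+# Delete the entire resource group, including the workspace and all other resources in it.
+Remove-AzResourceGroup -Name "myResourceGroup"
+```
+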
+## Next steps
+
+- To get started with Azure Synapse Analytics, see [Get Started with Azure Synapse Analytics](../get-started.md).
+- To learn more about dedicated SQL pools in Azure Synapse Analytics, see [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](sql-data-warehouse-overview-what-is.md)
+- To learn more about SQL pool, continue to the [Load data into dedicated SQL pool (formerly SQL DW)](./load-data-from-azure-blob-storage-using-copy.md) article. For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
The following section describes workload-based heuristics you may find in the Az
Currently Advisor will only show at most four replicated table candidates at once with clustered columnstore indexes prioritizing the highest activity. > [!IMPORTANT]
-> The replicated table recommendation is not full proof and does not take into account data movement operations. We are working on adding this as a heuristic but in the meantime, you should always validate your workload after applying the recommendation. To learn more about replicated tables, visit the following [documentation](design-guidance-for-replicated-tables.md#what-is-a-replicated-table).
+> The replicated table recommendation is not foolproof and does not take data movement operations into account. We are working on adding this as a heuristic, but in the meantime, you should always validate your workload after applying the recommendation. To learn more about replicated tables, visit the following [documentation](design-guidance-for-replicated-tables.md#what-is-a-replicated-table).
## Adaptive (Gen2) cache utilization
Query performance can degrade when there is high tempdb contention. Tempdb cont
## Data loading misconfiguration
-You should always load data from a storage account in the same region as your dedicated SQL pool to minimize latency. Use the [COPY statement for high throughput data ingestion](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) and split your staged files in your storage account to maximize throughput. If you can't use the COPY statement, you can use the SqlBulkCopy API or bcp with a high batch size for better throughput. See [Best practices for data loading](../sql/data-loading-best-practices.md) for additional data loading guidance.
+You should always load data from a storage account in the same region as your dedicated SQL pool to minimize latency. Use the [COPY statement for high throughput data ingestion](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) and split your staged files in your storage account to maximize throughput. If you can't use the COPY statement, you can use the SqlBulkCopy API or bcp with a high batch size for better throughput. See [Best practices for data loading](../sql/data-loading-best-practices.md) for additional data loading guidance.
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
The status of your dedicated SQL pool (formerly SQL DW) will be shown here. If t
![Service Available](./media/sql-data-warehouse-troubleshoot-connectivity/resource-health.png)
-For more information, see [Resource Health](/articles/service-health/resource-health-overview.md).
+For more information, see [Resource Health](/azure/service-health/resource-health-overview).
## Check for paused or scaling operation
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
You can access publicly available files placed on Azure storage accounts that [a
#### Cross-tenant scenarios
-In cases when Azure Storage is in a different tenant from the Synapse serverless SQL pool, authorization via **Service Principal** is the recommended method. **SAS** authorization is also possible, while **Managed Identity** is not supported.
+In cases when Azure Storage is in a different tenant from the Synapse serverless SQL pool, authorization via **Service Principal** is the recommended method. **SAS** authorization is also possible, while **Managed Identity** is not supported.
+> [!NOTE]
+> If Azure Storage is protected by a firewall, **Service Principal** authorization is not supported.
### Supported authorization types for databases users
synapse-analytics Develop Tables Cetas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-cetas.md
WITH (
AS SELECT decennialTime, stateName, SUM(population) AS population FROM
- OPENROWSET(BULK 'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=*/*.parquet',
+ OPENROWSET(BULK 'https://azureopendatastorage.dfs.core.windows.net/censusdatacontainer/release/us_population_county/year=*/*.parquet',
FORMAT='PARQUET') AS [r] GROUP BY decennialTime, stateName GO
GO
SELECT * FROM population_by_year_state ```
+### General example
+
+This example shows template code for writing CETAS with a view as the source and a managed identity for authentication.
+
+```sql
+CREATE DATABASE [<mydatabase>];
+GO
+
+USE [<mydatabase>];
+GO
+
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
+
+CREATE DATABASE SCOPED CREDENTIAL [WorkspaceIdentity] WITH IDENTITY = 'Managed Identity';
+GO
+
+CREATE EXTERNAL FILE FORMAT [ParquetFF] WITH (
+ FORMAT_TYPE = PARQUET,
+ DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
+);
+GO
+
+CREATE EXTERNAL DATA SOURCE [SynapseSQLwriteable] WITH (
+    LOCATION = 'https://<mystorageaccount>.dfs.core.windows.net/<mycontainer>/<mybaseoutputfolderpath>',
+ CREDENTIAL = [WorkspaceIdentity]
+);
+GO
+
+CREATE EXTERNAL TABLE [dbo].[<myexternaltable>] WITH (
+ LOCATION = '<myoutputsubfolder>/',
+ DATA_SOURCE = [SynapseSQLwriteable],
+ FILE_FORMAT = [ParquetFF]
+) AS
+SELECT * FROM [<myview>];
+GO
+```
+ ## Supported data types CETAS can be used to store result sets with the following SQL data types:
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
Synapse SQL pools enable you to use built-in security features to secure your da
| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER`, `IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in variable that can be used in the query). | | **Transparent Data Encryption (TDE)** | [Yes](/azure/azure-sql/database/transparent-data-encryption-tde-overview) | No, Transparent Data Encryption is not supported. | | **Data Discovery & Classification** | [Yes](/azure/azure-sql/database/data-discovery-and-classification-overview) | No, Data Discovery & Classification is not supported. |
-| **Vulnerability Assessment** | [Yes](/azure/azure-sql/database/sql-vulnerability-assessment) | No, Vulnerability Assessment is not available. |
+| **Vulnerability Assessment** | [Yes](/sql/relational-databases/security/sql-vulnerability-assessment) | No, Vulnerability Assessment is not available. |
| **Advanced Threat Protection** | [Yes](/azure/azure-sql/database/threat-detection-overview) | No, Advanced Threat Protection is not supported. | | **Auditing** | [Yes](/azure/azure-sql/database/auditing-overview) | Yes, [auditing is supported](/azure/azure-sql/database/auditing-overview) in serverless SQL pools. | | **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes, the firewall rules can be set on the serverless SQL endpoint. |
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If you have a shared access signature key that you should use to access files, m
### Can't read, list, or access files in Azure Data Lake Storage
-If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the **Blob Data Reader** permission, or permissions to **List** and **Read** [access control lists (ACL) in ADLS](/storage/blobs/data-lake-storage-access-control-model). For more information, see [Query fails because file cannot be opened](#query-fails-because-file-cant-be-opened).
+If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the **Blob Data Reader** permission, or permissions to **List** and **Read** [access control lists (ACL) in ADLS](/azure/storage/blobs/data-lake-storage-access-control-model). For more information, see [Query fails because file cannot be opened](#query-fails-because-file-cant-be-opened).
If you access storage by using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has the **Data Reader** or **Contributor role** or specific ACL permissions. If you used a [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), make sure that it has `rl` permission and that it hasn't expired.
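If the problem is a missing role assignment, one way to grant the **Storage Blob Data Reader** role is with Azure PowerShell. The following is a minimal sketch; the object ID, subscription, resource group, and storage account values are placeholders you need to replace with your own.

```powershell
# Grant the Storage Blob Data Reader role on the storage account to an Azure AD identity.
# Replace every placeholder value before running this.
New-AzRoleAssignment `
    -ObjectId "<azure-ad-object-id>" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```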
This error might indicate that some internal process issue happened in serverles
Describe anything that might be unusual compared to the regular workload. For example, perhaps there was a large number of concurrent requests or a special workload or query started executing before this error happened.
+### Wildcard expansion timed out
+
+As described in the [Query folders and multiple files](../sql/query-folders-multiple-csv-files.md) section, serverless SQL pool supports reading multiple files and folders by using wildcards, with a maximum of 10 wildcards per query. This functionality comes at a cost: the serverless pool has to list every file that can match the wildcard, which adds latency, and that latency grows with the number of files you query. In this case, you can run into the following error:
+
+```
+"Wildcard expansion timed out after X seconds."
+```
+
+There are several mitigation steps you can take to avoid this:
+- Apply best practices described in [Best Practices Serverless SQL Pool](../sql/best-practices-serverless-sql-pool.md).
+- Try to reduce the number of files you are querying by compacting them into larger files. Try to keep your file sizes above 100 MB.
+- Make sure that filters over partitioning columns are used wherever possible.
+- If you are using delta file format, use the optimize write feature in Spark. This can improve the performance of queries by reducing the amount of data that needs to be read and processed. How to use optimize write is described in [Using optimize write on Apache Spark](../spark/optimize-write-for-apache-spark.md).
+- Use [dynamic SQL](../sql/develop-dynamic-sql.md) to avoid some of the top-level wildcards by effectively hardcoding the implicit filters over partitioning columns.
+ ## Configuration Serverless SQL pools enable you to use T-SQL to configure database objects. There are some constraints:
synapse-analytics How To Monitor Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-database.md
In this section, we're going to walk through how you can set up alerts for your
1. In this example, let's use the **Event Hubs** action type, so we'll need to input the **subscription name**, **Event Hub namespace**, and select an **Event Hub name**. Then click on **OK**.
- a. If you donΓÇÖt have an Event Hub created, refer to the document here to create one: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/event-hubs/event-hub-create.md?context=/azure/synapse-analytics/context/context)
+    a. If you don't have an event hub created, refer to [Create Event Hub](/rest/api/eventhub/create-event-hub) to create one.
:::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group-2.png" alt-text="Screenshot that shows how to create an action group and specify an action type when an alert rule's conditions are met.":::
If you're using a database other than an Azure SQL database, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics How To Monitor Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-server-2022.md
In this section, we're going to walk through how you can set up alerts for your
1. In this example, let's use the **Event Hubs** action type, so we'll need to input the **subscription name**, **Event Hub namespace**, and select an **Event Hub name**. Then click on **OK**.
- a. If you donΓÇÖt have an Event Hub created, refer to the document here to create one: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/event-hubs/event-hub-create.md?context=/azure/synapse-analytics/context/context)
+    a. If you don't have an event hub created, refer to [Create Event Hub](/rest/api/eventhub/create-event-hub) to create one.
:::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group-2.png" alt-text="Screenshot that shows how to create an action group and specify an action type when an alert rule's conditions are met.":::
If you're using a database other than a SQL Server 2022 instance, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
+* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
synapse-analytics Troubleshoot Sql Database Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-database-failover.md
You must stop Synapse Link manually and configure Synapse Link according to the
:::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png" alt-text="A screenshot of Synapse Studio. The Manage hub is open. In the list of Linked services, the AzureSqlDatabase1 linked service is highlighted." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png"::: 1. You must reset the linked service connection string based on the new primary server after failover so that Synapse Link can connect to the new primary logical server's database. There are two options:
- * Use [the auto-failover group read/write listener endpoint](/sql/azure-sql/database/auto-failover-group-configure-sql-db#locate-listener-endpoint) and use the Synapse workspace's managed identity (SMI) to connect your Synapse workspace to the source database. Because of Read/Write listener endpoint that automatically maps to the new primary server after failover, so you only need to set it once. If failover occurs later, it will automatically use the fully-qualified domain name (FQDN) of the listener endpoint. Note that you still need to take action on every failover to update the Resource ID and Managed Identity ID for the new primary (see next step).
+    * Use [the auto-failover group read/write listener endpoint](/azure/azure-sql/database/auto-failover-group-configure-sql-db#locate-listener-endpoint) and use the Synapse workspace's managed identity (SMI) to connect your Synapse workspace to the source database. Because the read/write listener endpoint automatically maps to the new primary server after failover, you only need to set it once. If failover occurs later, it will automatically use the fully-qualified domain name (FQDN) of the listener endpoint. Note that you still need to take action on every failover to update the Resource ID and Managed Identity ID for the new primary (see next step).
* After each failover, edit the linked service **Connection string** with the **Server name**, **Database name**, and authentication information for the new primary server. You can use a managed identity or SQL Authentication. The authentication account used to connect to the database, whether it be a managed identity or SQL Authenticated login to the Azure SQL Database, must have at least the CONTROL permission inside the database to perform the actions necessary for the linked service. The db_owner permission is similar to CONTROL.
synapse-analytics Troubleshoot Sql Snapshot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-snapshot-issues.md
When a snapshot has not completed for a given table, there are two possible case
### Step 2: Snapshot retry
-If errors have forced the snapshot to retry, find more information in the [changefeed.change_feed_errors](/sql/relational-databases/system-tables/changefeed-change-feed-errors-transact-sql) dynamic management view. Run the following T-SQL command in the source database:
+If errors have forced the snapshot to retry, find more information in the [sys.dm_change_feed_errors](/sql/relational-databases/system-dynamic-management-views/sys-dm-change-feed-errors) dynamic management view. Run the following T-SQL command in the source database:
```sql SELECT * FROM sys.dm_change_feed_errors;
For example:
- [Get started with Azure Synapse Link for Azure SQL Database](../connect-synapse-link-sql-database.md) - [Get started with Azure Synapse Link for SQL Server 2022](../connect-synapse-link-sql-server-2022.md)
+ - [Known limitations and issues with Azure Synapse Link for SQL](../synapse-link-for-sql-known-issues.md)
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Title: Previous monthly updates in Azure Synapse Analytics description: Archive of the new features and documentation improvements for Azure Synapse Analytics-- Previously updated : 09/09/2022+++ Last updated : 01/06/2023
-# Previous monthly updates in Azure Synapse Analytics
+# What's New in Azure Synapse Analytics Archive
This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## Generally available features
+
+The following table lists a past history of the features of Azure Synapse Analytics that have transitioned from preview to general availability (GA).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).|
+| June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). |
+| May 2022 | **Azure Synapse Data Explorer connector for Power Automate, Logic Apps, and Power Apps** | The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). |
+| April 2022 | **Cross-subscription restore for Azure Synapse SQL** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Blog: Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff) |
+| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).|
+| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).|
+| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator RBAC (role-based access control) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).|
+| March 2022 | **Flowlets** | Flowlets help you design portions of new data flow logic, or to extract portions of an existing data flow, and save them as a separate artifact inside your Synapse workspace. Then, you can reuse these flowlets inside other data flows. To learn more, review the [Flowlets GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md). |
+| March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Azure Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).|
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. |
+| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). |
+| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
+| October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).|
+
+## Community
+
+This section is an archive of Azure Synapse Analytics community opportunities and the [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) from Microsoft.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| May 2022 | **Azure Synapse Influencer program** | Sign up for our free [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse-users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencer Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). |
+| March 2022 | **Azure Synapse Analytics and Microsoft MVP YouTube video series** | A joint activity with the Azure Synapse product team and the Microsoft MVP community, a new [YouTube MVP Video Series about the Azure Synapse features](https://www.youtube.com/playlist?list=PLzUAjXZBFU9MEK2trKw_PGk4o4XrOzw4H) has launched. See more at the [Azure Synapse Analytics YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).|
+
+## Apache Spark for Azure Synapse Analytics
+
+This section is an archive of features and capabilities of [Apache Spark for Azure Synapse Analytics](spark/apache-spark-overview.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| May 2022 | **Azure Synapse dedicated SQL pool connector for Apache Spark now available in Python** | Previously, the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md) was only available using Scala. Now, [the dedicated SQL pool connector for Apache Spark can be used with Python on Spark 3](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_6). |
+| May 2022 | **Manage Azure Synapse Apache Spark configuration** | With the new [Apache Spark configurations](./spark/apache-spark-azure-create-spark-configuration.md) feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. |
+| April 2022 | **Apache Spark 3.2 for Synapse Analytics** | Apache Spark 3.2 for Synapse Analytics with preview availability. Review the [official Spark 3.2 release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). |
+| April 2022 | **Parameterization for Spark job definition** | You can now assign parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters for the Spark job definition activity. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab). |
+| April 2022 | **Apache Spark notebook snapshot** | You can access a snapshot of the Notebook when there's a Pipeline Notebook run failure or when there's a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). |
+| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). |
+| March 2022 | **Performance optimization for Synapse Spark dedicated SQL pool connector** | New improvements to the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) reduce data movement and leverage `COPY INTO`. Performance tests indicated at least ~5x improvement over the previous version. No action is required from the user to leverage these enhancements. For more information, see [Blog: Synapse Spark Dedicated SQL Pool (DW) Connector: Performance Improvements](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_10).|
+| March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
+| March 2022 | **Apache Spark in Azure Synapse Analytics Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more on this preview feature, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12).|
+
+## Data integration
+
+This section is an archive of features and capabilities of Azure Synapse Analytics data integration. Learn how to [Load data into Azure Synapse Analytics using Azure Data Factory (ADF) or a Synapse pipeline](../data-factory/load-azure-sql-data-warehouse.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).|
+| June 2022 | **Fuzzy join option in Join Transformation** | Use fuzzy matching with a similarity threshold score slider has been added to the [Join transformation in Mapping Data Flows](../data-factory/data-flow-join.md). |
+| June 2022 | **Map Data tool GA** | We're excited to announce that the [Map Data tool](./database-designer/overview-map-data.md) is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. |
+| June 2022 | **Rerun pipeline with new parameters** | You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. To learn more, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).|
+| June 2022 | **User Defined Functions GA** | [User defined functions (UDFs) in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628) are now generally available (GA). |
+| May 2022 | **Export pipeline monitoring as a CSV** | The ability to [export pipeline monitoring to CSV and other monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531) have been introduced to ADF. |
+| May 2022 | **Automatic incremental source data loading from PostgreSQL and MySQL** | Automatic [incremental source data loading from PostgreSQL and MySQL](../data-factory/tutorial-incremental-copy-overview.md) to Synapse SQL and Azure Database is now natively available in ADF. |
+| May 2022 | **Assert transformation error handling** | Error handling has now been added to sinks following an [assert transformation in mapping data flow](../data-factory/data-flow-assert.md). You can now choose whether to output the failed rows to the selected sink or to a separate file. |
+| May 2022 | **Mapping data flows projection editing** | In mapping data flows, you can now [update source projection column names and column types](../data-factory/data-flow-source.md). |
+| April 2022 | **Dataverse connector for Synapse Data Flows** | Dataverse is now a source and sink connector to Synapse Data Flows. You can [Copy and transform data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory).|
+| April 2022 | **Configurable Synapse Pipelines Web activity response timeout** | With the response timeout property `httpRequestTimeout`, you can [define a timeout for the HTTP request up to 10 minutes](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307). Web activities work exceptionally well with APIs that follow [the asynchronous request-reply pattern](/azure/architecture/patterns/async-request-reply), a suggested approach for building scalable web APIs/services. |
+| March 2022 | **sFTP connector for Synapse data flows** | A native sftp connector in Synapse data flows is supported to read and write data from sFTP using the visual low-code data flows interface in Synapse. To learn more, see [Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-sftp.md).|
+| March 2022 | **Data flow improvements to Data Preview** | Review features added to the [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
+| March 2022 | **Pipeline script activity** | You can now [Transform data by using the Script activity](../data-factory/transform-data-using-script.md) to invoke SQL commands to perform both DDL and DML. |
+| December 2021 | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
+
+## Database Templates & Database Designer
+
+This section is an archive of features and capabilities of [database templates](./database-designer/overview-database-templates.md) and [the database designer](database-designer/quick-start-create-lake-database.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).|
+| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).|
+| April 2022 | **Clone lake database** | In Synapse Studio, you can now clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md). |
+| April 2022 | **Use wildcards to specify custom folder hierarchies** | Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. You can now use wildcards to specify custom folder hierarchies. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md). |
+| January 2022 | **New database templates** | Learn more about new industry-specific [Automotive, Genomics, Manufacturing, and Pharmaceuticals templates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) and get started with [database templates](./database-designer/overview-database-templates.md) in the Synapse Studio gallery. |
+
+## Developer experience
+
+This section is an archive of quality of life and feature improvements for [developers in Azure Synapse Analytics](sql/develop-overview.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| May 2022 | **Updated Azure Synapse Analyzer Report** | Learn about the new features in [version 2.0 of the Synapse Analyzer report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/updated-synapse-analyzer-report-workload-management-and-ability/ba-p/3580269).|
+| April 2022 | **Azure Synapse Analyzer Report** | The [Azure Synapse Analyzer Report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analyzer-report-to-monitor-and-improve-azure/ba-p/3276960) helps you identify common issues that may be present in your database that can lead to performance issues.|
+| April 2022 | **Reference unpublished notebooks** | Now, when using %run notebooks, you can [enable 'unpublished notebook reference'](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook), which will allow you to reference unpublished notebooks. When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). |
+| March 2022 | **Code cells with exception to show standard output**| Now in Synapse notebooks, both standard output and exception messages are shown when a code statement fails for Python and Scala languages. For examples, see [Synapse notebooks: Code cells with exception to show standard output](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).|
+| March 2022 | **Partial output is available for running notebook code cells** | Now in Synapse notebooks, you can see anything you write (with `println` commands, for example) as the cell executes, instead of waiting until it ends. For examples, see [Synapse notebooks: Partial output is available for running notebook code cells ](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).|
+| March 2022 | **Dynamically control your Spark session configuration with pipeline parameters** | Now in Synapse notebooks, you can use pipeline parameters to configure the session with the notebook %%configure magic. For examples, see [Synapse notebooks: Dynamically control your Spark session configuration with pipeline parameters](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_2).|
+| March 2022 | **Reuse and manage notebook sessions** | Now in Synapse notebooks, it's easy to reuse an active session conveniently without having to start a new one and to see and manage your active sessions in the **Active sessions** list. To view your sessions, select the 3 dots in the notebook and select **Manage sessions.** For examples, see [Synapse notebooks: Reuse and manage notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3).|
+| March 2022 | **Support for Python logging** | Now in Synapse notebooks, anything written through the Python logging module is captured, in addition to the driver logs. For examples, see [Synapse notebooks: Support for Python logging](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4).|
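
As a rough illustration of the kind of cell output this now captures (the logger name and messages below are placeholders, not part of the announcement), a Synapse notebook cell using the standard Python logging module might look like this:

```python
import logging

# Messages written through the standard logging module in a notebook cell are
# now captured alongside the driver logs; the logger name is an arbitrary placeholder.
logger = logging.getLogger("sample_notebook")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("Starting the data load step")
logger.warning("Row count was lower than expected")
```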
+
+## Machine Learning
+
+This section is an archive of features and improvements to machine learning models in Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 also now includes support for the most common deep learning libraries like TensorFlow and PyTorch. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
+| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
+
+## Samples and guidance
+
+This section is an archive of guidance and sample project resources for Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
+| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). |
+| June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. |
+| June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). |
+| June 2022 | **Migration guides for IBM Netezza** | A new Microsoft-authored migration guide for IBM Netezza to Azure Synapse Analytics is now available. [Design and performance for IBM Netezza migrations](migration-guides/netezz). |
+
+## Security
+
+This section is an archive of security features and settings in Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).|
+| March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/azure/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.|
+| March 2022 | **Azure Synapse Analytics now supports Azure Active Directory (Azure AD) only authentication** | You can now use Azure Active Directory authentication to centrally manage access to all Azure Synapse resources, including SQL pools. You can [disable local authentication](sql/active-directory-authentication.md#disable-local-authentication) upon creation or after a workspace is created through the Azure portal.|
+| December 2021 | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
+| December 2021 | **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now [browse and secure an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder](how-to-access-container-with-access-control-lists.md) in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio.|
+| December 2021 | **TLS 1.2 enforced for new Synapse Workspaces** | Starting in December 2021, [a requirement for TLS 1.2](security/connectivity-settings.md#minimal-tls-version) has been implemented for new Synapse Workspaces only. |
+
+## Azure Synapse Data Explorer
+
+Azure Data Explorer (ADX) is a fast and highly scalable data exploration service for log and telemetry data. It offers ingestion from Event Hubs, IoT Hubs, blobs written to blob containers, and Azure Stream Analytics jobs. This section is an archive of features and capabilities of [the Azure Synapse Data Explorer](data-explorer/data-explorer-overview.md) and [the Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). Read more about [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer? (Preview)](data-explorer/data-explorer-compare.md)
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. |
+| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer for popular use cases such as Logs Data, Metrics Data, IoT data, and Basic big data examples. |
+| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). |
+| June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. |
+| May 2022 | **Synapse Data Explorer live query in Excel** | Using the [new Data Explorer web experience Open in Excel feature](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500), you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members. You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To create an Excel Workbook connected to Synapse Data Explorer, [start by running a query in the Web experience](https://aka.ms/adx.help.livequery). |
+| May 2022 | **Use Managed Identities for external SQL Server tables** | With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now [use managed identities](/azure/data-explorer/managed-identities-overview) instead of entering in your credentials. To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).|
+| May 2022 | **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps** | New Azure Data Explorer connectors for Power Automate are generally available (GA). To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow), the [Microsoft Logic App and Azure Data Explorer](/azure/data-explorer/kusto/tools/logicapps), and the ability to [Create Power Apps application to query data in Azure Data Explorer](/azure/data-explorer/power-apps-connector). |
+| May 2022 | **Dynamic events routing from event hub to multiple databases** | We now support [routing events data from Azure Event Hub/Azure IoT Hub/Azure Event Grid to multiple databases](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_15) hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing). |
+| May 2022 | **Configure a database using a KQL inline script as part of JSON ARM deployment template** | Running a [Kusto Query Language (KQL) script to configure your database](/azure/data-explorer/database-script) can now be done using a script provided inline as a parameter to a JSON ARM template. |
+
+## Azure Synapse Link
+
+Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Azure Cosmos DB](../cosmos-db/synapse-link.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section is an archive of news about the Azure Synapse Link feature.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| May 2022 | **Azure Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).|
+
+## Synapse SQL
+
+This section is an archive of improvements and features in SQL pools in Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| June 2022 | **Result set size limit increase** | The [maximum size of query result sets](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints) in serverless SQL pools has been increased from 200 GB to 400 GB. |
+| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It's no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. |
+| April 2022 | **Cross-subscription restore for Azure Synapse SQL GA** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Restore a dedicated SQL pool to a different subscription](sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff)|
+| April 2022 | **Recover SQL pool from dropped server or workspace** | With the PowerShell Restore cmdlets in `Az.Sql` and `Az.Synapse` modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, see [Restore a dedicated SQL pool from a deleted Azure Synapse workspace](backuprestore/restore-sql-pool-from-deleted-workspace.md) or [Restore a standalone dedicated SQL pool (formerly SQL DW) from a deleted server](backuprestore/restore-sql-pool-from-deleted-workspace.md), depending on your scenario. |
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.|
+| March 2022 | **Parallel execution for CETAS** | Better performance for [CREATE TABLE AS SELECT](sql/develop-tables-cetas.md) (CETAS) and subsequent SELECT statements is now made possible by the use of parallel execution plans. For examples, see [Better performance for CETAS and subsequent SELECTs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).|
+
+## Previous monthly updates in Azure Synapse Analytics
+
The following sections preserve the previous format of monthly news updates for Azure Synapse Analytics.
+ ## June 2022 update
-## General
+### General
* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. * **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
-## SQL
+
+### SQL
**Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in Serverless SQL pools has been increased from 200 GB to 400 GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints).
-## Synapse data explorer
+### Synapse data explorer
* **Web Explorer new homepage** - The new Synapse Web Explorer homepage makes it even easier to get started with Synapse Web Explorer. The [Web Explorer homepage](https://dataexplorer.azure.com/home) now includes the following sections:
This article describes previous month updates to Azure Synapse Analytics. For th
* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone).
-## Data integration
+### Data integration
* **Fuzzy Join option in Join Transformation** - Fuzzy matching with a sliding similarity score option has been added to the Join transformation in Mapping Data Flows. You can create inner and outer joins on data values that are similar rather than exact matches! Previously, you would have had to use an exact match. The sliding scale value goes from 60% to 100%, making it easy to adjust the similarity threshold of the match. For learn more about fuzzy joins, read [Join transformation in mapping data flow](../data-factory/data-flow-join.md).
This article describes previous month updates to Azure Synapse Analytics. For th
* **User Defined Functions [Generally Available]** - We're excited to announce that user defined functions (UDFs) are now Generally Available. With user-defined functions, you can create customized expressions that can be reused across multiple mapping data flows. You no longer have to use the same string manipulation, math calculations, or other complex logic several times. User-defined functions will be grouped in libraries to help developers group common sets of functions. To learn more about user defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
-## Machine learning
+### Machine learning
**Distributed Deep Neural Network Training with Horovod and Petastorm [Public Preview]** - To simplify the process for creating and managing GPU-accelerated pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes.
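
To give a sense of the code this supports, the following is a minimal, generic Horovod-with-PyTorch initialization sketch, not the Synapse tutorial code; it assumes the GPU-accelerated pool's runtime provides the `horovod` and `torch` packages:

```python
import horovod.torch as hvd
import torch
import torch.nn as nn

# Initialize Horovod; on a GPU-accelerated pool each worker process trains a replica.
hvd.init()

# Pin each worker to a single GPU (assumes one GPU per worker process).
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across workers and start every worker from identical state.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```

Petastorm would typically supply the distributed data loading on top of Parquet data in the lake; consult the deep learning tutorials in the documentation for the supported end-to-end workflow.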
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Title: What's new? description: Learn about the new features and documentation improvements for Azure Synapse Analytics--- Previously updated : 12/06/2022+++ Last updated : 01/06/2023
This page is continuously updated with a recent review of what's new in [Azure Synapse Analytics](overview-what-is.md), and also what features are currently in preview. To follow the latest in Azure Synapse news and features, see the [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) and [companion videos on YouTube](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).
-For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous monthly updates in Azure Synapse Analytics](whats-new-archive.md).
+For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous updates in Azure Synapse Analytics](whats-new-archive.md).
## Features currently in preview
The following table lists the features of Azure Synapse Analytics that are curre
| **Feature** | **Learn more**| |:-- |:-- |
-| **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is currently in preview. Based on our testing using the 1 TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). |
+| **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is currently in preview. For more information, see the [Apache Spark 3.3 preview blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-runtime-for-apache-spark-3-3-is-now-in-public/ba-p/3686449). Based on our testing using the 1 TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). |
| **Apache Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).| | **Apache Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach more disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).| | **Apache Spark Optimized Write** | [Optimize Write is a Delta Lake on Azure Synapse](spark/optimize-write-for-apache-spark.md) feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data.| | **Apache Spark R language support** | Built-in [R support for Apache Spark](spark/apache-spark-r-language.md) is now in preview. |
-| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. |
+| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. For more news, see [Azure Synapse Data Explorer (preview)](#azure-synapse-data-explorer-preview).|
| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md).| | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). | | **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
The following table lists the features of Azure Synapse Analytics that have tran
| March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Azure Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).| | March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. | | March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). |
-| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
-| October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).|
+ ## Community
This section summarizes new Azure Synapse Analytics community opportunities and
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| December 2022 | **Azure Synapse MVP Corner** | November highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-november-2022/ba-p/3696939).|
| November 2022 | **Azure Synapse Influencer program** | The Azure Synapse Influencer program provides exclusive events and Q&A sessions like Ask the Experts with the Microsoft product team, where members can interact directly with product experts by asking any questions on various rotating topics. Get feedback from members of [Azure Synapse Analytics influencer community](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-influencer-program-passionate-about-azure-synapse/ba-p/3672906). | | October 2022 | **Azure Synapse MVP Corner** | October highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-october-2022/ba-p/3668048).| | September 2022 | **Azure Synapse MVP Corner** | September highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-september-2022/ba-p/3643960).|
This section summarizes recent new features and capabilities of [Apache Spark fo
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
-| November 2022 | **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is currently in preview. Based on our testing using the 1 TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). |
-| September 2022 | **New informative Livy error codes** | [More precise error codes](spark/apache-spark-handle-livy-error.md) describe the cause of failure and replaces the previous generic error codes. Previously, all errors in failing Spark jobs surfaced with a generic error code displaying LIVY_JOB_STATE_DEAD. |
+| January 2023 | **Improve Spark pool utilization with Synapse Genie** | The Synapse Genie Framework improves Spark pool utilization by executing multiple Synapse notebooks on the same Spark pool instance. Read more about this [metadata-driven utility written in Python](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/improve-spark-pool-utilization-with-synapse-genie/ba-p/3690428). |
+| November 2022 | **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is currently in preview. For more information, see the [Apache Spark 3.3 preview blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-runtime-for-apache-spark-3-3-is-now-in-public/ba-p/3686449). Based on our testing using the 1 TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). |
+| September 2022 | **New informative Livy error codes** | [More precise error codes](spark/apache-spark-handle-livy-error.md) describe the cause of failure and replace the previous generic error codes. Previously, all errors in failing Spark jobs surfaced with a generic error code displaying `LIVY_JOB_STATE_DEAD`. |
| September 2022 | **New query optimization techniques in Apache Spark for Azure Synapse Analytics** | Read the [findings from Microsoft's work](https://vldb.org/pvldb/vol15/p936-rajan.pdf) to gain considerable performance benefits across the board on the reference TPC-DS workload as well as a significant reduction in query plan generation time. | | August 2022 | **Apache Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).| | August 2022 | **Apache Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse preview feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more, see [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).|
This section summarizes recent new features and capabilities of Azure Synapse An
| March 2022 | **sFTP connector for Synapse data flows** | A native sftp connector in Synapse data flows is supported to read and write data from sFTP using the visual low-code data flows interface in Synapse. To learn more, see [Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-sftp.md).| | March 2022 | **Data flow improvements to Data Preview** | Review features added to the [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | | March 2022 | **Pipeline script activity** | You can now [Transform data by using the Script activity](../data-factory/transform-data-using-script.md) to invoke SQL commands to perform both DDL and DML. |
-| December 2021 | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
## Database Templates & Database Designer
This section summarizes recent new quality of life and feature improvements for
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| December 2022 | **MSSparkUtils is the Swiss Army knife inside Synapse Spark** | MSSparkUtils is a built-in package to help you easily perform common tasks called Microsoft Spark utilities, including the ability to [share results between notebooks](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/mssparkutils-is-the-swiss-army-knife-inside-synapse-spark/ba-p/3673355). |
| September 2022 | **Synapse CICD for publishing workspace artifacts** | Integrating Synapse Studio with a Source Control System such as [Azure DevOps Git](https://dev.azure.com/) or [GitHub](https://github.com/) has been shown as one of Synapse Studio's preferred features to collaborate and provide [source control for Azure Synapse](cicd/source-control.md). The Visual Studio marketplace has a [Synapse workspace deployment task](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) to automate publishing.| | July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython and it's now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).| | July 2022 | **Mssparkutils now has spark.stop() method** | A new API `mssparkutils.session.stop()` has been added to the mssparkutils package. This feature becomes handy when there are multiple sessions running against the same Spark pool. The new API is available for Scala and Python. To learn more, see [Stop an interactive session](spark/microsoft-spark-utilities.md#stop-an-interactive-session).|
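
As a hedged sketch of how the new API can be called inside a Synapse notebook (the explicit import below is one documented way to reference `mssparkutils`; in many notebooks it's already available without it):

```python
# Inside a Synapse notebook; mssparkutils is typically pre-loaded, so the import is optional.
from notebookutils import mssparkutils

# ... run the notebook workload ...

# Release the interactive session so the Spark pool capacity is freed for other
# notebook users instead of waiting for the idle timeout.
mssparkutils.session.stop()
```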
This section summarizes recent new quality of life and feature improvements for
## Machine Learning
-This section summarizes recent new features and improvements to using machine learning models in Azure Synapse Analytics.
+This section summarizes recent new features and improvements to machine learning models in Azure Synapse Analytics.
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
This section summarizes recent new features and improvements to using machine le
| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/mlflow/introduction/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/mlflow/autologging/).| | August 2022 | **SynapseML in Binder** | We know that Spark can be intimidating for first users but fear not because with the technology Binder, you can [explore and experiment with SynapseML in Binder](https://mybinder.org/v2/gh/microsoft/SynapseML/93d7ccf?labpath=notebooks%2Ffeatures) with zero setup, install, infrastructure, or Azure account required.| | June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 also now includes support for the most common deep learning libraries like TensorFlow and PyTorch. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
-| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
## Samples and guidance
This section summarizes new guidance and sample project resources for Azure Syna
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| January 2023 | **Create DNS alias for dedicated SQL pool in Synapse workspace for disaster recovery** | A [custom DNS for dedicated SQL pools (formerly SQL DW)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/create-dns-alias-for-dedicated-sql-pool-in-synapse-workspace-for/ba-p/3675676) can provide redirection for client programs during a disaster. |
+| December 2022 | **Azure Synapse - Data Lake vs. Delta Lake vs. Data Lakehouse** | Read a new Success Engineering blog post demystifying the terms [Data Lake, Delta Lake, and Data Lakehouse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-data-lake-vs-delta-lake-vs-data-lakehouse/ba-p/3673653). |
| November 2022 | **How Data Exfiltration Protection (DEP) impacts Azure Synapse Analytics Pipelines** | [Data Exfiltration Protection (DEP)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-data-exfiltration-protection-dep-impacts-azure-synapse/ba-p/3676146) is a feature that enables additional restrictions on the ability of Azure Synapse Analytics to connect to other services. | | November 2022 | **Getting started with REST APIs for Azure Synapse Analytics - Apache Spark Pool** | We provide [instructions on how to set up and use Synapse REST endpoints and describe the Apache Spark Pool operations supported by REST APIs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/getting-started-with-rest-apis-for-azure-synapse-analytics/ba-p/3668474). | | November 2022 | **Demystifying Azure Synapse Data Explorer** | A two-part explainer [demystifies Data Explorer in Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-explorer/ba-p/3636191) and [data ingestion with Azure Synapse Data Explorer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-ingestion-in-azure-synapse-data-explorer/ba-p/3661133). |
This section summarizes recent new security features and settings in Azure Synap
| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| | March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/azure/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.| | March 2022 | **Azure Synapse Analytics now supports Azure Active Directory (Azure AD) only authentication** | You can now use Azure Active Directory authentication to centrally manage access to all Azure Synapse resources, including SQL pools. You can [disable local authentication](sql/active-directory-authentication.md#disable-local-authentication) upon creation or after a workspace is created through the Azure portal.|
-| December 2021 | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
-| December 2021 | **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now [browse and secure an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder](how-to-access-container-with-access-control-lists.md) in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio.|
-| December 2021 | **TLS 2.1 enforced for new Synapse Workspaces** | Starting in December 2021, [a requirement for TLS 1.2](security/connectivity-settings.md#minimal-tls-version) has been implemented for new Synapse Workspaces only. |
+ ## Azure Synapse Data Explorer (preview)
Azure Data Explorer (ADX) is a fast and highly scalable data exploration service
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
+| December 2022 | **Demystifying data consumption using Azure Synapse Data Explorer** | A guide to the various ways of [retrieving, consuming and visualizing data from Azure Synapse Data Explorer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-consumption-using-azure-synapse-data-explorer/ba-p/3684265). |
| November 2022 | **Table Level Sharing support via Azure Data Share** | We have now [added Table level sharing support](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_10) via the [Azure Data Share interface](https://azure.microsoft.com/products/data-share/#overview) where you can share specific tables in the database. This allows you to easily and securely share your data with people in your company or external partners. | | November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](/azure/stream-analytics/azure-database-explorer-output).| | November 2022 | **Parse-kv operator** | The new [parse-kv operator](/azure/data-explorer/kusto/query/parse-kv-operator) extracts structured information from a string expression and represents the information in a key/value form. You can use a [specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#specified-delimeter), a [non-specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#non-specified-delimiter), or [Regex](/azure/data-explorer/kusto/query/parse-kv-operator#regex) via a [RE2 regular expression](/azure/data-explorer/kusto/query/re2). |
This section summarizes recent improvements and features in SQL pools in Azure S
## Learn more
+For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous updates in Azure Synapse Analytics](whats-new-archive.md).
+ - [Get started with Azure Synapse Analytics](get-started.md) - [Introduction to Azure Synapse Analytics](/training/modules/introduction-azure-synapse-analytics/) - [Realize Integrated Analytical Solutions with Azure Synapse Analytics](/training/paths/realize-integrated-analytical-solutions-with-azure-synapse-analytics/)
This section summarizes recent improvements and features in SQL pools in Azure S
- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers) - [Azure Synapse Analytics terminology](overview-terminology.md) - [Azure Synapse Analytics migration guides](migration-guides/index.yml)-- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
+- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Previously updated : 12/06/2022 Last updated : 01/05/2023 # Configure single sign-on for Azure Virtual Desktop using Azure AD Authentication
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article will walk you through the process of configuring single sign-on (SSO) using Azure Active Directory (Azure AD) authentication for Azure Virtual Desktop (preview). When you enable SSO, you can use passwordless authentication and third-party Identity Providers that federate with Azure AD to sign in to your Azure Virtual Desktop and Remote Applications.
+This article will walk you through the process of configuring single sign-on (SSO) using Azure Active Directory (Azure AD) authentication for Azure Virtual Desktop (preview). When you enable SSO, you can use passwordless authentication and third-party Identity Providers that federate with Azure AD to sign in to your Azure Virtual Desktop and Remote Applications. When enabled, this feature provides a single sign-on experience when authenticating to the session host and configures the session to provide single sign-on to Azure AD-based resources inside the session.
-For additional passwordless functionality within the session, see the [**Next Steps**](#next-steps) section for configuring in-session passwordless authentication below.
+For information on using passwordless authentication within the session, see [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview).
> [!NOTE] > Azure Virtual Desktop (classic) doesn't support this feature.
Single sign-on is available on session hosts using the following operating syste
- Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed. - Windows Server 2022 with the [2022-09 Cumulative Update for Microsoft server operating system preview (KB5017381)](https://support.microsoft.com/kb/KB5017381) or later installed.
-Session hosts must be Azure AD-joined or [Hybrid Azure AD-Joined](https://learn.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-plan).
+Session hosts must be Azure AD-joined or [Hybrid Azure AD-Joined](../active-directory/devices/hybrid-azuread-join-plan.md).
> [!NOTE] > Azure Virtual Desktop doesn't support this solution with VMs joined to Azure AD Domain Services or Active Directory only joined session hosts.
To enable SSO on your host pool, you must [customize an RDP property](customize-
When enabling single sign-on, you'll currently be prompted to authenticate to Azure AD and allow the Remote Desktop connection when launching a connection to a new host. Azure AD remembers up to 15 hosts for 30 days before prompting again. If you see this dialogue, select **Yes** to connect.
+### Disconnection when the session is locked
+
+When SSO is enabled, you sign in to Windows using an Azure AD authentication token, which provides support for passwordless authentication to Windows. The Windows lock screen in the remote session doesn't support Azure AD authentication tokens or passwordless authentication methods like FIDO keys. The lack of support for these authentication methods means that users can't unlock their screens in a remote session. When you try to lock a remote session, either through user action or system policy, the session is instead disconnected and the service sends a message to the user explaining they've been disconnected.
+
+Disconnecting the session also ensures that when the connection is relaunched after a period of inactivity, Azure AD reevaluates the applicable conditional access policies.
+ ## Next steps - Check out [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview) to learn how to enable passwordless authentication.
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
Title: Azure Virtual Desktop user connection quality - Azure
+ Title: Analyze connection quality in Azure Virtual Desktop - Azure
description: Connection quality for Azure Virtual Desktop users. Previously updated : 09/26/2022 Last updated : 01/05/2023 +
-# Connection quality in Azure Virtual Desktop
+# Analyze connection quality in Azure Virtual Desktop
>[!IMPORTANT] >The Connection Graphics Data Logs are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Azure Virtual Desktop helps users host client sessions on their session hosts running on Azure. When a user starts a session, they connect from their end-user device, also known as a "client," over a network to access the session host. It's important that the user experience feels as much like a local session on a physical device as possible. In this article, we'll talk about how you can measure and improve the connection quality of your end-users.
+Azure Virtual Desktop helps users host client sessions on their session hosts running on Azure. When a user starts a session, they connect from their local device over a network to access the session host. It's important that the user experience feels as much like a local session on a physical device as possible. In this article, we'll talk about how you can measure your connection network and connection graphics to improve the connection quality of your end-users.
-There are currently two ways you can analyze connection quality in your Azure Virtual Desktop deployment: Azure Log Analytics and Azure Front Door. This article will describe how to use each method to optimize graphics quality and improve end-user experience.
+You can analyze connection quality in your Azure Virtual Desktop deployment by using Azure Log Analytics. This article will tell you how you can use Azure Log Analytics to optimize graphics quality and improve end-user experience.
-## Monitor connection quality with Azure Log Analytics
+Azure Virtual Desktop uses [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) to redirect the user connection to the nearest Azure Virtual Desktop gateway based on the source IP address. Azure Virtual Desktop will always use the Azure Virtual Desktop gateway that the client chooses.
-If you're already using [Azure Log Analytics](diagnostics-log-analytics.md), you can monitor network and graphics data for Azure Virtual Desktop connections. The connection network and graphics data Log Analytics collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters:
+The connection network and graphics data that [Azure Log Analytics](diagnostics-log-analytics.md) collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. You can also use [RemoteFX network performance counters](remotefx-graphics-performance-counters.md) to get some graphics-related performance data from your deployment, but they're not quite as comprehensive as Azure Log Analytics. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters:
- Each record is connection-specific and includes the correlation ID of the connection that can be tied back to the user. - The round trip time measured in this table is protocol-agnostic and will record the measured latency for Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) connections.
-To start collecting this data, you'll need to make sure you have diagnostics and the **Network Data Logs** and **Connection Graphics Data Logs Preview** tables enabled in your Azure Virtual Desktop host pools.
-
->[!NOTE]
->Normal storage charges for Log Analytics will apply. Learn more at [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
-
-To check and modify your diagnostics settings in the Azure portal:
-
-1. Sign in to the Azure portal, then go to **Azure Virtual Desktop** and select **Host pools**.
-
-2. Select the host pool you want to collect network data for.
-
-3. Select **Diagnostic settings**, then create a new setting if you haven't configured your diagnostic settings yet. If you've already configured your diagnostic settings, select **Edit setting**.
-
-4. Select **allLogs** or select the names of the diagnostics tables you want to collect data for, including **Network Data Logs** and **Connection Graphics Data Logs Preview**. The *allLogs* parameter will automatically add new tables to your data table in the future.
-
-5. Select where you want to send the collected data. Azure Virtual Desktop Insights users should select a Log Analytics workspace.
-
-6. Select **Save** to apply your changes.
+## Connection network data
-7. Repeat this process for all other host pools you want to measure.
-
-8. Make sure the network data is going to your selected destination by returning to the host pool's resource page, selecting **Logs**, then running one of the queries in [Sample queries for Azure Log Analytics](#sample-queries-for-azure-log-analytics-network-data). In order for your query to get results, your host pool must have active users who have been connecting to sessions. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
-
- To check network data, return to the host pool's resource page, select **Logs**, then run one of the queries in [Sample queries for Azure Log Analytics](connection-latency.md#sample-queries-for-azure-log-analytics-network-data). In order for your query to get results, your host pool must have active users who've connected to sessions before. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
-
-### Connection network data
-
-The network data you collect for your data tables includes the following information:
+The network data you collect in the NetworkData table includes the following information:
- The **estimated available bandwidth (kilobytes per second)** is the average estimated available network bandwidth during each connection time interval.
The network data you collect for your data tables includes the following informa
- The **source system**, **Subscription ID**, **Tenant ID**, and **type** (table name).
-#### Frequency
+### Frequency
The service generates these network data points every two minutes during an active session.
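If you want to confirm that cadence in your own workspace, here's a minimal sketch that uses only columns already shown in this article to measure the gap between consecutive network data records for each connection:

```kusto
// Measure the interval, in minutes, between consecutive network data points per connection
WVDConnectionNetworkData
| sort by CorrelationId asc, TimeGenerated asc
| extend PrevTime = prev(TimeGenerated), PrevConnection = prev(CorrelationId)
| where CorrelationId == PrevConnection
| extend IntervalMinutes = datetime_diff("minute", TimeGenerated, PrevTime)
| summarize AvgIntervalMinutes = avg(IntervalMinutes) by CorrelationId
```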
-### Connection graphics data (preview)
+## The ConnectionGraphicsData table (preview)
-You should consult the Graphics data table (preview) when users report slow or choppy experiences in their Azure Virtual Desktop sessions. The Graphics data table will give you useful information whenever graphical indicators, end-to-end delay, and dropped frames percentage fall below the "healthy" threshold for Azure Virtual Desktop. This table will help your admins track and understand factors across the server, client, and network that could be contributing to the user's slow or choppy experience. However, while the Graphics data table is a useful tool for troubleshooting poor user experience, since it's not regularly populated throughout a session, it isn't a reliable environment baseline.
+You should consult the ConnectionGraphicsData table (preview) when users report slow or choppy experiences in their Azure Virtual Desktop sessions. The ConnectionGraphicsData table will give you useful information whenever graphical indicators, end-to-end delay, and dropped frames percentage fall outside the "healthy" range for Azure Virtual Desktop. This table will help your admins track and understand factors across the server, client, and network that could be contributing to the user's slow or choppy experience. However, because the ConnectionGraphicsData table isn't regularly populated throughout a session, it's a useful tool for troubleshooting poor user experience but isn't a reliable baseline for your environment.
The Graphics table only captures performance data from the Azure Virtual Desktop graphics stream. This table doesn't capture performance degradation or "slowness" caused by application-specific factors or the virtual machine (CPU or storage constraints). You should use this table with other VM performance metrics to determine if the delay is caused by the remote desktop service (graphics and network) or something inherent in the VM or app itself.
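As a starting point for that kind of investigation, the following sketch pulls graphics records for a single user's connections. It assumes the table is exposed in Log Analytics as `WVDConnectionGraphicsDataPreview` and that it includes a `CorrelationId` column; confirm both against the schema in your own workspace before relying on it:

```kusto
// Pull graphics data records (preview) for a specific user's connections.
// WVDConnectionGraphicsDataPreview and its CorrelationId column are assumptions -
// confirm the table name and schema in your Log Analytics workspace.
let user = "alias@domain";
WVDConnectionGraphicsDataPreview
| join kind=inner (
    WVDConnections
    | distinct CorrelationId, UserName
) on CorrelationId
| where UserName == user
| sort by TimeGenerated desc
```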
The graphics data you collect for your data tables includes the following inform
- The **source system**, **Subscription ID**, **Tenant ID**, and **type** (table name).
-#### Frequency
+### Frequency
In contrast to other diagnostics tables that report data at regular intervals throughout a session, the frequency of data collection for the graphics data varies depending on the graphical health of a connection. The table won't record data for "Good" scenarios, but will record data if any of the following metrics are rated "Poor" or "Okay," and the resulting data will be sent to your storage account. Data is recorded at most once every two minutes. The metrics involved in data collection are listed in the following table:
In contrast to other diagnostics tables that report data at regular intervals th
>[!NOTE]
>For end-to-end delay per frame, if any frame in a single second is delayed by over 300 ms, the service registers it as "Bad". If all frames in a single second take between 150 ms and 300 ms, the service marks it as "Okay."
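To make those thresholds concrete, here's a small illustrative snippet that applies the same 150 ms and 300 ms boundaries to a single delay value; `EndToEndDelayMs` is a placeholder name for demonstration, not a documented schema field:

```kusto
// Illustrative only: classify a per-frame end-to-end delay using the thresholds above.
// EndToEndDelayMs is a placeholder column name, not a documented schema field.
print EndToEndDelayMs = 180
| extend DelayRating = case(
    EndToEndDelayMs > 300, "Bad",
    EndToEndDelayMs >= 150, "Okay",
    "Good")
```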
-## Sample queries for Azure Log Analytics: network data
-
-In this section, we have a list of queries that will help you review connection quality information. You can run queries in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query).
-
->[!NOTE]
->For each example, replace the *userupn* variable with the UPN of the user you want to look up.
-
-### Query average RTT and bandwidth
-
-To look up the average round trip time and bandwidth:
-
-```kusto
-// 90th, 50th, 10th Percentile for RTT in 10 min increments
-WVDConnectionNetworkData
-| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),RTTP50=percentile(EstRoundTripTimeInMs,50),RTTP10=percentile(EstRoundTripTimeInMs,10) by bin(TimeGenerated,10m)
-| render timechart
-// 90th, 50th, 10th Percentile for BW in 10 min increments
-WVDConnectionNetworkData
-| summarize BWP90=percentile(EstAvailableBandwidthKBps,90),BWP50=percentile(EstAvailableBandwidthKBps,50),BWP10=percentile(EstAvailableBandwidthKBps,10) by bin(TimeGenerated,10m)
-| render timechart
-```
-To look up the round-trip time and bandwidth per connection:
-
-```kusto
-// RTT and BW Per Connection Summary
-// Returns P90 Round Trip Time (ms) and Bandwidth (KBps) per connection with connection details.
-WVDConnectionNetworkData
-| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),BWP90=percentile(EstAvailableBandwidthKBps,90),StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by CorrelationId
-| join kind=leftouter (
-WVDConnections
-| extend Protocol = iff(UdpUse in ("0","<>"),"TCP","UDP")
-| distinct CorrelationId, SessionHostName, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion, UserName
-) on CorrelationId
-| project CorrelationId, StartTime, EndTime, UserName, SessionHostName, RTTP90, BWP90, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion
-```
-
-### Query data for a specific user
-
-To look up the bandwidth for a specific user:
-
-```kusto
-let user = "alias@domain";
-WVDConnectionNetworkData
-| join kind=leftouter (
- WVDConnections
- | distinct CorrelationId, UserName
-) on CorrelationId
-| where UserName == user
-| project EstAvailableBandwidthKBps, TimeGenerated
-| render columnchart
-```
-
-To look up the round trip time for a specific user:
-
-```kusto
-let user = "alias@domain";
-WVDConnectionNetworkData
-| join kind=leftouter (
-WVDConnections
-| distinct CorrelationId, UserName
-) on CorrelationId
-| where UserName == user
-| project EstRoundTripTimeInMs, TimeGenerated
-| render columnchart
-```
-
-To look up the top 10 users with the highest round trip time:
-
-```kusto
-WVDConnectionNetworkData
-| join kind=leftouter (
- WVDConnections
- | distinct CorrelationId, UserName
-) on CorrelationId
-| summarize AvgRTT=avg(EstRoundTripTimeInMs),RTT_P95=percentile(EstRoundTripTimeInMs,95) by UserName
-| top 10 by AvgRTT desc
-```
-
-To look up the 10 users with the lowest bandwidth:
-
-```kusto
-WVDConnectionNetworkData
-| join kind=leftouter (
- WVDConnections
- | distinct CorrelationId, UserName
-) on CorrelationId
-| summarize AvgBW=avg(EstAvailableBandwidthKBps),BW_P95=percentile(EstAvailableBandwidthKBps,95) by UserName
-| top 10 by AvgBW asc
-```
-
-## Azure Front Door
-
-Azure Virtual Desktop uses [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) to redirect the user connection to the nearest Azure Virtual Desktop gateway based on the source IP address. Azure Virtual Desktop will always use the Azure Virtual Desktop gateway that the client chooses.
- ## Next steps
+- Learn more about how to monitor and run queries about connection quality issues at [Monitor connection quality](connection-quality-monitoring.md).
- Troubleshoot connection and latency issues at [Troubleshoot connection quality for Azure Virtual Desktop](troubleshoot-connection-quality.md). - To check the best location for optimal latency, see the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/).-- For pricing plans, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- For pricing plans, see [Azure Log Analytics pricing](/services-hub/health/azure_pricing).
- To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md). - To learn about bandwidth requirements for Azure Virtual Desktop, see [Understanding Remote Desktop Protocol (RDP) Bandwidth Requirements for Azure Virtual Desktop](rdp-bandwidth.md). - To learn about Azure Virtual Desktop network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
virtual-desktop Connection Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-quality-monitoring.md
+
+ Title: Collect and query Azure Virtual Desktop connection quality data (preview) - Azure
+description: How to set up and query the connection quality data table for Azure Virtual Desktop to diagnose connection issues.
++ Last updated : 01/05/2023++++
+# Collect and query connection quality data
+
+>[!IMPORTANT]
+>The Connection Graphics Data Logs are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+[Connection quality](connection-latency.md) is essential for good user experiences, so it's important to be able to monitor connections for potential issues and troubleshoot problems as they arise. Azure Virtual Desktop offers tools like [Log Analytics](diagnostics-log-analytics.md) that can help you monitor your deployment's connection health. This article will show you how to configure your diagnostic settings to let you collect connection quality data and query data for specific parameters.
+
+## Prerequisites
+
+To start collecting connection quality data, you'll need to [set up a Log Analytics workspace](diagnostics-log-analytics.md).
+
+>[!NOTE]
+>Normal storage charges for Log Analytics will apply. Learn more at [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
+
+## Configure diagnostics settings
+
+To check and modify your diagnostics settings in the Azure portal:
+
+1. Sign in to the Azure portal, then go to **Azure Virtual Desktop** and select **Host pools**.
+
+2. Select the host pool you want to collect network data for.
+
+3. Select **Diagnostic settings**, then create a new setting if you haven't configured your diagnostic settings yet. If you've already configured your diagnostic settings, select **Edit setting**.
+
+4. Select **allLogs** if you want to collect data for all tables. The *allLogs* parameter will automatically add new tables to your data table in the future.
+
+ If you'd prefer to view more specific tables, first select **Network Data Logs** and **Connection Graphics Data Logs Preview**, then select the names of the other tables you want to see.
+
+5. Select where you want to send the collected data. Azure Virtual Desktop Insights users should select a Log Analytics workspace.
+
+6. Select **Save** to apply your changes.
+
+7. Repeat this process for all other host pools you want to measure.
+
+8. To check network data, return to the host pool's resource page, select **Logs**, then run one of the queries in [Sample queries for Azure Log Analytics](#sample-queries-for-azure-log-analytics-network-data). In order for your query to get results, your host pool must have active users who've connected to sessions before. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
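For a quick sanity check before moving on to the more detailed examples below, a minimal query like the following confirms that recent network data records are arriving:

```kusto
// Confirm that network data records have arrived in the last hour
WVDConnectionNetworkData
| where TimeGenerated > ago(1h)
| take 10
```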
+
+## Sample queries for Azure Log Analytics: network data
+
+In this section, we have a list of queries that will help you review connection quality information. You can run queries in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query).
+
+>[!NOTE]
+>For each example, replace the *userupn* variable with the UPN of the user you want to look up.
+
+### Query average RTT and bandwidth
+
+To look up the average round trip time and bandwidth:
+
+```kusto
+// 90th, 50th, 10th Percentile for RTT in 10 min increments
+WVDConnectionNetworkData
+| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),RTTP50=percentile(EstRoundTripTimeInMs,50),RTTP10=percentile(EstRoundTripTimeInMs,10) by bin(TimeGenerated,10m)
+| render timechart
+// 90th, 50th, 10th Percentile for BW in 10 min increments
+WVDConnectionNetworkData
+| summarize BWP90=percentile(EstAvailableBandwidthKBps,90),BWP50=percentile(EstAvailableBandwidthKBps,50),BWP10=percentile(EstAvailableBandwidthKBps,10) by bin(TimeGenerated,10m)
+| render timechart
+```
+To look up the round-trip time and bandwidth per connection:
+
+```kusto
+// RTT and BW Per Connection Summary
+// Returns P90 Round Trip Time (ms) and Bandwidth (KBps) per connection with connection details.
+WVDConnectionNetworkData
+| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),BWP90=percentile(EstAvailableBandwidthKBps,90),StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by CorrelationId
+| join kind=leftouter (
+WVDConnections
+| extend Protocol = iff(UdpUse in ("0","<>"),"TCP","UDP")
+| distinct CorrelationId, SessionHostName, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion, UserName
+) on CorrelationId
+| project CorrelationId, StartTime, EndTime, UserName, SessionHostName, RTTP90, BWP90, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion
+```
+
+### Query data for a specific user
+
+To look up the bandwidth for a specific user:
+
+```kusto
+let user = "alias@domain";
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| where UserName == user
+| project EstAvailableBandwidthKBps, TimeGenerated
+| render columnchart
+```
+
+To look up the round trip time for a specific user:
+
+```kusto
+let user = "alias@domain";
+WVDConnectionNetworkData
+| join kind=leftouter (
+WVDConnections
+| distinct CorrelationId, UserName
+) on CorrelationId
+| where UserName == user
+| project EstRoundTripTimeInMs, TimeGenerated
+| render columnchart
+```
+
+To look up the top 10 users with the highest round trip time:
+
+```kusto
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| summarize AvgRTT=avg(EstRoundTripTimeInMs),RTT_P95=percentile(EstRoundTripTimeInMs,95) by UserName
+| top 10 by AvgRTT desc
+```
+
+To look up the 10 users with the lowest bandwidth:
+
+```kusto
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| summarize AvgBW=avg(EstAvailableBandwidthKBps),BW_P95=percentile(EstAvailableBandwidthKBps,95) by UserName
+| top 10 by AvgBW asc
+```
+
+## Next steps
+
+Learn more about connection quality at [Connection quality in Azure Virtual Desktop](connection-latency.md).
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 11/07/2022 Last updated : 12/16/2022 # Create a profile container with Azure Files and Azure Active Directory
-In this article, you'll learn how to create an Azure Files share to store FSLogix profiles that can be accessed by hybrid user identities authenticated with Azure Active Directory (Azure AD). Azure AD users can now access an Azure file share using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. Your end-users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from Hybrid Azure AD-joined and Azure AD-joined VMs.
+In this article, you'll learn how to create and configure an Azure Files share for Azure Active Directory (Azure AD) Kerberos authentication. This configuration allows you to store FSLogix profiles that can be accessed by hybrid user identities from Azure AD-joined or Hybrid Azure AD-joined session hosts without requiring network line-of-sight to domain controllers. Azure AD Kerberos enables Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol.
This feature is currently supported in the Azure Public cloud.
+## Prerequisites
+
+Before deploying this solution, verify that your environment [meets the requirements](../storage/files/storage-files-identity-auth-azure-active-directory-enable.md#prerequisites) to configure Azure Files with Azure AD Kerberos authentication.
+
+When used for FSLogix profiles in Azure Virtual Desktop, the session hosts don't need to have network line-of-sight to the domain controller (DC). However, a system with network line-of-sight to the DC is required to configure the permissions on the Azure Files share.
+ ## Configure your Azure storage account and file share To store your FSLogix profiles on an Azure file share:
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
The following configurations are currently supported with Azure AD-joined VMs:
- Personal desktops with local user profiles. - Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users shouldn't save data on the VM. - Pooled desktops or apps where users don't need to save data on the VM. For example, for applications that save data online or connect to a remote database.-- Personal or pooled desktops with FSLogix user profiles with synced users from Active Directory.
+- Personal or pooled desktops with FSLogix user profiles.
User accounts can be cloud-only or synced users from the same Azure AD tenant.
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
A host pool is a collection of Azure virtual machines that register to Azure Vir
A host pool can be one of two types: - Personal, where each session host is assigned to an individual user. Personal host pools provide dedicated desktops to end-users that optimize environments for performance and data separation. -- Pooled, where user sessions can be load balanced to any session host in the host pool. There can be multiple different users on a single session host at the same time. Pooled host pools provide a shared remote experience to end-users, which ensures lower costs costs and greater efficiency.
+- Pooled, where user sessions can be load balanced to any session host in the host pool. There can be multiple different users on a single session host at the same time. Pooled host pools provide a shared remote experience to end-users, which ensures lower costs and greater efficiency.
The following table goes into more detail about the features each type of host pool has:
virtual-desktop Insights Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-costs.md
Title: Estimate Azure Virtual Desktop monitoring costs - Azure
+ Title: Estimate Azure Virtual Desktop Insights monitoring costs - Azure
description: How to estimate costs and pricing for using Azure Virtual Desktop Insights.
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
Title: Monitor Azure Virtual Desktop glossary - Azure
+ Title: Azure Virtual Desktop Insights glossary - Azure
description: A glossary of terms and concepts related to Azure Virtual Desktop Insights.
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
Title: Use Monitor Azure Virtual Desktop Monitor - Azure
+ Title: How to monitor with Azure Virtual Desktop Insights - Azure
description: How to use Azure Virtual Desktop Insights.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You have a choice of operating systems that you can use for session hosts to pro
|Operating system |User access rights| |||
-|<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 7 Enterprise](/lifecycle/products/windows-7) (with Extended Security Updates)</li></ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) instead of license entitlement.</li></ul>|
+|<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) instead of license entitlement.</li></ul>|
|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li><li>[Windows Server 2012 R2](/lifecycle/products/windows-server-2012-r2)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.| > [!IMPORTANT]
virtual-desktop Client Features Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-microsoft-store.md
Title: Use features of the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop
-description: Learn how to use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop.
+ Title: Use features of the Remote Desktop Microsoft Store client - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop.
Last updated 10/04/2022
-# Use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop
+# Use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop
-Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for Windows (Microsoft Store). If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)](connect-microsoft-store.md).
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop Microsoft Store client. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client](connect-microsoft-store.md).
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
virtual-desktop Connect Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-android-chrome-os.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS- Azure Virtual Desktop
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop
description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Android and Chrome OS.
A workspace combines all the desktops and applications that have been made avail
|--|--| | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` | | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
- | Azure China 21Vianet 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
1. Tap **Next**.
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-microsoft-store.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop
-description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Windows (Microsoft Store).
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop Microsoft Store client.
Previously updated : 10/04/2022 Last updated : 01/04/2023
-# Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)
+# Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client
-The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for Windows from the Microsoft Store.
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client.
+
+> [!IMPORTANT]
+> We're no longer updating the Microsoft Store client with new features.
+>
+> For the best Azure Virtual Desktop experience that includes the latest features and fixes, we recommend you download the [Remote Desktop client for Windows](connect-windows.md) instead.
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows (Microsoft Store)](/windows-server/remote/remote-desktop-services/clients/windows).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows).
## Prerequisites
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 12/08/2022 Last updated : 01/09/2023
New versions of the Azure Virtual Desktop Agent are installed automatically. Whe
| Release | Latest version | |||
-| Generally available | 1.0.5555.1008 |
-| In-flight | 1.0.5555.1010 |
+| Generally available | 1.0.5739.9000/1.0.5739.9800 |
+| In-flight | N/A |
-## Version 1.0.5555.1010 (in-flight)
+## Version 1.0.5739.9000/1.0.5739.9800
+
+>[!NOTE]
+>You may see version 1.0.5739.9000 or 1.0.5739.9800 installed on session hosts depending on whether the host pool is configured to be a [validation environment](create-validation-host-pool.md). Version 1.0.5739.9000 was released to validation environments and version 1.0.5739.9800 was released to all other environments.
+>
+>Normally, all environments receive the same version. However, for this release, we had to adjust certain parameters unrelated to the Agent to allow this version to roll out to non-validation environments, which is why the non-validation version number is higher than the validation version number. Besides those changes, both versions are the same.
+
+This update was released in January 2023 and includes the following changes:
+
+- Added the RDGateway URL to URL Access Check.
+- Introduced RD Agent provisioning state for new installations.
+- Fixed error reporting in MSIX App Attach for apps with expired signatures.
+
+## Version 1.0.5555.1010
This update was released in December 2022. There are no changes to the agent in this version.
virtual-desktop Whats New Client Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md
+
+ Title: What's new in the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop client for Android and Chrome OS
+++ Last updated : 01/04/2023++
+# What's new in the Remote Desktop client for Android and Chrome OS
+
+In this article you'll learn about the latest updates for the Remote Desktop client for Android and Chrome OS. To learn more about using the Remote Desktop client for Android and Chrome OS with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS](users/connect-android-chrome-os.md) and [Use features of the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop](users/client-features-android-chrome-os.md).
+
virtual-desktop Whats New Client Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-ios-ipados.md
+
+ Title: What's new in the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop client for iOS and iPadOS
+++ Last updated : 01/04/2023++
+# What's new in the Remote Desktop client for iOS and iPadOS
+
+In this article you'll learn about the latest updates for the Remote Desktop client for iOS and iPadOS. To learn more about using the Remote Desktop client for iOS and iPadOS with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS](users/connect-ios-ipados.md) and [Use features of the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop](users/client-features-ios-ipados.md).
+
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
+
+ Title: What's new in the Remote Desktop client for macOS - Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop client for macOS
+++ Last updated : 01/04/2023++
+# What's new in the Remote Desktop client for macOS
+
+In this article you'll learn about the latest updates for the Remote Desktop client for macOS. To learn more about using the Remote Desktop client for macOS with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for macOS](users/connect-macos.md) and [Use features of the Remote Desktop client for macOS when connecting to Azure Virtual Desktop](users/client-features-macos.md).
+
virtual-desktop Whats New Client Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-microsoft-store.md
+
+ Title: What's new in the Remote Desktop Microsoft Store client - Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop Microsoft Store client
+++ Last updated : 01/04/2023++
+# What's new in the Remote Desktop Microsoft Store client
+
+In this article you'll learn about the latest updates for the Remote Desktop Microsoft Store client. To learn more about using the Remote Desktop Microsoft Store client with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client](users/connect-microsoft-store.md) and [Use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop](users/client-features-microsoft-store.md).
+
+> [!IMPORTANT]
+> We're no longer updating the Microsoft Store client with new features.
+>
+> For the best Azure Virtual Desktop experience that includes the latest features and fixes, we recommend you download the [Remote Desktop client for Windows](users/connect-windows.md) instead.
+
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Title: What's new in the Remote Desktop client for Windows
+ Title: What's new in the Remote Desktop client for Windows - Azure Virtual Desktop
description: Learn about recent changes to the Remote Desktop client for Windows - Last updated 12/14/2022 + # What's new in the Remote Desktop client for Windows
-You can find more detailed information about the Windows Desktop client at [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](users/connect-windows.md) and [Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](users/client-features-windows.md). You'll find the latest updates for the available clients in this article.
+In this article you'll learn about the latest updates for the Remote Desktop client for Windows. To learn more about using the Remote Desktop client for Windows with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](users/connect-windows.md) and [Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](users/client-features-windows.md).
## Supported client versions
-The client can be configured to enable Windows Insider releases. The following table lists the current versions available for each release:
+The following table lists the current versions available for the public and Insider releases. To enable Insider releases, see [Enable Windows Insider releases](users/client-features-windows.md#enable-windows-insider-releases).
| Release | Latest version | Minimum supported version | ||-||
The client can be configured to enable Windows Insider releases. The following t
## Updates for version 1.2.3770
-*Date published: 12/14/2022*
+*Date published: December 14, 2022*
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+In this release, we've made the following changes:
+ - Fixed an issue where the app sometimes entered an infinite loop while disconnecting. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Updates to Teams for Azure Virtual Desktop, including the following:
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Wi
## Updates for version 1.2.3667
-*Date published: 11/30/2022*
+*Date published: November 30, 2022*
Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5axvS), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5axvR), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5aCCE)
+In this release, we've made the following changes:
+ - Added User Datagram Protocol (UDP) support to the client's ARM64 platform. - Fixed an issue where the tooltip didn't disappear when the user moved the mouse cursor away from the tooltip area. - Fixed an issue where the application crashes when calling reset manually from the command line.
Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/bi
## Updates for version 1.2.3577
-*Date published: 10/10/2022*
+*Date published: October 10, 2022*
+
+In this release, we've made the following change:
-Fixed a bug related to tracing that was blocking reconnections.
+- Fixed a bug related to tracing that was blocking reconnections.
## Updates for version 1.2.3576
-*Date published: 10/6/2022*
+*Date published: October 6, 2022*
-Fixed a bug that affected users of some third-party plugins.
+In this release, we've made the following change:
+
+- Fixed a bug that affected users of some third-party plugins.
## Updates for version 1.2.3575
-*Date published: 10/4/2022*
+*Date published: October 4, 2022*
+
+In this release, we've made the following change:
-Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios.
+- Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios.
## Updates for version 1.2.3574
-*Date published: 10/4/2022*
+*Date published: October 4, 2022*
+
+In this release, we've made the following changes:
- Added banner warning users running client on Windows 7 that support for Windows 7 will end starting January 10, 2023. - Added page to installer warning users running client on Windows 7 that support for Windows 7 will end starting January 10, 2023.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3497
-*Date published: 9/20/2022*
+*Date published: September 20, 2022*
+
+In this release, we've made the following changes:
- Accessibility improvements through increased color contrast in the virtual desktop connection blue bar. - Updated connection information dialog to distinguish between Websocket (renamed from TCP), RDP Shortpath for managed networks, and RDP Shortpath for public networks.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3496
-*Date published: 9/08/2022*
+*Date published: September 8, 2022*
+
+In this release, we've made the following change:
- Reverted to version 1.2.3401 build to avoid a connectivity issue with older RDP stacks. ## Updates for version 1.2.3401
-*Date published: 8/02/2022*
+*Date published: August 2, 2022*
+
+In this release, we've made the following changes:
- Fixed an issue where the narrator was announcing the Tenant Expander button as "on" or "off" instead of "expanded" or "collapsed." - Fixed an issue where the text size didn't change when the user adjusted the text size system setting.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3317
-*Date published: 7/12/2022*
+*Date published: July 12, 2022*
+
+In this release, we've made the following change:
- Fixed the vulnerability known as [CVE-2022-30221](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-30221). ## Updates for version 1.2.3316
-*Date published: 7/06/2022*
+*Date published: July 6, 2022*
+
+In this release, we've made the following changes:
- Fixed an issue where the service couldn't render RemoteApp windows while RemoteFX Advanced Graphics were disabled. - Fixed an issue that happened when a user tried to connect to an Azure Virtual Desktop endpoint while using the Remote Desktop Services Transport Layer Security protocol (RDSTLS) with CredSSP disabled, which caused the Windows Desktop client to not prompt the user for credentials. Because the client couldn't authenticate, it would get stuck in an infinite loop of failed connection attempts.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3213
-*Date published: 6/02/2022*
+*Date published: June 2, 2022*
+
+In this release, we've made the following changes:
- Reduced flicker when application is restored to full-screen mode from minimized state in single-monitor configuration. - The client now shows an error message when the user tries to open a connection from the UI, but the connection doesn't launch.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3130
-*Date published: 05/10/2022*
+*Date published: May 10, 2022*
+
+In this release, we've made the following changes:
- Fixed the vulnerability known as [CVE-2022-22017](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-22017). - Fixed the vulnerability known as [CVE-2022-26940](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-26940).
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3128
-*Date published: 5/03/2022*
+*Date published: May 3, 2022*
+
+In this release, we've made the following changes:
- Improved Narrator application experience. - Accessibility improvements.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.3004
-*Date published: 3/29/2022*
+*Date published: March 29, 2022*
+
+In this release, we've made the following changes:
- Fixed an issue where Narrator didn't announce grid or list views correctly. - Fixed an issue where the msrdc.exe process might take a long time to exit after closing the last Azure Virtual Desktop connection if customers have set a very short token expiration policy.
Fixed an issue that caused unexpected disconnects in certain RemoteApp scenarios
## Updates for version 1.2.2927
-*Date published: 3/15/2022*
+*Date published: March 15, 2022*
+
+In this release, we've made the following change:
-Fixed an issue where the number pad didn't work on initial focus.
+- Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2925
-*Date published: 03/08/2022*
+*Date published: March 8, 2022*
+
+In this release, we've made the following changes:
- Fixed the vulnerability known as [CVE-2022-21990](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21990). - Fixed the vulnerability known as [CVE-2022-24503](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-24503).
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2924
-*Date published: 02/23/2022*
+*Date published: February 23, 2022*
+
+In this release, we've made the following changes:
- The Desktop client now supports Ctrl+Alt+arrow key keyboard shortcuts during desktop sessions. - Improved graphics performance with certain mouse types.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2860
-*Date published: 02/15/2022*
+*Date published: February 15, 2022*
+
+In this release, we've made the following changes:
- Improved stability of Azure Active Directory authentication. - Fixed an issue that was preventing users from opening multiple .RDP files from different host pools. ## Updates for version 1.2.2851
-*Date published: 01/25/2022*
+*Date published: January 25, 2022*
+
+In this release, we've made the following changes:
- Fixed an issue that caused a redirected camera to give incorrect error codes when camera access was restricted in the Privacy settings on the client device. This update should give accurate error messages in apps using the redirected camera. - Fixed an issue where the Azure Active Directory credential prompt appeared in the wrong monitor.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2691
-*Date published: 01/12/2022*
+*Date published: January 12, 2022*
+
+In this release, we've made the following changes:
- Fixed the vulnerability known as [CVE-2019-0887](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2019-0887). - Fixed the vulnerability known as [CVE-2022-21850](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21850).
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2688
-*Date published: 12/09/2021*
+*Date published: December 9, 2021*
+
+In this release, we've made the following change:
-- Fixed an issue where some users were unable to subscribe using the "subscribe with URL" option after updating to version 1.2.2687.0.
+- Fixed an issue where some users were unable to subscribe using the **subscribe with URL** option after updating to version 1.2.2687.0.
## Updates for version 1.2.2687
-*Date published: 12/02/2021*
+*Date published: December 2, 2021*
+
+In this release, we've made the following changes:
- Improved manual refresh functionality to acquire new user tokens, which ensures the service can accurately update user access to resources. - Fixed an issue where the service sometimes pasted empty frames when a user tried to copy an image from a remotely running Internet Explorer browser to a locally running Word document.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2606
-*Date published: 11/9/2021*
+*Date published: November 9, 2021*
+
+In this release, we've made the following changes:
- Fixed the vulnerability known as [CVE-2021-38665](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38665). - Fixed the vulnerability known as [CVE-2021-38666](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-38666).
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2600
-*Date published: 10/26/2021*
+*Date published: October 26, 2021*
+
+In this release, we've made the following changes:
- Updates to Teams for Azure Virtual Desktop, including improvements to camera performance during video calls. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. ## Updates for version 1.2.2459
-*Date published: 09/28/2021*
+*Date published: September 28, 2021*
+
+In this release, we've made the following changes:
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Fixed an issue that caused the client to prompt for credentials a second time after closing a credential prompt window while subscribing.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2322
-*Date published: 08/24/2021*
+*Date published: August 24, 2021*
+
+In this release, we've made the following changes:
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Added updates to Teams on Azure Virtual Desktop, including:
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2223
-*Date published: 08/10/2021*
+*Date published: August 10, 2021*
+
+In this release, we've made the following change:
- Fixed the security vulnerability known as [CVE-2021-34535](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-34535). ## Updates for version 1.2.2222
-*Date published: 07/27/2021*
+*Date published: July 27, 2021*
+
+In this release, we've made the following changes:
- The client also updates in the background when the auto-update feature is enabled, no remote connection is active, and MSRDCW.exe isn't running. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2130
-*Date published: 06/22/2021*
+*Date published: June 22, 2021*
+
+In this release, we've made the following changes:
- Windows Virtual Desktop has been renamed to Azure Virtual Desktop. Learn more about the name change at [our announcement on our blog](https://azure.microsoft.com/blog/azure-virtual-desktop-the-desktop-and-app-virtualization-platform-for-the-hybrid-workplace/). - Fixed an issue where the client would ask for authentication after the user ended their session and closed the window.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.2061
-*Date published: 05/25/2021*
+*Date published: May 25, 2021*
+
+In this release, we've made the following changes:
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Updates to Teams on Azure Virtual Desktop, including the following:
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1954
-*Date published: 05/13/2021*
+*Date published: May 13, 2021*
+
+In this release, we've made the following change:
- Fixed the vulnerability known as [CVE-2021-31186](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-31186). ## Updates for version 1.2.1953
-*Date published: 05/06/2021*
+*Date published: May 6, 2021*
+
+In this release, we've made the following changes:
- Fixed an issue that caused the client to crash when users selected "Disconnect all sessions" in the system tray. - Fixed an issue where the client wouldn't switch to full screen on a single monitor with a docking station.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1844
-*Date published: 03/23/2021*
+*Date published: March 23, 2021*
+
+In this release, we've made the following changes:
- Updated background installation functionality to perform silently for the client auto-update feature. - Fixed an issue where the client forwarded multiple attempts to launch a desktop to the same session. Depending on your group policy configuration, the session host can now allow the creation of multiple sessions for the same user on the same session host or disconnect the previous connection by default. This behavior wasn't consistent before version 1.2.1755.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1755
-*Date published: 02/23/2021*
+*Date published: February 23, 2021*
+
+In this release, we've made the following changes:
- Added the Experience Monitor access point to the system tray icon. - Fixed an issue where entering an email address into the "Subscribe to a Workplace" tab caused the application to stop responding.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1672
-*Date published: 01/26/2021*
+*Date published: January 26, 2021*
+
+In this release, we've made the following changes:
- Added support for the screen capture protection feature for Windows 10 endpoints. To learn more, see [Session host security best practices](./security-guide.md#session-host-security-best-practices). - Added support for proxies that require authentication for feed subscription.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1525
-*Date published: 12/01/2020*
+*Date published: December 1, 2020*
+
+In this release, we've made the following changes:
- Added List view for remote resources so that longer app names are readable. - Added a notification icon that appears when an update for the client is available. ## Updates for version 1.2.1446
-*Date published: 10/27/2020*
+*Date published: October 27, 2020*
+
+In this release, we've made the following changes:
- Added the auto-update feature, which allows the client to install the latest updates automatically. - The client now distinguishes between different feeds in the Connection Center.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1364
-*Date published: 09/22/2020*
+*Date published: September 22, 2020*
+
+In this release, we've made the following changes:
- Fixed an issue where single sign-on (SSO) didn't work on Windows 7. - Fixed the connection failure that happened when calling or joining a Teams call while another app has an audio stream opened in exclusive mode and when media optimization for Teams is enabled.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1275
-*Date published: 08/25/2020*
+*Date published: August 25, 2020*
+
+In this release, we've made the following changes:
- Added functionality to auto-detect sovereign clouds from the user's identity. - Added functionality to enable custom URL subscriptions for all users.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1186
-*Date published: 07/28/2020*
+*Date published: July 28, 2020*
+
+In this release, we've made the following changes:
- You can now be subscribed to Workspaces with multiple user accounts, using the overflow menu (**...**) option on the command bar at the top of the client. To differentiate Workspaces, the Workspace titles now include the username, as do all app shortcuts titles. - Added additional information to subscription error messages to improve troubleshooting.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1104
-*Date published: 06/23/2020*
+*Date published: June 23, 2020*
+
+In this release, we've made the following changes:
- Updated the automatic discovery logic for the **Subscribe** option to support the Azure Resource Manager-integrated version of Azure Virtual Desktop. Customers with only Azure Virtual Desktop resources should no longer need to provide consent for Azure Virtual Desktop (classic). - Improved support for high-DPI devices with scale factor up to 400%.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.1026
-*Date published: 05/27/2020*
+*Date published: May 27, 2020*
+
+In this release, we've made the following changes:
- When subscribing, you can now choose your account instead of typing your email address. - Added a new **Subscribe with URL** option that allows you to specify the URL of the Workspace you are subscribing to or leverage email discovery when available in cases where we can't automatically find your resources. This is similar to the subscription process in the other Remote Desktop clients. This can be used to subscribe directly to Azure Virtual Desktop workspaces.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.945
-*Date published: 04/28/2020*
+*Date published: April 28, 2020*
+
+In this release, we've made the following changes:
- Added new display settings options for desktop connections available when right-clicking a desktop icon on the Connection Center. - There are now three display configuration options: **All displays**, **Single display** and **Select displays**.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.790
-*Date published: 03/24/2020*
+*Date published: March 24, 2020*
+
+In this release, we've made the following changes:
- Renamed the "Update" action for Workspaces to "Refresh" for consistency with other Remote Desktop clients. - You can now refresh a Workspace directly from its context menu.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.675
-*Date published: 02/25/2020*
+*Date published: February 25, 2020*
+
+In this release, we've made the following changes:
- Connections to Azure Virtual Desktop are now blocked if the RDP file is missing the signature or one of the signscope properties has been modified. - When a Workspace is empty or has been removed, the Connection Center no longer appears to be empty.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.605
-*Date published: 01/29/2020*
+*Date published: January 29, 2020*
+
+In this release, we've made the following changes:
- You can now select which displays to use for desktop connections. To change this setting, right-click the icon of the desktop connection and select **Settings**. - Fixed an issue where the connection settings didn't display the correct available scale factors.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.535
-*Date published: 12/04/2019*
+*Date published: December 4, 2019*
+
+In this release, we've made the following changes:
- You can now access information about updates directly from the more options button on the command bar at the top of the client. - You can now report feedback from the command bar of the client.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.431
-*Date published: 11/12/2019*
+*Date published: November 12, 2019*
+
+In this release, we've made the following changes:
- The 32-bit and ARM64 versions of the client are now available! - The client now saves any changes you make to the connection bar (such as its position, size, and pinned state) and applies those changes across sessions.
Fixed an issue where the number pad didn't work on initial focus.
## Updates for version 1.2.247
-*Date published: 09/17/2019*
+*Date published: September 17, 2019*
+
+In this release, we've made the following changes:
- Improved the fallback languages for localized versions. (For example, FR-CA will properly display in French instead of English.) - When removing a subscription, the client now properly removes the saved credentials from Credential Manager.
virtual-machine-scale-sets Disk Encryption Extension Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-extension-sequencing.md
For a more in-depth template, see:
## Next steps - Learn more about extension sequencing: [Sequence extension provisioning in Virtual Machine Scale Sets](virtual-machine-scale-sets-extension-sequencing.md).-- Learn more about the `provisionAfterExtensions` property: [Microsoft.Compute virtualMachineScaleSets/extensions template reference](/azure/templates/microsoft.compute/2018-10-01/virtualmachinescalesets/extensions).
+- Learn more about the `provisionAfterExtensions` property: [Microsoft.Compute virtualMachineScaleSets/extensions template reference](/azure/templates/microsoft.compute/2022-08-01/virtualmachinescalesets/extensions).
- [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md) - [Encrypt a Virtual Machine Scale Sets using the Azure CLI](disk-encryption-cli.md) - [Encrypt a Virtual Machine Scale Sets using the Azure PowerShell](disk-encryption-powershell.md)
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 11/28/2022 Last updated : 01/05/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/share-images-across-tenants.md
## Create a scale set using Azure CLI
-> [!IMPORTANT]
-> You can't currently create a Flexible Virtual Machine Scale Set from an image shared by another tenant.
Sign in as the service principal for tenant 1 using the appID, the app key, and the ID of tenant 1. You can use `az account show --query "tenantId"` to get the tenant IDs if needed.
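For reference, a minimal sketch of that sign-in, assuming placeholder values for the app ID, the app key (client secret), and the tenant 1 ID:

```azurecli-interactive
# Sign in as the service principal for tenant 1; all three values are placeholders.
az login --service-principal \
    --username "<appId>" \
    --password "<appKey>" \
    --tenant "<tenant1Id>"

# Confirm which tenant the CLI session is now using.
az account show --query "tenantId"
```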
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
For example, let's say you have an image of a 127 GB OS disk, that only occupies
- For disaster recovery scenarios, it is a best practice is to have at least two galleries, in different regions. You can still use image versions in other regions, but if the region your gallery is in goes down, you can't create new gallery resources or update existing ones. -- Set 'safetyProfile.allowDeletionOfReplicatedLocations' to false on Image versions to prevent accidental deletion of replicated regions and prevent outage. You can also set this using CLI(allow-replicated-location-deletion): https://learn.microsoft.com/cli/azure/sig/image-version?view=azure-cli-latest#az-sig-image-version-create
+- Set `safetyProfile.allowDeletionOfReplicatedLocations` to false on image versions to prevent accidental deletion of replicated regions and a resulting outage. You can also set this using the CLI parameter [allow-replicated-location-deletion](/cli/azure/sig/image-version#az-sig-image-version-create).
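As a hedged illustration of that CLI parameter (the gallery, image, and subscription values below are placeholders):

```azurecli-interactive
# Create an image version that disallows deletion of replicated locations.
# Resource names and the managed image ID are placeholders.
az sig image-version create \
    --resource-group myResourceGroup \
    --gallery-name myGallery \
    --gallery-image-definition myImageDefinition \
    --gallery-image-version 1.0.0 \
    --managed-image "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/myImage" \
    --allow-replicated-location-deletion false
```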
``` {
For example, let's say you have an image of a 127 GB OS disk, that only occupies
The following SDKs support creating Azure Compute Galleries: -- [.NET](/dotnet/api/overview/azure/virtualmachines/management)
+- [.NET](/dotnet/api/overview/azure/virtualmachines#management-apis)
- [Java](/java/azure/) - [Node.js](/javascript/api/overview/azure/arm-compute-readme) - [Python](/python/api/overview/azure/virtualmachines)
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
The Eav4-series run on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz)
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) | | --|--|--|--|--|--|--|--|
-| Standard\_E2a\_v4<sup>1</sup>|2|16|50|4|3000 / 46 / 23|2 | 800 |
-| Standard\_E4a\_v4|4|32|100|8|6000 / 93 / 46|2 | 1600 |
-| Standard\_E8a\_v4|8|64|200|16|12000 / 187 / 93|4 | 3200 |
-| Standard\_E16a\_v4|16|128|400|32|24000 / 375 / 187|8 | 6400 |
-| Standard\_E20a\_v4|20|160|500|32|30000 / 468 / 234|8 | 8000 |
-| Standard\_E32a\_v4|32|256|800|32|48000 / 750 / 375|8 | 12800 |
-| Standard\_E48a\_v4|48|384|1200|32|96000 / 1000 (500)|8 | 19200 |
-| Standard\_E64a\_v4|64|512|1600|32|96000 / 1000 (500)|8 | 25600 |
+| Standard\_E2a\_v4<sup>1</sup>|2|16|50|4|3000 / 46 / 23|2 | 2000 |
+| Standard\_E4a\_v4|4|32|100|8|6000 / 93 / 46|2 | 4000 |
+| Standard\_E8a\_v4|8|64|200|16|12000 / 187 / 93|4 | 8000 |
+| Standard\_E16a\_v4|16|128|400|32|24000 / 375 / 187|8 | 10000 |
+| Standard\_E20a\_v4|20|160|500|32|30000 / 468 / 234|8 | 12000 |
+| Standard\_E32a\_v4|32|256|800|32|48000 / 750 / 375|8 | 16000 |
+| Standard\_E48a\_v4|48|384|1200|32|96000 / 1000 (500)|8 | 24000 |
+| Standard\_E64a\_v4|64|512|1600|32|96000 / 1000 (500)|8 | 32000 |
| Standard\_E96a\_v4|96|672|2400|32|96000 / 1000 (500)|8 | 32000 | <sup>1</sup> Accelerated networking can only be applied to a single NIC.
The Easv4-series run on 2nd Generation AMD EPYC<sup>TM</sup> 7452 (up to 3.35GHz
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS / MBps<sup>1</sup> | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) | |--|--|--|--|--|--|--|--|--|--|--|
-| Standard_E2as_v4<sup>3</sup>|2|16|32|4|4000 / 32 (50)| 4000/100 |3200 / 48| 4000/200 |2 | 800 |
-| Standard_E4as_v4 <sup>2</sup>|4|32|64|8|8000 / 64 (100)| 8000/200 |6400 / 96| 8000/200 |2 | 1600 |
-| Standard_E8as_v4 <sup>2</sup>|8|64|128|16|16000 / 128 (200)| 16000/400 |12800 / 192| 16000/400 |4 | 3200 |
-| Standard_E16as_v4 <sup>2</sup>|16|128|256|32|32000 / 255 (400)| 32000/800 |25600 / 384| 32000/800 |8 | 6400 |
-| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)| 40000/1000 |32000 / 480| 40000/1000 |8 | 8000 |
-| Standard_E32as_v4<sup>2</sup>|32|256|512|32|64000 / 510 (800)| 64000/1600 |51200 / 768| 64000/1600 |8 | 12800 |
-| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)| 96000/2000 |76800 / 1148| 80000/2000 |8 | 19200 |
-| Standard_E64as_v4<sup>2</sup>|64|512|1024|32|128000 / 1020 (1600)| 128000/2000 |80000 / 1200| 80000/2000 |8 | 25600 |
+| Standard_E2as_v4<sup>3</sup>|2|16|32|4|4000 / 32 (50)| 4000/100 |3200 / 48| 4000/200 |2 | 2000 |
+| Standard_E4as_v4 <sup>2</sup>|4|32|64|8|8000 / 64 (100)| 8000/200 |6400 / 96| 8000/200 |2 | 4000 |
+| Standard_E8as_v4 <sup>2</sup>|8|64|128|16|16000 / 128 (200)| 16000/400 |12800 / 192| 16000/400 |4 | 8000 |
+| Standard_E16as_v4 <sup>2</sup>|16|128|256|32|32000 / 255 (400)| 32000/800 |25600 / 384| 32000/800 |8 | 10000 |
+| Standard_E20as_v4|20|160|320|32|40000 / 320 (500)| 40000/1000 |32000 / 480| 40000/1000 |8 | 12000 |
+| Standard_E32as_v4<sup>2</sup>|32|256|512|32|64000 / 510 (800)| 64000/1600 |51200 / 768| 64000/1600 |8 | 16000 |
+| Standard_E48as_v4|48|384|768|32|96000 / 1020 (1200)| 96000/2000 |76800 / 1148| 80000/2000 |8 | 24000 |
+| Standard_E64as_v4<sup>2</sup>|64|512|1024|32|128000 / 1020 (1600)| 128000/2000 |80000 / 1200| 80000/2000 |8 | 32000 |
| Standard_E96as_v4 <sup>2</sup>|96|672|1344|32|192000 / 1020 (2400)| 192000/2000 |80000 / 1200| 80000/2000 |8 | 32000 | <sup>1</sup> Easv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time. <br>
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-cli-quickstart.md
Previously updated : 05/17/2019 Last updated : 01/04/2023
It takes a few minutes to create the VM and supporting resources. The following
Azure disk encryption stores its encryption key in an Azure Key Vault. Create a Key Vault with [az keyvault create](/cli/azure/keyvault#az-keyvault-create). To enable the Key Vault to store encryption keys, use the --enabled-for-disk-encryption parameter. > [!Important]
-> Every key vault must have a name that is unique across Azure. In the examples below, replace \<your-unique-keyvault-name\> with the name you choose.
+> Every key vault must have a name that is unique across Azure. Replace \<your-unique-keyvault-name\> with the name you choose.
```azurecli-interactive az keyvault create --name "<your-unique-keyvault-name>" --resource-group "myResourceGroup" --location "eastus" --enabled-for-disk-encryption
Encrypt your VM with [az vm encryption](/cli/azure/vm/encryption), providing you
az vm encryption enable -g "MyResourceGroup" --name "myVM" --disk-encryption-keyvault "<your-unique-keyvault-name>" ```
-After a moment the process will return, "The encryption request was accepted. Please use 'show' command to monitor the progress.". The "show" command is [az vm show](/cli/azure/vm/encryption#az-vm-encryption-show).
+After a moment, the process returns "The encryption request was accepted. Use 'show' command to monitor the progress." The "show" command is [az vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show).
```azurecli-interactive az vm encryption show --name "myVM" -g "MyResourceGroup" ```
-When encryption is enabled, you will see the following in the returned output:
+When encryption is enabled, you will see "EnableEncryption" in the returned output:
``` "EncryptionOperation": "EnableEncryption"
az group delete --name "myResourceGroup"
## Next steps
-In this quickstart, you created a virtual machine, created a Key Vault that was enable for encryption keys, and encrypted the VM. Advance to the next article to learn more about more Azure Disk Encryption for Linux VMs.
+In this quickstart, you created a virtual machine, created a Key Vault that was enabled for encryption keys, and encrypted the VM. Advance to the next article to learn more about Azure Disk Encryption for Linux VMs.
> [!div class="nextstepaction"] > [Azure Disk Encryption overview](disk-encryption-overview.md)
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Previously updated : 05/15/2021 Last updated : 01/04/2023 # Azure Disk Encryption on an isolated network
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets.
When connectivity is restricted by a firewall, proxy requirement, or network security group (NSG) settings, the ability of the extension to perform needed tasks might be disrupted. This disruption can result in status messages such as "Extension status not available on the VM." ## Package management
-Azure Disk Encryption depends on a number of components, which are typically installed as part of ADE enablement if not already present. When behind a firewall or otherwise isolated from the Internet, these packages must be pre-installed or available locally.
+Azure Disk Encryption depends on many components, which are typically installed as part of ADE enablement if not already present. When behind a firewall or otherwise isolated from the Internet, these packages must be pre-installed or available locally.
Here are the packages necessary for each distribution. For a full list of supported distros and volume types, see [supported VMs and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems). - **Ubuntu 14.04, 16.04, 18.04**: lsscsi, psmisc, at, cryptsetup-bin, python-parted, python-six, procps, grub-pc-bin - **CentOS 7.2 - 7.9, 8.1, 8.2**: lsscsi, psmisc, lvm2, uuid, at, patch, cryptsetup, cryptsetup-reencrypt, pyparted, procps-ng, util-linux-- **CentOS 6.8**: lsscsi, psmisc, lvm2, uuid, at, cryptsetup-reencrypt, pyparted, python-six
+- **CentOS 6.8**: lsscsi, psmisc, lvm2, uuid, at, cryptsetup-reencrypt, parted, python-six
- **RedHat 7.2 - 7.9, 8.1, 8.2**: lsscsi, psmisc, lvm2, uuid, at, patch, cryptsetup, cryptsetup-reencrypt, procps-ng, util-linux - **RedHat 6.8**: lsscsi, psmisc, lvm2, uuid, at, patch, cryptsetup-reencrypt - **openSUSE 42.3, SLES 12-SP4, 12-SP3**: lsscsi, cryptsetup
Any network security group settings that are applied must still allow the endpoi
## Azure Disk Encryption with Azure AD (previous version)
-If using [Azure Disk Encryption with Azure AD (previous version)](disk-encryption-overview-aad.md), the [Microsoft Authentication Library](../../active-directory/develop/msal-overview.md) will need to be installed manually for all distros (in addition to the packages appropriate for the distro, as [listed above](#package-management)).
+If using [Azure Disk Encryption with Azure AD (previous version)](disk-encryption-overview-aad.md), the [Microsoft Authentication Library](../../active-directory/develop/msal-overview.md) will need to be installed manually for all distros (in addition to the [packages appropriate for the distro](#package-management)).
When encryption is being enabled with [Azure AD credentials](disk-encryption-linux-aad.md), the target VM must allow connectivity to both Azure Active Directory endpoints and Key Vault endpoints. Current Azure Active Directory authentication endpoints are maintained in sections 56 and 59 of the [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) documentation. Key Vault instructions are provided in the documentation on how to [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). ### Azure Instance Metadata Service
-The virtual machine must be able to access the [Azure Instance Metadata service](instance-metadata-service.md) endpoint, which uses a well-known non-routable IP address (`169.254.169.254`) that can be accessed only from within the VM. Proxy configurations that alter local HTTP traffic to this address (for example, adding an X-Forwarded-For header) are not supported.
+The virtual machine must be able to access the [Azure Instance Metadata service](instance-metadata-service.md) endpoint, which uses a well-known non-routable IP address (`169.254.169.254`) that can be accessed only from within the VM. Proxy configurations that alter local HTTP traffic to this address (for example, adding an X-Forwarded-For header) aren't supported.
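A quick way to verify that connectivity from inside the VM is a plain IMDS request; the API version below is only an example:

```bash
# Run inside the VM: the IMDS endpoint must answer directly, without going through a proxy.
curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
```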
## Next steps
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
Previously updated : 12/06/2021 Last updated : 01/04/2023
You can manage your key vault with Azure CLI using the [az keyvault](/cli/azure/
You can create a key vault by using the [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.keyvault/key-vault-create). 1. On the Azure quickstart template, click **Deploy to Azure**.
-2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and agreement, and then click **Purchase**.
+2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and agreement, and then select **Purchase**.
## <a name="bkmk_ADapp"></a> Set up an Azure AD app and service principal
You can manage your service principals with Azure CLI using the [az ad sp](/cli/
``` 3. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately.
-### <a name="bkmk_ADappRM"></a> Set up an Azure AD app and service principal though the Azure portal
+### <a name="bkmk_ADappRM"></a> Set up an Azure AD app and service principal through the Azure portal
Use the steps from the [Use portal to create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md) article to create an Azure AD application. Each step listed below will take you directly to the article section to complete. 1. [Verify required permissions](../../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)
az keyvault set-policy --name "MySecureVault" --spn "<spn created with CLI/the A
### <a name="bkmk_KVAPRM"></a> Set the key vault access policy for the Azure AD app with the portal 1. Open the resource group with your key vault.
-2. Select your key vault, go to **Access Policies**, then click **Add new**.
+2. Select your key vault, go to **Access Policies**, then select **Add new**.
3. Under **Select principal**, search for the Azure AD application you created and select it. 4. For **Key permissions**, check **Wrap Key** under **Cryptographic Operations**. 5. For **Secret permissions**, check **Set** under **Secret Management Operations**.
-6. Click **OK** to save the access policy.
+6. Select **OK** to save the access policy.
![Azure Key Vault cryptographic operations - Wrap Key](./media/disk-encryption/keyvault-portal-fig3.png)
Use [az keyvault update](/cli/azure/keyvault#az-keyvault-update) to enable disk
1. Select your keyvault, go to **Access Policies**, and **Click to show advanced access policies**. 2. Select the box labeled **Enable access to Azure Disk Encryption for volume encryption**. 3. Select **Enable access to Azure Virtual Machines for deployment** and/or **Enable Access to Azure Resource Manager for template deployment**, if needed.
-4. Click **Save**.
+4. Select **Save**.
![Azure key vault advanced access policies](./media/disk-encryption/keyvault-portal-fig4.png)
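For the `az keyvault update` route mentioned above, a minimal sketch (the vault and resource group names are placeholders):

```azurecli-interactive
# Allow Azure Disk Encryption to retrieve secrets and wrap keys in this vault.
az keyvault update --name "MySecureVault" --resource-group "MyResourceGroup" \
    --enabled-for-disk-encryption true

# Optionally also allow VM deployment and template deployment to use the vault.
az keyvault update --name "MySecureVault" --resource-group "MyResourceGroup" \
    --enabled-for-deployment true --enabled-for-template-deployment true
```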
virtual-machines Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault.md
Previously updated : 08/06/2019 Last updated : 01/04/2023
You may also, if you wish, generate or import a key encryption key (KEK).
## Install tools and connect to Azure
-The steps in this article can be completed with the [Azure CLI](/cli/azure/), the [Azure PowerShell Az module](/powershell/azure/), or the [Azure portal](https://portal.azure.com).
+The steps in this article can be completed with the [Azure CLI](/cli/azure/), the [Azure PowerShell Az module](/powershell/azure/), or the [Azure portal](https://portal.azure.com).
While the portal is accessible through your browser, Azure CLI and Azure PowerShell require local installation; see [Azure Disk Encryption for Linux: Install tools](disk-encryption-linux.md#install-tools-and-connect-to-azure) for details.
virtual-machines Disk Encryption Linux Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux-aad.md
Previously updated : 03/15/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Overview Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview-aad.md
Previously updated : 03/15/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Previously updated : 08/06/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-portal-quickstart.md
Previously updated : 10/02/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-powershell-quickstart.md
Previously updated : 05/17/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-troubleshooting.md
Before taking any of the steps below, first ensure that the VMs you are attempti
- [Networking requirements](disk-encryption-overview.md#networking-requirements) - [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements)
-
## Troubleshooting Linux OS disk encryption
virtual-machines Disk Encryption Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-upgrade.md
# Upgrading the Azure Disk Encryption version
-The first version of Azure Disk Encryption (ADE) relied on Azure Active Directory (AAD) for authentication; the current version does not. We strongly encourage the use of the newest version.
+The first version of Azure Disk Encryption (ADE) relied on Azure Active Directory (Azure AD) for authentication; the current version does not. We strongly encourage the use of the newest version.
## Determine ADE version
Choose the "AzureDiskEncryption" extension for Windows or "AzureDiskEncryptionFo
## How to migrate
-Migration from Azure Disk Encryption (with AAD) to Azure Disk Encryption (without AAD) is only available through Azure PowerShell. Ensure you have the latest version of Azure PowerShell and at least the [Azure PowerShell Az module version 5.9.0](/powershell/azure/new-azureps-module-az) installed .
+Migration from Azure Disk Encryption (with Azure AD) to Azure Disk Encryption (without Azure AD) is only available through Azure PowerShell. Ensure you have the latest version of Azure PowerShell and at least the [Azure PowerShell Az module version 5.9.0](/powershell/azure/new-azureps-module-az) installed.
-To upgrade from Azure Disk Encryption (with AAD) to Azure Disk Encryption (without AAD), use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) PowerShell cmdlet.
+To upgrade from Azure Disk Encryption (with Azure AD) to Azure Disk Encryption (without Azure AD), use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) PowerShell cmdlet.
> [!WARNING]
-> The Set-AzVMDiskEncryptionExtension cmdlet must only be used on VMs encrypted with Azure Disk Encryption (with AAD). Attempting to migrate an unencrypted VM, or a VM encrypted with Azure Disk Encryption (without AAD), will result in a terminal error.
+> The Set-AzVMDiskEncryptionExtension cmdlet must only be used on VMs encrypted with Azure Disk Encryption (with Azure AD). Attempting to migrate an unencrypted VM, or a VM encrypted with Azure Disk Encryption (without Azure AD), will result in a terminal error.
```azurepowershell-interactive Set-AzVMDiskEncryptionExtension -ResourceGroupName <resourceGroupName> -VMName <vmName> -Migrate
When the cmdlet prompts you for confirmation, enter "Y". The ADE version will b
> Set-AzVMDiskEncryptionExtension -ResourceGroupName myResourceGroup -VMName myVM -Migrate Update AzureDiskEncryption version?
-This cmdlet updates Azure Disk Encryption version to single pass (Azure Disk Encryption without AAD). This may reboot
+This cmdlet updates Azure Disk Encryption version to single pass (Azure Disk Encryption without Azure AD). This may reboot
the machine and takes 10-15 minutes to finish. Are you sure you want to continue? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y Azure Disk Encryption Extension Public Settings
Azure Disk Encryption Extension Public Settings
"SequenceVersion": "MigrateFlag": Migrate "KeyVaultURL": https://myKeyVault.vault.azure.net/
-"AADClientID": d29edf8c-3fcb-42e7-8410-9e39fdf0dd70
+"Azure ADClientID": d29edf8c-3fcb-42e7-8410-9e39fdf0dd70
"KeyEncryptionKeyURL": "KekVaultResourceId": "EncryptionOperation": EnableEncryption
-"AADClientCertThumbprint":
+"Azure ADClientCertThumbprint":
"VolumeType": "KeyEncryptionAlgorithm":
-Running ADE extension (with AAD) for -Migrate..
-ADE extension (with AAD) is now complete. Updating VM model..
-Running ADE extension (without AAD) for -Migrate..
-ADE extension (without AAD) is now complete. Clearing VM model..
+Running ADE extension (with Azure AD) for -Migrate..
+ADE extension (with Azure AD) is now complete. Updating VM model..
+Running ADE extension (without Azure AD) for -Migrate..
+ADE extension (without Azure AD) is now complete. Clearing VM model..
RequestId IsSuccessStatusCode StatusCode ReasonPhrase - -
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 12/07/2022 Last updated : 01/03/2023
If you're providing a backup solution for IaaS VMs in Azure, you should use dire
## Secure uploads with Azure AD
-If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all public cloud regions, it is a currently only available as a preview offering in Azure Government and Azure China regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level, to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Azure AD, reach out to this email: azuredisks@microsoft .com
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level, to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Azure AD, reach out to azuredisks@microsoft.com.
### Prerequisites - [Install the Azure CLI](/cli/azure/install-azure-cli).
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md
Previously updated : 12/07/2022 Last updated : 01/03/2023 # Download a Linux VHD from Azure
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
If you need to remove your action run command Linux extension, refer to the command below:
```azurecli-interactive az vm run-command invoke --command-id RemoveRunCommandLinuxExtension --name vmname -g rgname ```
+> [!NOTE]
+> When you apply a Run Command again, the extension will get installed automatically. You can use the extension removal command to troubleshoot any issues related to the extension.
## Next steps
virtual-machines Tutorial Azure Devops Blue Green Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-azure-devops-blue-green-strategy.md
Title: Configure canary deployments for Azure Linux virtual machines
+ Title: Configure blue-green deployments for Azure Linux virtual machines
description: Learn how to set up a classic release pipeline and deploy to Linux virtual machines using the blue-green deployment strategy. tags: azure-devops-pipelines
virtual-machines Tutorial Create Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-create-vmss.md
Title: "Tutorial: Create a Linux virtual machine scale set"
-description: Learn how to create and deploy a highly available application on Linux VMs using a virtual machine scale set
+ Title: "Tutorial: Create a Linux Virtual Machine Scale Set"
+description: Learn how to create and deploy a highly available application on Linux VMs using a Virtual Machine Scale Set
#Customer intent: As an IT administrator, I want to learn about autoscaling VMs in Azure so that I can deploy a highly-available and scalable infrastructure.
-# Tutorial: Create a virtual machine scale set and deploy a highly available app on Linux
+# Tutorial: Create a Virtual Machine Scale Set and deploy a highly available app on Linux
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets
-Virtual machine scale sets with [Flexible orchestration](../flexible-virtual-machine-scale-sets.md) let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
+Virtual Machine Scale Sets with [Flexible orchestration](../flexible-virtual-machine-scale-sets.md) let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
-In this tutorial, you deploy a virtual machine scale set in Azure and learn how to:
+In this tutorial, you deploy a Virtual Machine Scale Set in Azure and learn how to:
> [!div class="checklist"] > * Create a resource group.
Learn more about the differences between Uniform scale sets and Flexible scale s
Use the Azure portal to create a Flexible scale set. 1. Open the [Azure portal](https://portal.azure.com).
-1. Search for and select **Virtual machine scale sets**.
-1. Select **Create** on the **Virtual machine scale sets** page. The **Create a virtual machine scale set** will open.
+1. Search for and select **Virtual Machine Scale Set**.
+1. Select **Create** on the **Virtual Machine Scale Sets** page. The **Create a Virtual Machine Scale Set** page will open.
1. Select the subscription that you want to use for **Subscription**. 1. For **Resource group**, select **Create new** and type *myVMSSRG* for the name and then select **OK**. :::image type="content" source="media/tutorial-create-vmss/flex-project-details.png" alt-text="Project details.":::
-1. For **Virtual machine scale set name**, type *myVMSS*.
+1. For **Virtual Machine Scale Set name**, type *myVMSS*.
1. For **Region**, select a region that is close to you like *East US*. :::image type="content" source="media/tutorial-create-vmss/flex-details.png" alt-text="Name and region."::: 1. Leave **Availability zone** as blank for this example. 1. For **Orchestration mode**, select **Flexible**.
-1. Leave the default of *1* for fault domain count or choose another value from the drop-down.
- :::image type="content" source="media/tutorial-create-vmss/flex-orchestration.png" alt-text="Choose Flexible orchestration mode.":::
1. For **Image**, select *Ubuntu 18.04 LTS*. 1. For **Size**, leave the default value or select a size like *Standard_E2s_V3*. 1. In **Username** type *azureuser*.
Use the Azure portal to create a Flexible scale set.
:::image type="content" source="media/tutorial-create-vmss/load-balancer-settings.png" alt-text="Load balancer settings."::: 1. On the **Create a load balancer** page, type in a name for your load balancer and **Public IP address name**. 1. For **Domain name label**, type in a name to use as a prefix for your domain name. This name must be unique.
-1. When you are done, select **Create**.
+1. When you're done, select **Create**.
:::image type="content" source="media/tutorial-create-vmss/flex-load-balancer.png" alt-text="Create a load balancer."::: 1. Back on the **Networking** tab, leave the default name for the backend pool. 1. On the **Scaling** tab, leave the default instance count as *2*, or add in your own value. This is the number of VMs that will be created, so be aware of the costs and the limits on your subscription if you change this value.
Use the Azure portal to create a Flexible scale set.
- npm install express -y - nodejs index.js ```
-1. When you are done, select **Review + create**.
+1. When you're done, select **Review + create**.
1. Once you see that validation has passed, you can select **Create** at the bottom of the page to deploy your scale set.
-1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be download as **myKey.pem**. Make sure you know where the `.pem` file was downloaded, you will need the path to it in the next step.
+1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myKey.pem**. Make sure you know where the `.pem` file was downloaded; you'll need the path to it in the next step.
1. When the deployment is complete, select **Go to resource** to see your scale set.
Use the Azure portal to create a Flexible scale set.
On the page for the scale set, select **Instances** from the left menu.
-You will see a list of VMs that are part of your scale set. This list includes:
+You'll see a list of VMs that are part of your scale set. This list includes:
- The name of the VM - The computer name used by the VM.
Test your scale set by connecting to it from a browser.
## Delete your scale set
-When you are done, you should delete the resource group, which will delete everything you deployed for your scale set.
+When you're done, you should delete the resource group, which will delete everything you deployed for your scale set.
1. On the page for your scale set, select the **Resource group**. The page for your resource group will open. 1. At the top of the page, select **Delete resource group**. 1. In the **Are you sure you want to delete** page, type in the name of your resource group and then select **Delete**. ## Next steps
-In this tutorial, you created a virtual machine scale set. You learned how to:
+In this tutorial, you created a Virtual Machine Scale Set. You learned how to:
> [!div class="checklist"] > * Create a resource group.
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
This feature isn't supported in Azure NP VMs.
**A:** Run `xbutil query` and look at the lower portion of the output.
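A minimal sketch of that check from inside the VM, assuming the Xilinx XRT tools are installed and on the PATH:

```bash
# Query the FPGA card; the relevant details appear in the lower portion of the output.
xbutil query
```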
+**Q:** Do Azure NP VMs support FPGA bitstreams with Networking GT Kernel connections?
+**A:** No. The FPGA Attestation service performs a series of validations on a design checkpoint file and will generate an error if the user's application contains connections to the FPGA card's QSFP networking ports.
## Other sizes and information
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview.md
The [size](sizes.md) of the virtual machine that you use is determined by the wo
Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) based on the virtual machine's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
-## Virtual machine limits
-Your subscription has default [quota limits](../azure-resource-manager/management/azure-subscription-service-limits.md) in place that could impact the deployment of many virtual machines for your project. The current limit on a per subscription basis is 20 virtual machines per region. Limits can be raised by [filing a support ticket requesting an increase](../azure-portal/supportability/regional-quota-requests.md)
+## Virtual machine total core limits
+Your subscription has default [quota limits](../azure-resource-manager/management/azure-subscription-service-limits.md) in place that could impact the deployment of many virtual machines for your project. The current limit on a per-subscription basis is 20 virtual machine total cores per region. Limits can be raised by [filing a support ticket requesting an increase](../azure-portal/supportability/regional-quota-requests.md).
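As a hedged example, you can compare your current usage against the regional quota with the CLI (the region is a placeholder):

```azurecli-interactive
# List compute usage and limits for a region; check the total regional vCPUs entry.
az vm list-usage --location eastus --output table
```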
## Managed Disks
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
Throughout this section, refer to the application requirements checklist that yo
### Optimize IOPS, throughput, and latency at a glance
-The table below summarizes performance factors and the steps necessary to optimize IOPS, throughput, and latency. The sections following this summary will describe each factor is much more depth.
+The table below summarizes performance factors and the steps necessary to optimize IOPS, throughput, and latency. The sections following this summary will describe each factor in much more depth.
For more information on VM sizes and on the IOPS, throughput, and latency available for each type of VM, see [Sizes for virtual machines in Azure](sizes.md).
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
To set an image for shallow replication, use `--replication-mode Shallow` with t
The following SDKs support creating Azure Compute Galleries: -- [.NET](/dotnet/api/overview/azure/virtualmachines/management)
+- [.NET](/dotnet/api/azure.resourcemanager.compute)
- [Java](/java/azure/) - [Node.js](/javascript/api/overview/azure/arm-compute-readme) - [Python](/python/api/overview/azure/virtualmachines)
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series
[HC-series](hc-series.md) VMs are optimized for applications driven by dense computation, such as implicit finite element analysis, molecular dynamics, and computational chemistry. HC VMs feature 44 Intel Xeon Platinum 8168 processor cores, 8 GB of RAM per CPU core, and no hyperthreading. The Intel Xeon Platinum platform supports Intel's rich ecosystem of software tools such as the Intel Math Kernel Library.
-[H-series](h-series.md) VMs are optimized for applications driven by high CPU frequencies or large memory per core requirements. H-series VMs feature 8 or 16 Intel Xeon E5 2667 v3 processor cores, 7 or 14 GB of RAM per CPU core, and no hyperthreading. H-series features 56 Gb/sec Mellanox FDR InfiniBand in a non-blocking fat tree configuration for consistent RDMA performance. H-series VMs support Intel MPI 5.x and MS-MPI.
- > [!NOTE] > All HBv3, HBv2, HB, and HC-series VMs have exclusive access to the physical servers. There is only 1 VM per physical server and there is no shared multi-tenancy with any other VMs for these VM sizes.
-> [!NOTE]
-> The [A8 - A11 VMs](./sizes-previous-gen.md#a-seriescompute-intensive-instances) are retired as of 3/2021. No new VM deployments of these sizes are now possible. If you have existing VMs, refer to emailed notifications for next steps including migrating to other VM sizes in [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/).
- ## RDMA-capable instances Most of the HPC VM sizes feature a network interface for remote direct memory access (RDMA) connectivity. Selected [N-series](./nc-series.md) sizes designated with 'r' are also RDMA-capable. This interface is in addition to the standard Azure Ethernet network interface available in the other VM sizes.
This secondary interface allows the RDMA-capable instances to communicate over a
> IP over IB is only supported on the SR-IOV enabled VMs. > RDMA is not enabled over the Ethernet network. -- **Operating System** - Linux distributions such as CentOS, RHEL, Ubuntu, SUSE are commonly used. Windows Server 2016 and newer versions are supported on all the HPC series VMs. Windows Server 2012 R2 and Windows Server 2012 are also supported on the non-SR-IOV enabled VMs. Note that [Windows Server 2012 R2 is not supported on HBv2 onwards as VM sizes with more than 64 (virtual or physical) cores](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows). See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately. The respective VM size pages also list out the software stack support.
+- **Operating System** - Linux distributions such as CentOS, RHEL, Ubuntu, SUSE are commonly used. Windows Server 2016 and newer versions are supported on all the HPC series VMs. Note that [Windows Server 2012 R2 is not supported on HBv2 onwards as VM sizes with more than 64 (virtual or physical) cores](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows). See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately. The respective VM size pages also list out the software stack support.
- **InfiniBand and Drivers** - On InfiniBand enabled VMs, the appropriate drivers are required to enable RDMA. See [VM Images](./workloads/hpc/configure.md) for a list of supported VM Images on the Marketplace and how they can be configured appropriately. Also see [enabling InfiniBand](./workloads/hpc/enable-infiniband.md) to learn about VM extensions or manual installation of InfiniBand drivers. -- **MPI** - The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. On non-SR-IOV enabled VMs, supported MPI implementations use the Microsoft Network Direct (ND) interface to communicate between VMs. Hence, only Intel MPI 5.x and Microsoft MPI (MS-MPI) 2012 R2 or later versions are supported. Later versions of the Intel MPI runtime library may or may not be compatible with the Azure RDMA drivers. See [Setup MPI for HPC](./workloads/hpc/setup-mpi.md) for more details on setting up MPI on HPC VMs on Azure.
+- **MPI** - The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. See [Setup MPI for HPC](./workloads/hpc/setup-mpi.md) for more details on setting up MPI on HPC VMs on Azure.
> [!NOTE] > **RDMA network address space**: The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network.
Azure provides several options to create clusters of HPC VMs that can communicat
- Learn more about [configuring your VMs](./workloads/hpc/configure.md), [enabling InfiniBand](./workloads/hpc/enable-infiniband.md), [setting up MPI](./workloads/hpc/setup-mpi.md) and optimizing HPC applications for Azure at [HPC Workloads](./workloads/hpc/overview.md). - Review the [HBv3-series overview](./workloads/hpc/hbv3-series-overview.md) and [HC-series overview](./workloads/hpc/hc-series-overview.md). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).-- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-cli-quickstart.md
Previously updated : 05/17/2019 Last updated : 01/04/2023
It takes a few minutes to create the VM and supporting resources. The following
## Create a Key Vault configured for encryption keys
-Azure disk encryption stores its encryption key in an Azure Key Vault. Create a Key Vault with [az keyvault create](/cli/azure/keyvault#az-keyvault-create). To enable the Key Vault to store encryption keys, use the --enabled-for-disk-encryption parameter.
+Azure disk encryption stores its encryption key in an Azure Key Vault. Create a Key Vault with [az keyvault create](/cli/azure/keyvault#az-keyvault-create). To enable the Key Vault to store encryption keys, use the --enabled-for-disk-encryption parameter.
> [!Important]
-> Each Key Vault must have a unique name. The following example creates a Key Vault named *myKV*, but you must name yours something different.
+> Each Key Vault must have a unique name. This example creates a Key Vault named *myKV*, but you must name yours something different.
```azurecli-interactive az keyvault create --name "myKV" --resource-group "myResourceGroup" --location eastus --enabled-for-disk-encryption
az group delete --name myResourceGroup
## Next steps
-In this quickstart, you created a virtual machine, created a Key Vault that was enable for encryption keys, and encrypted the VM. Advance to the next article to learn more about Azure Disk Encryption prerequisites for IaaS VMs.
+In this quickstart, you created a virtual machine, created a Key Vault that was enabled for encryption keys, and encrypted the VM. Advance to the next article to learn more about Azure Disk Encryption prerequisites for IaaS VMs.
> [!div class="nextstepaction"] > [Azure Disk Encryption overview](disk-encryption-overview.md)
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
Previously updated : 12/06/2021 Last updated : 01/04/2023
Azure Disk Encryption is integrated with [Azure Key Vault](../../key-vault/index
### Create a key vault with PowerShell
-You can create a key vault with Azure PowerShell using the [New-AzKeyVault](/powershell/module/az.keyvault/New-azKeyVault) cmdlet. For additional cmdlets for Key Vault, see [Az.KeyVault](/powershell/module/az.keyvault/).
+You can create a key vault with Azure PowerShell using the [New-AzKeyVault](/powershell/module/az.keyvault/New-azKeyVault) cmdlet. For other Key Vault cmdlets, see [Az.KeyVault](/powershell/module/az.keyvault/).
1. Create a new resource group, if needed, with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). To list data center locations, use [Get-AzLocation](/powershell/module/az.resources/get-azlocation).
You can manage your key vault with Azure CLI using the [az keyvault](/cli/azure/
You can create a key vault by using the [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.keyvault/key-vault-create).
-1. On the Azure quickstart template, click **Deploy to Azure**.
-2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and agreement, and then click **Purchase**.
+1. On the Azure quickstart template, select **Deploy to Azure**.
+2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and agreement, and then select **Purchase**.
## Set up an Azure AD app and service principal
To execute the following commands, get and use the [Azure AD PowerShell module](
$servicePrincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId -Role Contributor ```
-3. The $azureAdApplication.ApplicationId is the Azure AD ClientID and the $aadClientSecret is the client secret that you will use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately. Running `$azureAdApplication.ApplicationId` will show you the ApplicationID.
+3. The $azureAdApplication.ApplicationId is the Azure AD ClientID and the $aadClientSecret is the client secret that you'll use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately. Running `$azureAdApplication.ApplicationId` will show you the ApplicationID.
### Set up an Azure AD app and service principal with Azure CLI
You can manage your service principals with Azure CLI using the [az ad sp](/cli/
3. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately. ### Set up an Azure AD app and service principal through the Azure portal
-Use the steps from the [Use portal to create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md) article to create an Azure AD application. Each step listed below will take you directly to the article section to complete.
+Use the steps from the [Use portal to create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md) article to create an Azure AD application. Each of these steps will take you directly to the article section to complete.
1. [Verify required permissions](../../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app) 2. [Create an Azure Active Directory application](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
az keyvault set-policy --name "MySecureVault" --spn "<spn created with CLI/the A
### Set the key vault access policy for the Azure AD app with the portal 1. Open the resource group with your key vault.
-2. Select your key vault, go to **Access Policies**, then click **Add new**.
+2. Select your key vault, go to **Access Policies**, then select **Add new**.
3. Under **Select principal**, search for the Azure AD application you created and select it. 4. For **Key permissions**, check **Wrap Key** under **Cryptographic Operations**. 5. For **Secret permissions**, check **Set** under **Secret Management Operations**.
-6. Click **OK** to save the access policy.
+6. Select **OK** to save the access policy.
![Azure Key Vault cryptographic operations - Wrap Key](../media/disk-encryption/keyvault-portal-fig3.png)
Use [az keyvault update](/cli/azure/keyvault#az-keyvault-update) to enable disk
1. Select your keyvault, go to **Access Policies**, and **Click to show advanced access policies**. 2. Select the box labeled **Enable access to Azure Disk Encryption for volume encryption**. 3. Select **Enable access to Azure Virtual Machines for deployment** and/or **Enable Access to Azure Resource Manager for template deployment**, if needed.
-4. Click **Save**.
+4. Select **Save**.
![Azure key vault advanced access policies](../media/disk-encryption/keyvault-portal-fig4.png)
Use [az keyvault update](/cli/azure/keyvault#az-keyvault-update) to enable disk
## Set up a key encryption key (optional) If you want to use a key encryption key (KEK) for an additional layer of security for encryption keys, add a KEK to your key vault. Use the [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey) cmdlet to create a key encryption key in the key vault. You can also import a KEK from your on-premises key management HSM. For more information, see [Key Vault Documentation](../../key-vault/keys/hsm-protected-keys.md). When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the encryption secrets before writing to Key Vault.
-* When generating keys, use an RSA key type. Azure Disk Encryption does not yet support using Elliptic Curve keys.
+* When generating keys, use an RSA key type. Azure Disk Encryption doesn't yet support using Elliptic Curve keys.
* Your key vault secret and KEK URLs must be versioned. Azure enforces this restriction of versioning. For valid secret and KEK URLs, see the following examples:
virtual-machines Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault.md
Previously updated : 08/06/2019 Last updated : 01/04/2023
virtual-machines Disk Encryption Overview Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview-aad.md
Previously updated : 03/15/2019 Last updated : 01/04/2023
This article supplements [Azure Disk Encryption for Windows VMs](disk-encryption
- To write the encryption keys to your key vault, the IaaS VM must be able to connect to the key vault endpoint. - The IaaS VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files. - If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs. For more information, see [Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md).
- - The VM to be encrypted must be configured to use TLS 1.2 as the default protocol. If TLS 1.0 has been explicitly disabled and the .NET version has not been updated to 4.6 or higher, the following registry change will enable ADE to select the more recent TLS version:
+ - The VM to be encrypted must be configured to use TLS 1.2 as the default protocol. If TLS 1.0 has been explicitly disabled and the .NET version hasn't been updated to 4.6 or higher, the following registry change will enable ADE to select the more recent TLS version:
```console [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview.md
Previously updated : 10/05/2019 Last updated : 01/04/2023
Azure Disk Encryption is not available on [Basic, A-series VMs](https://azure.mi
- Windows 10 Enterprise multi-session and later. > [!NOTE]
-> Windows Server 2022 and Windows 11 do not support an RSA 2048 bit key. For more details, see [FAQ: What size should I use for my key encryption key?](disk-encryption-faq.yml#what-size-should-i-use-for-my-key-encryption-key--kek--)
+> Windows Server 2022 and Windows 11 do not support an RSA 2048 bit key. For more information, see [FAQ: What size should I use for my key encryption key?](disk-encryption-faq.yml#what-size-should-i-use-for-my-key-encryption-key--kek--)
> > Windows Server 2008 R2 requires the .NET Framework 4.5 to be installed for encryption; install it from Windows Update with the optional update Microsoft .NET Framework 4.5.2 for Windows Server 2008 R2 x64-based systems ([KB2901983](https://www.catalog.update.microsoft.com/Search.aspx?q=KB2901983)). >
Azure Disk Encryption uses the BitLocker external key protector for Windows VMs.
BitLocker policy on domain joined virtual machines with custom group policy must include the following setting: [Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key](/windows/security/information-protection/bitlocker/bitlocker-group-policy-settings). Azure Disk Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that didn't have the correct policy setting, apply the new policy, and force the new policy to update (gpupdate.exe /force). Restarting may be required.
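For example, after applying the corrected policy you can force the refresh mentioned above (a restart may still be required):

```azurepowershell-interactive
# Force a Group Policy refresh so the corrected BitLocker settings take effect.
gpupdate.exe /force
```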
-Microsoft Bitlocker Administration and Monitoring (MBAM) group policy features are not compatible with Azure Disk Encryption.
+Microsoft BitLocker Administration and Monitoring (MBAM) group policy features aren't compatible with Azure Disk Encryption.
> [!WARNING]
> Azure Disk Encryption **does not store recovery keys**. If the [Interactive logon: Machine account lockout threshold](/windows/security/threat-protection/security-policy-settings/interactive-logon-machine-account-lockout-threshold) security setting is enabled, machines can only be recovered by providing a recovery key via the serial console. Instructions for ensuring the appropriate recovery policies are enabled can be found in the [BitLocker recovery guide plan](/windows/security/information-protection/bitlocker/bitlocker-recovery-guide-plan).
virtual-machines Disk Encryption Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-portal-quickstart.md
Previously updated : 10/02/2019 Last updated : 01/04/2023
**Applies to:** :heavy_check_mark: Windows VMs
-Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user interface to create VMs and their associated resources. In this quickstart you will use the Azure portal to deploy a Windows virtual machine, create a key vault for the storage of encryption keys, and encrypt the VM.
+Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user interface to create VMs and their associated resources. In this quickstart you'll use the Azure portal to deploy a Windows virtual machine, create a key vault for the storage of encryption keys, and encrypt the VM.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="../media/disk-encryption/portal-quickstart-vm-creation-storage.png" alt-text="ResourceGroup creation screen":::
-1. Click "Review + Create".
-1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
+1. Select "Review + Create".
+1. On the **Create a virtual machine** page, you can see the details about the VM you're about to create. When you're ready, select **Create**.
It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
It will take a few minutes for your VM to be deployed. When the deployment is fi
1. To the left of **Key vault and key**, select **Click to select a key**.
1. On the **Select key from Azure Key Vault**, under the **Key Vault** field, select **Create new**.
-1. On the **Create key vault** screen, ensure that the Resource Group is *myResourceGroup*, and give your key vault a name. Every key vault across Azure must have an unique name.
+1. On the **Create key vault** screen, ensure that the Resource Group is *myResourceGroup*, and give your key vault a name. Every key vault across Azure must have a unique name.
1. On the **Access Policies** tab, check the **Azure Disk Encryption for volume encryption** box.

   :::image type="content" source="../media/disk-encryption/portal-quickstart-keyvault-enable.png" alt-text="disks and encryption selection":::

1. Select **Review + create**.
-1. After the key vault has passed validation, select **Create**. This will return you to the **Select key from Azure Key Vault** screen.
+1. After the key vault has passed validation, select **Create**. You will return to the **Select key from Azure Key Vault** screen.
1. Leave the **Key** field blank and choose **Select**.
-1. At the top of the encryption screen, click **Save**. A popup will warn you that the VM will reboot. Click **Yes**.
+1. At the top of the encryption screen, select **Save**. A popup will warn you that the VM will reboot. Select **Yes**.
## Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and a
## Next steps
-In this quickstart, you created a Key Vault that was enable for encryption keys, created a virtual machine, and enabled the virtual machine for encryption.
+In this quickstart, you created a Key Vault that was enabled for encryption keys, created a virtual machine, and enabled the virtual machine for encryption.
> [!div class="nextstepaction"]
> [Azure Disk Encryption overview](disk-encryption-overview.md)
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-powershell-quickstart.md
Previously updated : 05/17/2019 Last updated : 01/04/2023
The Azure PowerShell module is used to create and manage Azure resources from th
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Create a resource group

Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed:
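A minimal sketch (the resource group name and location are placeholders, not necessarily the values used later in this quickstart):

```azurepowershell-interactive
# Create a resource group to hold the VM, key vault, and related resources.
New-AzResourceGroup -Name "MyResourceGroup" -Location "EastUS"
```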
$KeyVault = Get-AzKeyVault -VaultName MyKV -ResourceGroupName MyResourceGroup
Set-AzVMDiskEncryptionExtension -ResourceGroupName MyResourceGroup -VMName MyVM -DiskEncryptionKeyVaultUrl $KeyVault.VaultUri -DiskEncryptionKeyVaultId $KeyVault.ResourceId
```
-After a few minutes the process will return the following:
+After a few minutes the process will return the following output:
```
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
You can verify the encryption process by running [Get-AzVmDiskEncryptionStatus](
Get-AzVmDiskEncryptionStatus -VMName MyVM -ResourceGroupName MyResourceGroup
```
-When encryption is enabled, you will see the following in the returned output:
+When encryption is enabled, you will see the following fields in the returned output:
```
OsVolumeEncrypted : Encrypted
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-sample-scripts.md
Previously updated : 08/06/2019 Last updated : 01/04/2023
This article provides sample scripts for preparing pre-encrypted VHDs and other
- **List all encrypted VMSS instances in your subscription**
- You can find all ADE-encrypted VMSS instances and the extension version, in all resource groups present in a subscription, using [this PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/Find_1passAdeVersion_VMSS.ps1).
+ You can find all ADE-encrypted Virtual Machine Scale Sets instances and the extension version, in all resource groups present in a subscription, using [this PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/Find_1passAdeVersion_VMSS.ps1).
- **List all disk encryption secrets used for encrypting VMs in a key vault**
The following table shows which parameters can be used in the PowerShell script:
|Parameter|Description|Mandatory?|
|--|--|--|
-|$resourceGroupName| Name of the resource group to which the KeyVault belongs to. A new resource group with this name will be created if one doesn't exist.| True|
+|$resourceGroupName| Name of the resource group to which the KeyVault belongs. A new resource group with this name will be created if one doesn't exist.| True|
|$keyVaultName|Name of the KeyVault in which encryption keys are to be placed. A new vault with this name will be created if one doesn't exist.| True|
|$location|Location of the KeyVault. Make sure the KeyVault and VMs to be encrypted are in the same location. Get a location list with `Get-AzLocation`.|True|
|$subscriptionId|Identifier of the Azure subscription to be used. You can get your Subscription ID with `Get-AzSubscription`.|True|
After BitLocker encryption is enabled, the local encrypted VHD needs to be uploa
```

## Upload the secret for the pre-encrypted VM to your key vault
-The disk encryption secret that you obtained previously must be uploaded as a secret in your key vault. This requires granting the set secret permission and the wrapkey permission to the account that will upload the secrets.
+The disk encryption secret that you obtained previously must be uploaded as a secret in your key vault. To do so, you must grant the set secret permission and the wrapkey permission to the account that will upload the secrets.
```powershell
# Typically, account Id is the user principal name (in user@domain.com format)
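# A hedged sketch, not necessarily the article's exact script: grant the account that
# uploads the secret the "set" secret permission and the "wrapKey" key permission.
# $vaultName, $rgName, and the user principal name below are placeholder values.
$vaultName = "MyKeyVault"
$rgName    = "MyResourceGroup"
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $rgName `
    -UserPrincipalName "user@domain.com" -PermissionsToSecrets set -PermissionsToKeys wrapKey
```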
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-troubleshooting.md
Previously updated : 08/06/2019 Last updated : 01/04/2023
This guide is for IT professionals, information security analysts, and cloud administrators whose organizations use Azure Disk Encryption. This article is to help with troubleshooting disk-encryption-related problems.
-Before taking any of the steps below, first ensure that the VMs you are attempting to encrypt are among the [supported VM sizes and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems), and that you have met all the prerequisites:
+Before taking any of these steps, first ensure that the VMs you're attempting to encrypt are among the [supported VM sizes and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems), and that you've met all the prerequisites:
- [Networking requirements](disk-encryption-overview.md#networking-requirements)
- [Group policy requirements](disk-encryption-overview.md#group-policy-requirements)
Before taking any of the steps below, first ensure that the VMs you are attempti
## Troubleshooting 'Failed to send DiskEncryptionData'
-When encrypting a VM fails with the error message "Failed to send DiskEncryptionData...", it is usually caused by one of the following situations:
+When encrypting a VM fails with the error message "Failed to send DiskEncryptionData...", it's usually caused by one of the following situations:
- Having the Key Vault existing in a different region and/or subscription than the Virtual Machine
- Advanced access policies in the Key Vault are not set to allow Azure Disk Encryption
- Key Encryption Key, when in use, has been disabled or deleted in the Key Vault
- Typo in the Resource ID or URL for the Key Vault or Key Encryption Key (KEK)
-- Special characters used while naming the VM, data disks, or keys. i.e _VMName, élite, etc
+- Special characters used while naming the VM, data disks, or keys. i.e _VMName, élite, etc.
- Unsupported encryption scenarios
- Network issues that prevent the VM/Host from accessing the required resources
DISKPART> list vol
Volume 2 D Temporary S NTFS Partition 13 GB Healthy Pagefile
```
-## Troubleshooting encryption status
+## Troubleshooting encryption status
-The portal may display a disk as encrypted even after it has been unencrypted within the VM. This can occur when low-level commands are used to directly unencrypt the disk from within the VM, instead of using the higher level Azure Disk Encryption management commands. The higher level commands not only unencrypt the disk from within the VM, but outside of the VM they also update important platform level encryption settings and extension settings associated with the VM. If these are not kept in alignment, the platform will not be able to report encryption status or provision the VM properly.
+The portal may display a disk as encrypted even after it has been unencrypted within the VM. This situation can occur when low-level commands are used to directly unencrypt the disk from within the VM, instead of using the higher level Azure Disk Encryption management commands. The higher level commands not only unencrypt the disk from within the VM, but outside of the VM they also update important platform level encryption settings and extension settings associated with the VM. If these are not kept in alignment, the platform will not be able to report encryption status or provision the VM properly.
To disable Azure Disk Encryption with PowerShell, use [Disable-AzVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) followed by [Remove-AzVMDiskEncryptionExtension](/powershell/module/az.compute/remove-azvmdiskencryptionextension). Running Remove-AzVMDiskEncryptionExtension before the encryption is disabled will fail.
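For example, a minimal sketch with placeholder resource names:

```azurepowershell-interactive
# Disable encryption first; removing the extension before encryption is disabled fails.
Disable-AzVMDiskEncryption -ResourceGroupName "MyResourceGroup" -VMName "MyVM"
Remove-AzVMDiskEncryptionExtension -ResourceGroupName "MyResourceGroup" -VMName "MyVM"
```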
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 12/07/2022 Last updated : 01/03/2023 linux
If you're providing a backup solution for IaaS VMs in Azure, you should use dire
## Secure uploads with Azure AD
-If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all public cloud regions, it is a currently only available as a preview offering in Azure Government and Azure China regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Azure AD, reach out to this email: azuredisks@microsoft .com
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Azure AD, reach out to this email: azuredisks@microsoft.com
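As an illustration (the role name and scope below are assumptions based on the managed disk data-access model, not taken from this article), an administrator might grant the uploading identity a data-access role like this:

```azurepowershell-interactive
# A hedged sketch: grant a user data-access rights for managed disks at resource-group scope.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Data Operator for Managed Disks" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```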
### Prerequisites

[!INCLUDE [disks-azure-ad-upload-download-prereqs](../../../includes/disks-azure-ad-upload-download-prereqs.md)]
$name = <desired-managed-disk-name>
# $Zone = <desired-zone>
# $sku = <desired-SKU>
# -DataAccessAuthMode 'AzureActiveDirectory'
+# -DiskHyperVGeneration = V1 or V2. This applies only to OS disks.
# To use $Zone or $sku, add -Zone or -DiskSKU parameters to the command
Add-AzVhd -LocalFilePath $path -ResourceGroupName $resourceGroup -Location $location -DiskName $name
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md
Previously updated : 12/07/2022 Last updated : 01/03/2023 # Download a Windows VHD from Azure
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
If needing to remove your action run command Windows extension, refer to the bel
```azurecli-interactive
az vm run-command invoke --command-id RemoveRunCommandWindowsExtension --name vmname -g rgname
```
+> [!NOTE]
+> When you apply a Run Command again, the extension will get installed automatically. You can use the extension removal command to troubleshoot any issues related to the extension.
## Next steps
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
This procedure is provided for reference only. RHEL PAYG images already have the
## Next steps
-* To create a Red Hat Enterprise Linux VM from an Azure Marketplace PAYG image and to use Azure-hosted RHUI, go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/RedHat.RedHatEnterpriseLinux610).
+* To create a Red Hat Enterprise Linux VM from an Azure Marketplace PAYG image and to use Azure-hosted RHUI, go to the [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/redhat.rhel-20190605).
* To learn more about the Red Hat images in Azure, go to the [documentation page](./redhat-images.md).
* Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
virtual-machines Automation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-get-started.md
description: Quickly get started with the SAP on Azure Deployment Automation Fra
Previously updated : 11/17/2021 Last updated : 1/2/2023
Get started quickly with the [SAP on Azure Deployment Automation Framework](auto
- An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A [download of the SAP software](automation-software.md) in your Azure environment.
+- Ability to [download the SAP software](automation-software.md) in your Azure environment.
- A [Terraform](https://www.terraform.io/) installation. For more information, also see the [Terraform on Azure documentation](/azure/developer/terraform/).
- An [Azure CLI](/cli/azure/install-azure-cli) installation on your local computer.
+- A service principal to use for the control plane deployment.
- Optionally, if you want to use PowerShell:
  - An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer.
  - The latest PowerShell modules. [Update the PowerShell module](/powershell/azure/install-az-ps#update-the-azure-powershell-module) if needed.
Some of the prerequisites may already be installed in your deployment environmen
Clone the repository and prepare the execution environment by using the following steps:
-1. Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment.
-
-# [Linux](#tab/linux)
-
-```bash
-mkdir ~/Azure_SAP_Automated_Deployment; cd $_
-git clone https://github.com/Azure/sap-automation.git
-```
-
-Prepare the environment using the following steps:
+- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment.
```bash
-export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation
-export ARM_SUBSCRIPTION_ID=<subscriptionID>
-```
-> [!NOTE]
-> Be sure to replace the sample value `<subscriptionID>` with your information.
-
-# [Windows](#tab/windows)
+mkdir ~/Azure_SAP_Automated_Deployment/config; cd $_
+git clone https://github.com/Azure/sap-automation-bootstrap.git
-```powershell
-mkdir C:\Azure_SAP_Automated_Deployment
-
-cd Azure_SAP_Automated_Deployment
-
+mkdir ~/Azure_SAP_Automated_Deployment/sap-automation; cd $_
git clone https://github.com/Azure/sap-automation.git
-```
-
-Import the PowerShell module
-```powershell
-Import-Module C:\Azure_SAP_Automated_Deployment\sap-automation\deploy\scripts\pwsh\SAPDeploymentUtilities\Output\SAPDeploymentUtilities\SAPDeploymentUtilities.psd1
+mkdir ~/Azure_SAP_Automated_Deployment/samples; cd $_
+git clone https://github.com/Azure/sap-automation-samples.git
```
-
> [!TIP]
-> The deployer already clones [SAP on Azure Deployment Automation Framework repository](https://github.com/Azure/sap-automation).
+> The deployer already clones the required repositories.
-## Copy the samples
+## Samples
-The repo contains a set of sample configuration files to start testing the deployment automation framework. You can copy them using the following steps.
+The `~/Azure_SAP_Automated_Deployment/samples` folder contains a set of sample configuration files to start testing the deployment automation framework. You can copy them using the following steps.
-# [Linux](#tab/linux)
```bash
cd ~/Azure_SAP_Automated_Deployment
-cp -Rp sap-automation/samples/WORKSPACES WORKSPACES
-```
-# [Windows](#tab/windows)
-
-```powershell
-cd C:\Azure_SAP_Automated_Deployment
-mkdir WORKSPACES
-
-xcopy /E sap-automation\samples\WORKSPACES WORKSPACES
+cp -Rp samples/Terraform/WORKSPACES config/WORKSPACES
```
--
## Next step
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
The workload zone provides the following services for the SAP Applications:
Before you design your workload zone layout, consider the following questions:
-* How many workload zones does your scenario require?
* In which regions do you need to deploy workloads?
-* How is DNS handled?
-* What storage type do you need for the shared storage?
-* What's your [deployment scenario](#supported-deployment-scenarios)?
+* How many workload zones does your scenario require (development, quality assurance, production, etc.)?
+* Are you deploying into new virtual networks or are you using existing virtual networks?
+* How is DNS configured (integrate with existing DNS or deploy a Private DNS zone in the control plane)?
+* What storage type do you need for the shared storage (Azure Files NFS, Azure NetApp Files)?
For more information, see [how to configure a workload zone deployment for automation](automation-deploy-workload-zone.md).
The following table shows the required permissions for the service principal:
## DevOps structure
-The Terraform automation templates are in the [SAP on Azure Deployment Automation Framework repository](https://github.com/Azure/sap-automation/). For most use cases, consider this repository as read-only and don't modify it.
+The deployment framework uses three separate repositories for the deployment artifacts. For your own parameter files, it's a best practice to keep these files in a source control repository that you manage.
-For your own parameter files, it's a best practice to keep these files in a source control repository that you manage. You can clone the [SAP on Azure Deployment Automation Framework bootstrap repository](https://github.com/Azure/sap-automation-bootstrap/) into your source control repository.
+### Main repository
+
+This repository contains the Terraform parameter files and the files needed for the Ansible playbooks for all the workload zone and system deployments.
+
+You can create this repository by cloning the [SAP on Azure Deployment Automation Framework bootstrap repository](https://github.com/Azure/sap-automation-bootstrap/) into your source control repository.
> [!IMPORTANT]
-> Your parameter file's name becomes the name of the Terraform state file. Make sure to use a unique parameter file name for this reason.
+> This repository must be the default repository for your Azure DevOps project.
-### Folder structure
+#### Folder structure
-The following sample folder hierarchy shows how to structure your configuration files along with the automation framework files. The first top-level folder, called **sap-automation**, has the automation framework files that you don't need to change in most use cases. The second top-level folder, called **WORKSPACES**, contains subfolders with configuration files for your deployment settings.
+The following sample folder hierarchy shows how to structure your configuration files along with the automation framework files.
| Folder name | Contents | Description |
| -- | -- | -- |
The following sample folder hierarchy shows how to structure your configuration
:::image type="content" source="./media/automation-plan-deployment/folder-structure.png" alt-text="Screenshot of example folder structure, showing separate folders for SAP HANA and multiple workload environments.":::
+> [!IMPORTANT]
+> Your parameter file's name becomes the name of the Terraform state file. Make sure to use a unique parameter file name for this reason.
+
+### Code repository
+
+This repository contains the Terraform automation templates and the Ansible playbooks as well as the deployment pipelines and scripts. For most use cases, consider this repository as read-only and don't modify it.
+
+You can create this repository by cloning the [SAP on Azure Deployment Automation Framework repository](https://github.com/Azure/sap-automation/) into your source control repository.
+
+> [!IMPORTANT]
+> This repository should be named 'sap-automation'.
+
+### Sample repository
+
+This repository contains the sample Bill of Materials files and the sample Terraform configuration files.
+
+You can create this repository by cloning the [SAP on Azure Deployment Automation Framework samples repository](https://github.com/Azure/sap-automation-samples/) into your source control repository.
+
+> [!IMPORTANT]
+> This repository should be named 'samples'.
++
## Supported deployment scenarios

The automation framework supports [deployment into both new and existing scenarios](automation-new-vs-existing.md).
virtual-machines Automation Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-supportability.md
description: Supported platforms, topologies, and capabilities for the SAP on Az
Previously updated : 11/17/2021 Last updated : 1/6/2023
The [SAP on Azure Deployment Automation Framework](automation-deployment-framewo
The deployer virtual machine of the control plane must be deployed on Linux as the Ansible controller only works on Linux.
-### SAP Application
+### SAP Infrastructure
The automation framework supports deployment of the SAP on Azure infrastructure both on Linux or Windows virtual machines on x86-64 or x64 hardware.
The following operating systems and distributions are supported by the framework
- Red Hat Linux 64bit for the x86-64 platform (7.x and 8.x)
- Oracle Linux 64bit for the x86-64 platform
-The following distributions have been tested with the framework (Red Hat 7.9, Red Hat 8.2, SUSE 12 SP5, and SUSE 15 SP2)
+The following distributions have been tested with the framework:
+- Red Hat 7.9
+- Red Hat 8.2
+- Red Hat 8.4
+- Red Hat 8.6
+- SUSE 12 SP5
+- SUSE 15 SP2
+- SUSE 15 SP3
+- Oracle Linux 8.2
+- Oracle Linux 8.4
+- Oracle Linux 8.6
+- Windows Server 2016
+- Windows Server 2019
+- Windows Server 2022
## Supported topologies

By default, the automation framework deploys with database and application tiers. The application tier is split into three more tiers: application, central services, and web dispatchers.
The automation framework uses or can use the following Azure services, features,
At this time the automation framework **doesn't support** the following Azure services, features, or capabilities:
+## Supported SAP architectures
+
+The automation framework can be used to deploy the following SAP architectures:
+
+- Standalone
+- Distributed
+- Distributed (Highly Available)
+
## Next steps
virtual-machines Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/rise-integration.md
SAP managed workload is preferably deployed in the same [Azure region](https://a
This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet.
:::image-end:::
-Since SAP RISE/ECS runs in SAPΓÇÖs Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](../../../virtual-network/create-peering-different-subscriptions.md). This can be accomplished by setting up the peering with the SAP provided networkΓÇÖs Azure resource ID and have SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation and follow process documented at [Create a VNet peering - different subscriptions](../../../virtual-network/create-peering-different-subscriptions.md#cli). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration and architecture to enable this process to be completed swiftly.
+Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](../../../virtual-network/create-peering-different-subscriptions.md). This can be accomplished by setting up the peering with the SAP provided network's Azure resource ID and having SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation and follow the process documented at [Create a VNet peering - different subscriptions](../../../virtual-network/create-peering-different-subscriptions.md). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration and architecture to enable this process to be completed swiftly.
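As a hedged sketch (resource names and the remote resource ID are placeholders; the exact workflow comes from your SAP representative), the peering on your side could be created like this once SAP has shared its virtual network's resource ID:

```azurepowershell-interactive
# Peer the customer hub virtual network with the SAP RISE/ECS virtual network by resource ID.
$hubVnet = Get-AzVirtualNetwork -Name "myHubVNet" -ResourceGroupName "myHubRG"
Add-AzVirtualNetworkPeering -Name "HubToSapRise" -VirtualNetwork $hubVnet `
    -RemoteVirtualNetworkId "/subscriptions/<SAP-subscription-id>/resourceGroups/<SAP-resource-group>/providers/Microsoft.Network/virtualNetworks/<SAP-vnet-name>"
```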
### Connectivity during migration to ECS/RISE
virtual-machines Sap Get Started Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-get-started-classic.md
- Title: Using SAP on Linux virtual machines | Microsoft Docs
-description: Learn about using SAP on Linux virtual machines (VMs) in Microsoft Azure
---
-tags: azure-service-management
-keywords: ''
--- Previously updated : 10/04/2016---
-# Using SAP on Linux virtual machines in Azure
-Cloud Computing is a widely used term which is gaining more and more importance within the IT industry, from small companies up to large and multinational corporations. Microsoft Azure is the Cloud Services Platform from Microsoft which offers a wide spectrum of new possibilities. Now customers are able to rapidly provision and de-provision applications as Cloud-Services, so they are not limited to technical or budgeting restrictions. Instead of investing time and budget into hardware infrastructure, companies can focus on the application, business processes and its benefits for customers and users.
-
-With Microsoft Azure virtual machines, Microsoft offers a comprehensive Infrastructure as a Service (IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines (IaaS). The whitepapers below describe how to plan and implement SAP NetWeaver based applications on Windows virtual machines in Azure. You can also implement SAP NetWeaver based applications on [Windows virtual machines](./get-started.md?toc=/azure/virtual-machines/windows/classic/toc.json).
--
-## SAP NetWeaver on Azure SUSE Linux Virtual Machines
-Title: Testing SAP NetWeaver on Microsoft Azure SUSE Linux VMs
-
-Summary: There is no official SAP support for running SAP NetWeaver on Azure Linux VMs at this point in time. Nevertheless customers
-might want to do some testing or might consider to run SAP demo or training systems on Azure Linux VMs as long as there is no need for contacting SAP support.
-This article should help setting up Azure SUSE Linux VMs for running SAP and gives some basic hints in order to avoid common potential pitfalls.
-
-Updated: December 2015
-
-[This article can be found here](./sap-deployment-checklist.md?toc=/azure/virtual-machines/linux/toc.json)
virtual-network-manager Concept Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-use-cases.md
Common uses include:
- Enforce security protection to prevent users from opening high-risk ports.
- Create default rules for everyone in the company/organization so that administrators can prevent security threats caused by NSG misconfiguration or forgetting to put necessary NSGs.
- Create security boundaries using security admin rules as an administrator and let the owners of the virtual networks configure their NSGs so the NSGs won't break company policies.
-- Force-allow the traffic from and to critical services so that other users can't accidentally block the necessary traffic, such as program updates.
+- Force-allow the traffic from and to critical services so that other users can't accidentally block the necessary traffic, such as monitoring services and program updates.
For a walk-through of use cases, see [Securing Your Virtual Networks with Azure Virtual Network Manager - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-networking-blog/securing-your-virtual-networks-with-azure-virtual-network/ba-p/3353366).
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
List of supported operators:
## Basic editor
-Assume you have the following virtual networks in your subscription. Each virtual network has either a *Production* or *Test* tag associated. You only want to select virtual networks with the Production tag and contain **VNet-A** in the name.
+Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *Production* or *Test*.
+* myVNet01-EastUS - *Production*
+* myVNet01-WestUS - *Production*
+* myVNet02-WestUS - *Test*
+* myVNet03-WestUS - *Test*
-* VNet-A-EastUS - *Production*
-* VNet-A-WestUS - *Production*
-* VNet-B-WestUS - *Test*
-* VNet-C-WestUS - *Test*
-* VNetA - *Production*
-* VNetB - *Test*
-
-To begin using the basic editor to create your conditional statement, you need to create a new network group.
-
-1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under *Settings*. Then select **+ Create** to create a new network group.
+You only want to select virtual networks that contain **WestUS** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group.
+1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under **Settings**. Then select **+ Create** to create a new network group.
1. Enter a **Name** and an optional **Description** for the network group, and select **Add**.
1. Select the network group from the list and select **Create Azure Policy**.
1. Enter a **Policy name** and leave the **Scope** selections unless changes are needed.
-1. Under **Criteria**, select **Tags** from the drop-down under *Parameter* and then select **Exist** from the drop-down under *Operator*.
-
-1. Enter **Prod** under *Condition*, then select **Save**.
-1. After a few minutes, select your network group and select **Group Members** under *Settings*. You should only see VNet-A-EastUS, VNet-A-WestUS, and VNetA show up in the list.
+1. Under **Criteria**, select **Name** from the drop-down under **Parameter** and then select **Contains** from the drop-down under *Operator*.
+1. Enter **WestUS** under **Condition**, then select **Save**.
+1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list.
-> [!NOTE]
-> The **basic editor** is only available during the creation of an Azure Policy.
+> [!IMPORTANT]
+> The **basic editor** is only available during the creation of an Azure Policy. Once a policy is created, all edits will be done using JSON in the **Policies** section of virtual network manager or via Azure Policy.
+>
+> When using the basic editor, your condition options will be limited through the portal experience. For complex conditions like creating a network group for VNets based on a customer-defined tag, you can use the advanced editor. Learn more about [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
## Advanced editor
-The advanced editor can be used to select virtual network during the creation of a network group or when updating an existing network group. Based in [JSON](../governance/policy/concepts/assignment-structure.md), the advanced editor is useful for creating and updating complex Azure Policy conditional statements by experienced users.
+The advanced editor can be used to select virtual networks during the creation of a network group or when updating an existing network group. Based in [JSON](../governance/policy/concepts/assignment-structure.md), the advanced editor is useful for creating and updating complex Azure Policy conditional statements by experienced users.
+
+### Create a new policy with advanced editor
+
+1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under **Settings**. Then select **+ Create** to create a new network group.
+1. Enter a **Name** and an optional **Description** for the network group, and select **Add**.
+1. Select the network group from the list and select **Create Azure Policy**.
+1. Enter a **Policy name** and leave the **Scope** selections unless changes are needed.
+1. Under **Criteria**, select **Advanced (JSON) editor** to open the editor.
+1. Enter the following JSON code into the text box and select **Save**:
+
+ ```json
+ {
+ "allOf": [
+ {
+ "field": "Name",
+ "contains": "myVNet01"
+ }
+ ]
+ }
+ ```
+1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS and myVNet01-EastUS.
-1. Select the network group created in the previous section. Then select the **Conditional statements** tab.
+### Edit an existing policy
-1. You'll see the conditional statements for the network group in the advance editor view as followed:
+1. Select the network group created in the previous section. Then select the **Policies** tab.
+1. Select the policy created in the previous section.
+1. You'll see the conditional statements for the network group in the advanced editor view as follows:
```json
- {
- "allOf": [
- {
- "field": "tags['Environment']",
- "exists": true
- },
- {
- "field": "Name",
- "contains": "VNet-A"
- }
- ]
- }
+ [
+ {
+ "allOf": [
+ {
+ "field": "Name",
+ "contains": "myVNet01"
+ }
+ ]
+ }
+ ]
```
- The `"allOf"` parameter contains both the conditional statements that are separated by the **AND** logical operator.
-
-1. To add another conditional statement for a *Name* field *not containing* **WestUS**, enter the following into the advanced editor:
+1. To add another conditional statement for a **Name** field *not containing* **WestUS**, enter the following into the advanced editor:
```json
{
  "allOf": [
- {
- "field": "tags['Environment']",
- "exists": true
- },
+ { "field": "Name",
- "contains": "VNet-A"
+ "contains": "VNet01"
}, { "field": "Name",
The advanced editor can be used to select virtual network during the creation of
}
```
-1. Then select **Evaluate**. You should only see VNet-A-EastUS virtual network in the list.
-
-1. Select **Review + save** and then select **Save** once validation has passed.
-
-See [Parameter and operators](#parameters) for the complete list of parameters and operators you can use with the advanced editor. See below for more examples:
+ The `"allOf"` parameter contains both the conditional statements that are separated by the **AND** logical operator.
+1. Select **Save**.
+1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-EastUS.
+See [Parameter and operators](#parameters) for the complete list of parameters and operators you can use with the advanced editor.
## More examples
+Here are more examples of conditional statements in the advanced editor.
+
### Example 1: OR operator only

This example uses the **OR** logical operator to separate two conditional statements.
This example uses the **OR** logical operator to separate two conditional statem
"anyOf": [ { "field": "Name",
- "contains": "VNet-A"
+ "contains": "myVNet01"
},
{
  "field": "Name",
- "contains": "VNetA"
+ "contains": "myVNet02"
}
  ]
}
The `"anyOf"` parameter contains both the conditional statements that are separa
"anyOf": [ { "field": "Name",
- "contains": "VNet-A"
+ "contains": "myVNet01"
},
{
  "field": "Name",
- "contains": "VNetA"
+ "contains": "myVNet02"
}
  ]
},
The `"anyOf"` parameter contains both the conditional statements that are separa
]
}
```

Both `"allOf"` and `"anyOf"` are used in the code. Since the **AND** operator is last in the list, it is on the outer part of the code containing the two conditional statements with the **OR** operator.
-> [!NOTE]
-> Conditionals should filter on resource type Microsoft.Network/virtualNetwork to improve efficiency.
-> This condition is prepended for you on any conditionals specified through the portal.
+### Example 3: Using custom tag values with advanced editor
+
+In this example, a conditional statement is created that finds virtual networks where the name includes **myVNet** AND the **environment** tag equals **production**.
+
+* Advanced editor:
+
+ ```json
+
+ {
+ "allOf": [
+ {
+ "field": "Name",
+ "contains": "myVNet"
+ },
+ {
+ "field": "tags['environment']",
+ "equals": "production"
+ }
+ ]
+ }
+
+ ```
+
+ > [!NOTE]
+ > Conditionals should filter on resource type Microsoft.Network/virtualNetwork to improve efficiency.
+ > This condition is prepended for you on any conditionals specified through the portal.
## Next steps

- Learn about [Network groups](concept-network-groups.md).
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
Title: Create a VNet peering - different subscriptions-
+ Title: Create a virtual network peering between different subscriptions
description: Learn how to create a virtual network peering between virtual networks created through Resource Manager that exist in different Azure subscriptions in the same or different Azure Active Directory tenant.
Previously updated : 04/09/2019 Last updated : 12/30/2022

# Create a virtual network peering - Resource Manager, different subscriptions and Azure Active Directory tenants

In this tutorial, you learn to create a virtual network peering between virtual networks created through Resource Manager. The virtual networks exist in different subscriptions that may belong to different Azure Active Directory (Azure AD) tenants. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
-The steps to create a virtual network peering are different, depending on whether the virtual networks are in the same, or different, subscriptions, and which [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json) the virtual networks are created through. Learn how to create a virtual network peering in other scenarios by selecting the scenario from the following table:
+Depending on whether the virtual networks are in the same or different subscriptions, the steps to create a virtual network peering are different. The steps to peer networks created with the classic deployment model are also different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+Learn how to create a virtual network peering in other scenarios by selecting the scenario from the following table:
|Azure deployment model | Azure subscription |
| -- | -- |
The steps to create a virtual network peering are different, depending on whethe
|[One Resource Manager, one classic](create-peering-different-deployment-models.md) |Same|
|[One Resource Manager, one classic](create-peering-different-deployment-models-subscriptions.md) |Different|
-A virtual network peering cannot be created between two virtual networks deployed through the classic deployment model. If you need to connect virtual networks that were both created through the classic deployment model, you can use an Azure [VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to connect the virtual networks.
+A virtual network peering can't be created between two virtual networks deployed through the classic deployment model. If you need to connect virtual networks that were both created through the classic deployment model, you can use an Azure [VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to connect the virtual networks.
This tutorial peers virtual networks in the same region. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region). It's recommended that you familiarize yourself with the [peering requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints) before peering virtual networks.
-You can use the [Azure portal](#portal), the [Azure CLI](#cli), [Azure PowerShell](#powershell), or an [Azure Resource Manager template](#template) to create a virtual network peering. Select any of the previous tool links to go directly to the steps for creating a virtual network peering using your tool of choice.
-
-If the virtual networks are in different subscriptions, and the subscriptions are associated with different Azure Active Directory tenants, complete the following steps before continuing:
-1. Add the user from each Active Directory tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory) in the opposite Azure Active Directory tenant.
-1. Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
-
-## <a name="portal"></a>Create peering - Azure portal
-
-The following steps use different accounts for each subscription. If you're using an account that has permissions to both subscriptions, you can use the same account for all steps, skip the steps for logging out of the portal, and skip the steps for assigning another user permissions to the virtual networks.
-
-1. Log in to the [Azure portal](https://portal.azure.com) as *UserA*. The account you log in with must have the necessary permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
-2. Select **+ Create a resource**, select **Networking**, and then select **Virtual network**.
-3. Select or enter the following example values for the following settings, then select **Create**:
- - **Name**: *myVnetA*
- - **Address space**: *10.0.0.0/16*
- - **Subnet name**: *default*
- - **Subnet address range**: *10.0.0.0/24*
- - **Subscription**: Select subscription A.
- - **Resource group**: Select **Create new** and enter *myResourceGroupA*
- - **Location**: *East US*
-4. In the **Search resources** box at the top of the portal, type *myVnetA*. Select **myVnetA** when it appears in the search results.
-5. Select **Access control (IAM)** from the vertical list of options on the left side.
-6. Assign the **Network contributor** role to *UserB* using the procedure decribed in [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-7. Under **myVnetA - Access control (IAM)**, select **Properties** from the vertical list of options on the left side. Copy the **RESOURCE ID**, which is used in a later step. The resource ID is similar to the following example: `/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVnetA`.
-8. Log out of the portal as UserA, then log in as UserB.
-9. Complete steps 2-3, entering or selecting the following values in step 3:
-
- - **Name**: *myVnetB*
- - **Address space**: *10.1.0.0/16*
- - **Subnet name**: *default*
- - **Subnet address range**: *10.1.0.0/24*
- - **Subscription**: Select subscription B.
- - **Resource group**: Select **Create new** and enter *myResourceGroupB*
- - **Location**: *East US*
-
-10. In the **Search resources** box at the top of the portal, type *myVnetB*. Select **myVnetB** when it appears in the search results.
-11. Under **myVnetB**, select **Properties** from the vertical list of options on the left side. Copy the **RESOURCE ID**, which is used in a later step. The resource ID is similar to the following example: `/subscriptions/<Subscription ID>/resourceGroups/myResourceGroupB/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnetB`.
-12. Select **Access control (IAM)** under **myVnetB**, and then assign the **Network contributor** role to *UserA* using the procedure decribed in [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-13. Log out of the portal as UserB and log in as UserA.
-14. In the **Search resources** box at the top of the portal, type *myVnetA*. Select **myVnetA** when it appears in the search results.
-15. Select **myVnetA**.
-16. Under **SETTINGS**, select **Peerings**.
-17. Under **myVnetA - Peerings**, select **+ Add**
-18. Under **Add peering**, enter, or select, the following options, then select **OK**:
- - **Name**: *myVnetAToMyVnetB*
- - **Virtual network deployment model**: Select **Resource Manager**.
- - **I know my resource ID**: Check this box.
- - **Resource ID**: Enter the resource ID from step 11.
- - **Allow virtual network access:** Ensure that **Enabled** is selected.
- No other settings are used in this tutorial. To learn about all peering settings, read [Manage virtual network peerings](virtual-network-manage-peering.md#create-a-peering).
-19. The peering you created appears a short wait after selecting **OK** in the previous step. **Initiated** is listed in the **PEERING STATUS** column for the **myVnetAToMyVnetB** peering you created. You've peered myVnetA to myVnetB, but now you must peer myVnetB to myVnetA. The peering must be created in both directions to enable resources in the virtual networks to communicate with each other.
-20. Log out of the portal as UserA and log in as UserB.
-21. Complete steps 14-18 again for myVnetB. In step 18, name the peering *myVnetBToMyVnetA*, select *myVnetA* for **Virtual network**, and enter the ID from step 7 in the **Resource ID** box.
-22. A few seconds after selecting **OK** to create the peering for myVnetB, the **myVnetBToMyVnetA** peering you just created is listed with **Connected** in the **PEERING STATUS** column.
-23. Log out of the portal as UserB and log in as UserA.
-24. Complete steps 14-16 again. The **PEERING STATUS** for the **myVnetAToVNetB** peering is now also **Connected**. The peering is successfully established after you see **Connected** in the **PEERING STATUS** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks are not able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server. Learn how to set up [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-25. **Optional**: Though creating virtual machines is not covered in this tutorial, you can create a virtual machine in each virtual network and connect from one virtual machine to the other, to validate connectivity.
-26. **Optional**: To delete the resources that you create in this tutorial, complete the steps in the [Delete resources](#delete-portal) section of this article.
-
-## <a name="cli"></a>Create peering - Azure CLI
-
-This tutorial uses different accounts for each subscription. If you're using an account that has permissions to both subscriptions, you can use the same account for all steps, skip the steps for logging out of Azure, and remove the lines of script that create user role assignments. Replace UserA@azure.com and UserB@azure.com in all of the following scripts with the usernames you're using for UserA and UserB.
-
-The following scripts:
--- Requires the Azure CLI version 2.0.4 or later. To find the version, run `az --version`. If you need to upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli?toc=%2fazure%2fvirtual-network%2ftoc.json).-- Works in a Bash shell. For options on running Azure CLI scripts on Windows client, see [Install the Azure CLI on Windows](/cli/azure/install-azure-cli-windows).-
-Instead of installing the CLI and its dependencies, you can use the Azure Cloud Shell. The Azure Cloud Shell is a free Bash shell that you can run directly within the Azure portal. It has the Azure CLI preinstalled and configured to use with your account. Select the **Try it** button in the script that follows, which invokes a Cloud Shell that you can log in to your Azure account with.
-
-1. Open a CLI session and log in to Azure as UserA using the `azure login` command. The account you log in with must have the necessary permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
-2. Copy the following script to a text editor on your PC, replace `<SubscriptionA-Id>` with the ID of SubscriptionA, then copy the modified script, paste it in your CLI session, and press `Enter`. If you don't know your subscription Id, enter the `az account show` command. The value for **id** in the output is your subscription Id.
-
- ```azurecli-interactive
- # Create a resource group.
- az group create \
- --name myResourceGroupA \
- --location eastus
-
- # Create virtual network A.
- az network vnet create \
- --name myVnetA \
- --resource-group myResourceGroupA \
- --location eastus \
- --address-prefix 10.0.0.0/16
-
- # Assign UserB permissions to virtual network A.
- az role assignment create \
- --assignee UserB@azure.com \
+## Prerequisites
+
+- An Azure account (or accounts) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
+
+ - If the virtual networks are in different subscriptions and Active Directory tenants, add the user from each tenant as a guest in the opposite tenant. For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
+
+ - Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
++
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- Azure PowerShell installed locally or Azure Cloud Shell.
+
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+In the following steps, you'll learn how to peer virtual networks in different subscriptions and Azure Active Directory tenants.
+
+You can use the same account that has permissions in both subscriptions, or you can use separate accounts for each subscription to set up the peering. An account with permissions in both subscriptions can complete all of the steps without signing out of the portal or assigning permissions to another user.
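+
+If you aren't sure whether an account already has the required permissions, you can list its role assignments in a subscription before you start. The following Azure CLI snippet is a minimal sketch; the sign-in name and subscription name are example values, so replace them with your own:
+
+```azurecli-interactive
+# List the role assignments for a user in SubscriptionA (example values).
+az role assignment list \
+    --assignee UserA@contoso.com \
+    --subscription SubscriptionA \
+    --output table
+```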
+
+The following resources and account examples are used in the steps in this article:
+
+| User account | Resource group | Subscription | Virtual network |
+| | -- | | |
+| **UserA** | **myResourceGroupA** | **SubscriptionA** | **myVNetA** |
+| **UserB** | **myResourceGroupB** | **SubscriptionB** | **myVNetB** |
+
+## Create virtual network - myVNetA
+
+> [!NOTE]
+> If you are using a single account to complete the steps, you can skip the steps for logging out of the portal and assigning another user permissions to the virtual networks.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Sign-in to the [Azure portal](https://portal.azure.com) as **UserA**.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your **SubscriptionA**. |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroupA** in **Name**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNetA**. |
+ | Region | Select a region. |
+
+5. Select **Next: IP Addresses**.
+
+6. In **IPv4 address space**, enter **10.1.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **mySubnet**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **Review + create**.
+
+11. Select **Create**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+### Sign in to SubscriptionA
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionA
+```
+
+### Create a resource group - myResourceGroupA
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+$rsg = @{
+ Name = 'myResourceGroupA'
+ Location = 'westus3'
+}
+New-AzResourceGroup @rsg
+```
+
+### Create the virtual network
+
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetA** in the **West US 3** location:
+
+```azurepowershell-interactive
+$vnet = @{
+ Name = 'myVNetA'
+ ResourceGroupName = 'myResourceGroupA'
+ Location = 'westus3'
+ AddressPrefix = '10.1.0.0/16'
+}
+$virtualNetwork = New-AzVirtualNetwork @vnet
+```
+### Add a subnet
+
+Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
+
+```azurepowershell-interactive
+$subnet = @{
+ Name = 'default'
+ VirtualNetwork = $virtualNetwork
+ AddressPrefix = '10.1.0.0/24'
+}
+$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet
+```
+
+### Associate the subnet to the virtual network
+
+You can write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork). This command creates the subnet:
+
+```azurepowershell-interactive
+$virtualNetwork | Set-AzVirtualNetwork
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+### Sign in to SubscriptionA
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+
+```azurecli-interactive
+az login
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set).
+
+```azurecli-interactive
+az account set --subscription "SubscriptionA"
+```
+
+### Create a resource group - myResourceGroupA
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [az group create](/cli/azure/group#az-group-create):
+
+```azurecli-interactive
+az group create \
+ --name myResourceGroupA \
+ --location westus3
+```
+
+### Create the virtual network
+
+Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetA** in the **West US 3** location.
+
+```azurecli-interactive
+az network vnet create \
+ --resource-group myResourceGroupA\
+ --location westus3 \
+ --name myVNetA \
+ --address-prefixes 10.1.0.0/16 \
+ --subnet-name default \
+ --subnet-prefixes 10.1.0.0/24
+```
+++
+## Assign permissions for UserB
+
+A user account in the other subscription that you want to peer with must be added to the network you previously created. If you're using a single account for both subscriptions, you can skip this section.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Remain signed in to the portal as **UserA**.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetA**.
+
+4. Select **Access control (IAM)**.
+
+5. Select **+ Add** -> **Add role assignment**.
+
+6. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
+
+7. Select **Next**.
+
+8. In the **Members** tab, select **+ Select members**.
+
+9. In **Select members** in the search box, enter **UserB**.
+
+10. Select **Select**.
+
+11. Select **Review + assign**.
+
+12. Select **Review + assign**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+
+Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserB**.
+
+**UserB** is used in this example for the user account. Replace this value with the display name of the user from **SubscriptionB** to whom you want to assign permissions on **myVNetA**. You can skip this step if you're using the same account for both subscriptions.
+
+```azurepowershell-interactive
+$id = @{
+ Name = 'myVNetA'
+ ResourceGroupName = 'myResourceGroupA'
+}
+$vnet = Get-AzVirtualNetwork @id
+
+$obj = Get-AzADUser -DisplayName 'UserB'
+
+$role = @{
+ ObjectId = $obj.id
+ RoleDefinitionName = 'Network Contributor'
+ Scope = $vnet.id
+}
+New-AzRoleAssignment @role
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
+
+Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserB**.
+
+**UserB** is used in this example for the user account. Replace this value with the display name of the user from **SubscriptionB** to whom you want to assign permissions on **myVNetA**. You can skip this step if you're using the same account for both subscriptions.
+
+```azurecli-interactive
+az ad user list --display-name UserB
+```
+```bash
+[
+ {
+ "businessPhones": [],
+ "displayName": "UserB",
+ "givenName": null,
+ "id": "16d51293-ec4b-43b1-b54b-3422c108321a",
+ "jobTitle": null,
+ "mail": "userB@fabrikam.com",
+ "mobilePhone": null,
+ "officeLocation": null,
+ "preferredLanguage": null,
+ "surname": null,
+ "userPrincipalName": "userb_fabrikam.com#EXT#@contoso.onmicrosoft.com"
+ }
+]
+```
+
+Make note of the object ID of **UserB** in the **id** field. In this example, it's **16d51293-ec4b-43b1-b54b-3422c108321a**.
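+
+If you'd rather not copy the GUID by hand, the following is a minimal sketch that captures the object ID with a JMESPath query; it assumes the display name matches exactly one user, and `userBId` is only an example variable name. You can then pass the variable to `--assignee` instead of the literal GUID.
+
+```azurecli-interactive
+# Capture the object ID of UserB into a variable (assumes a single match).
+userBId=$(az ad user list \
+    --display-name UserB \
+    --query "[0].id" \
+    --output tsv)
+
+echo $userBId
+```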
++
+```azurecli-interactive
+vnetid=$(az network vnet show \
+ --name myVNetA \
+ --resource-group myResourceGroupA \
+ --query id \
+ --output tsv)
+
+az role assignment create \
+ --assignee 16d51293-ec4b-43b1-b54b-3422c108321a \
--role "Network Contributor" \
- --scope /subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/VirtualNetworks/myVnetA
- ```
-
-3. Log out of Azure as UserA using the `az logout` command, then log in to Azure as UserB. The account you log in with must have the necessary permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
-4. Create myVnetB. Copy the script contents in step 2 to a text editor on your PC. Replace `<SubscriptionA-Id>` with the ID of SubscriptionB. Change 10.0.0.0/16 to 10.1.0.0/16, change all As to B, and all Bs to A. Copy the modified script, paste it in to your CLI session, and press `Enter`.
-5. Log out of Azure as UserB and log in to Azure as UserA.
-6. Create a virtual network peering from myVnetA to myVnetB. Copy the following script contents to a text editor on your PC. Replace `<SubscriptionB-Id>` with the ID of SubscriptionB. To execute the script, copy the modified script, paste it into your CLI session, and press Enter.
-
- ```azurecli-interactive
- # Get the id for myVnetA.
- vnetAId=$(az network vnet show \
- --resource-group myResourceGroupA \
- --name myVnetA \
- --query id --out tsv)
-
- # Peer myVNetA to myVNetB.
- az network vnet peering create \
- --name myVnetAToMyVnetB \
- --resource-group myResourceGroupA \
- --vnet-name myVnetA \
- --remote-vnet /subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/VirtualNetworks/myVnetB \
- --allow-vnet-access
- ```
-
-7. View the peering state of myVnetA.
-
- ```azurecli-interactive
- az network vnet peering list \
- --resource-group myResourceGroupA \
- --vnet-name myVnetA \
- --output table
- ```
-
- The state is **Initiated**. It changes to **Connected** once you create the peering to myVnetA from myVnetB.
-
-8. Log out UserA from Azure and log in to Azure as UserB.
-9. Create the peering from myVnetB to myVnetA. Copy the script contents in step 6 to a text editor on your PC. Replace `<SubscriptionB-Id>` with the ID for SubscriptionA and change all As to B and all Bs to A. Once you've made the changes, copy the modified script, paste it into your CLI session, and press `Enter`.
-10. View the peering state of myVnetB. Copy the script contents in step 7 to a text editor on your PC. Change A to B for the resource group and virtual network names, copy the script, paste the modified script in to your CLI session, and then press `Enter`. The peering state is **Connected**. The peering state of myVnetA changes to **Connected** after you've created the peering from myVnetB to myVnetA. You can log UserA back in to Azure and complete step 7 again to verify the peering state of myVnetA.
-
- > [!NOTE]
- > The peering is not established until the peering state is **Connected** for both virtual networks.
-
-11. **Optional**: Though creating virtual machines is not covered in this tutorial, you can create a virtual machine in each virtual network and connect from one virtual machine to the other, to validate connectivity.
-12. **Optional**: To delete the resources that you create in this tutorial, complete the steps in [Delete resources](#delete-cli) in this article.
-
-Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks are not able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server. Learn how to set up [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-
-## <a name="powershell"></a>Create peering - PowerShell
--
-This tutorial uses different accounts for each subscription. If you're using an account that has permissions to both subscriptions, you can use the same account for all steps, skip the steps for logging out of Azure, and remove the lines of script that create user role assignments. Replace UserA@azure.com and UserB@azure.com in all of the following scripts with the usernames you're using for UserA and UserB.
-
-1. Confirm that you have Azure PowerShell version 1.0.0 or higher. You can do this by running the `Get-Module -Name Az` We recommend installing the latest version of the PowerShell [Az module](/powershell/azure/install-az-ps). If you're new to Azure PowerShell, see [Azure PowerShell overview](/powershell/azure/?toc=%2fazure%2fvirtual-network%2ftoc.json).
-2. Start a PowerShell session.
-3. In PowerShell, log in to Azure as UserA by entering the `Connect-AzAccount` command. The account you log in with must have the necessary permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
-4. Create a resource group and virtual network A. Copy the following script to a text editor on your PC. Replace `<SubscriptionA-Id>` with the ID of SubscriptionA. If you don't know your subscription Id, enter the `Get-AzSubscription` command to view it. The value for **Id** in the returned output is your subscription ID. To execute the script, copy the modified script, paste it in to PowerShell, and then press `Enter`.
-
- ```powershell
- # Create a resource group.
- New-AzResourceGroup `
- -Name MyResourceGroupA `
- -Location eastus
-
- # Create virtual network A.
- $vNetA = New-AzVirtualNetwork `
- -ResourceGroupName MyResourceGroupA `
- -Name 'myVnetA' `
- -AddressPrefix '10.0.0.0/16' `
- -Location eastus
-
- # Assign UserB permissions to myVnetA.
- New-AzRoleAssignment `
- -SignInName UserB@azure.com `
- -RoleDefinitionName "Network Contributor" `
- -Scope /subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/VirtualNetworks/myVnetA
- ```
-
-5. Log out UserA from Azure and log in UserB. The account you log in with must have the necessary permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
-6. Copy the script contents in step 4 to a text editor on your PC. Replace `<SubscriptionA-Id>` with the ID for subscription B. Change 10.0.0.0/16 to 10.1.0.0/16. Change all As to B and all Bs to A. To execute the script, copy the modified script, paste into PowerShell, and then press `Enter`.
-7. Log out UserB from Azure and log in UserA.
-8. Create the peering from myVnetA to myVnetB. Copy the following script to a text editor on your PC. Replace `<SubscriptionB-Id>` with the ID of subscription B. To execute the script, copy the modified script, paste in to PowerShell, and then press `Enter`.
-
- ```powershell
- # Peer myVnetA to myVnetB.
- $vNetA=Get-AzVirtualNetwork -Name myVnetA -ResourceGroupName myResourceGroupA
- Add-AzVirtualNetworkPeering `
- -Name 'myVnetAToMyVnetB' `
- -VirtualNetwork $vNetA `
- -RemoteVirtualNetworkId "/subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB"
- ```
-
-9. View the peering state of myVnetA.
-
- ```powershell
- Get-AzVirtualNetworkPeering `
- -ResourceGroupName myResourceGroupA `
- -VirtualNetworkName myVnetA `
- | Format-Table VirtualNetworkName, PeeringState
- ```
-
- The state is **Initiated**. It changes to **Connected** once you set up the peering to myVnetA from myVnetB.
-
-10. Log out UserA from Azure and log in UserB.
-11. Create the peering from myVnetB to myVnetA. Copy the script contents in step 8 to a text editor on your PC. Replace `<SubscriptionB-Id>` with the ID of subscription A and change all As to B and all Bs to A. To execute the script, copy the modified script, paste it in to PowerShell, and then press `Enter`.
-12. View the peering state of myVnetB. Copy the script contents in step 9 to a text editor on your PC. Change A to B for the resource group and virtual network names. To execute the script, paste the modified script into PowerShell, and then press `Enter`. The state is **Connected**. The peering state of **myVnetA** changes to **Connected** after you've created the peering from **myVnetB** to **myVnetA**. You can log UserA back in to Azure and complete step 9 again to verify the peering state of myVnetA.
-
- > [!NOTE]
- > The peering is not established until the peering state is **Connected** for both virtual networks.
-
- Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks are not able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server. Learn how to set up [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-
-13. **Optional**: Though creating virtual machines is not covered in this tutorial, you can create a virtual machine in each virtual network and connect from one virtual machine to the other, to validate connectivity.
-14. **Optional**: To delete the resources that you create in this tutorial, complete the steps in [Delete resources](#delete-powershell) in this article.
-
-## <a name="template"></a>Create peering - Resource Manager template
-
-1. To create a virtual network and assign the appropriate [permissions](virtual-network-manage-peering.md#permissions), complete the steps in the [Portal](#portal), [Azure CLI](#cli), or [PowerShell](#powershell) sections of this article.
-2. Save the text that follows to a file on your local computer. Replace `<subscription ID>` with UserA's subscription ID. You might save the file as vnetpeeringA.json, for example.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- },
- "variables": {
- },
- "resources": [
- {
- "apiVersion": "2016-06-01",
- "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
- "name": "myVnetA/myVnetAToMyVnetB",
- "location": "[resourceGroup().location]",
- "properties": {
- "allowVirtualNetworkAccess": true,
- "allowForwardedTraffic": false,
- "allowGatewayTransit": false,
- "useRemoteGateways": false,
- "remoteVirtualNetwork": {
- "id": "/subscriptions/<subscription ID>/resourceGroups/PeeringTest/providers/Microsoft.Network/virtualNetworks/myVnetB"
- }
- }
- }
- ]
- }
- ```
-
-3. Log in to Azure as UserA and deploy the template using the [portal](../azure-resource-manager/templates/deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-resources-from-custom-template), [PowerShell](../azure-resource-manager/templates/deploy-powershell.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-local-template), or the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-local-template). Specify the file name you saved the example json text in step 2 to.
-4. Copy the example json from step 2 to a file on your computer and make changes to the lines that begin with:
- - **name**: Change *myVnetA/myVnetAToMyVnetB* to *myVnetB/myVnetBToMyVnetA*.
- - **id**: Replace `<subscription ID>` with UserB's subscription ID and change *myVnetB* to *myVnetA*.
-5. Complete step 3 again, logged in to Azure as UserB.
-6. **Optional**: Though creating virtual machines is not covered in this tutorial, you can create a virtual machine in each virtual network and connect from one virtual machine to the other, to validate connectivity.
-7. **Optional**: To delete the resources that you create in this tutorial, complete the steps in the [Delete resources](#delete) section of this article, using either the Azure portal, PowerShell, or the Azure CLI.
-
-## <a name="delete"></a>Delete resources
-When you've finished this tutorial, you might want to delete the resources you created in the tutorial, so you don't incur usage charges. Deleting a resource group also deletes all resources that are in the resource group.
-
-### <a name="delete-portal"></a>Azure portal
-
-1. Log in to the Azure portal as UserA.
-2. In the portal search box, enter **myResourceGroupA**. In the search results, select **myResourceGroupA**.
-3. Select **Delete**.
-4. To confirm the deletion, in the **TYPE THE RESOURCE GROUP NAME** box, enter **myResourceGroupA**, and then select **Delete**.
-5. Log out of the portal as UserA and log in as UserB.
-6. Complete steps 2-4 for myResourceGroupB.
-
-### <a name="delete-cli"></a>Azure CLI
-
-1. Log in to Azure as UserA and execute the following command:
-
- ```azurecli-interactive
- az group delete --name myResourceGroupA --yes
- ```
-
-2. Log out of Azure as UserA and log in as UserB.
-3. Execute the following command:
-
- ```azurecli-interactive
- az group delete --name myResourceGroupB --yes
- ```
-
-### <a name="delete-powershell"></a>PowerShell
-
-1. Log in to Azure as UserA and execute the following command:
-
- ```powershell
- Remove-AzResourceGroup -Name myResourceGroupA -force
- ```
-
-2. Log out of Azure as UserA and log in as UserB.
-3. Execute the following command:
-
- ```powershell
- Remove-AzResourceGroup -Name myResourceGroupB -force
- ```
+ --scope $vnetid
+```
-## Next steps
+Replace the example GUID in **`--assignee`** with the real object ID for **UserB**.
+++
+## Obtain resource ID of myVNetA
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Remain signed in to the portal as **UserA**.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetA**.
+
+4. In **Settings**, select **Properties**.
+
+5. Copy the information in the **Resource ID** field and save it for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVnetA`**.
+
+6. Sign out of the portal as **UserA**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**.
+
+```azurepowershell-interactive
+$id = @{
+ Name = 'myVNetA'
+ ResourceGroupName = 'myResourceGroupA'
+}
+$vnetA = Get-AzVirtualNetwork @id
+
+$vnetA.id
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**.
+
+```azurecli-interactive
+vnetidA=$(az network vnet show \
+ --name myVNetA \
+ --resource-group myResourceGroupA \
+ --query id \
+ --output tsv)
+
+echo $vnetidA
+```
+++
+## Create virtual network - myVNetB
+
+In this section, you'll sign in as **UserB** and create a virtual network for the peering connection to **myVNetA**.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Sign in to the portal as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your **SubscriptionB**. |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroupB** in **Name**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNetB**. |
+ | Region | Select a region. |
+
+5. Select **Next: IP Addresses**.
+
+6. In **IPv4 address space**, enter **10.2.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **mySubnet**. |
+ | Subnet address range | Enter **10.2.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **Review + create**.
+
+11. Select **Create**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+### Sign in to SubscriptionB
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionB
+```
+
+### Create a resource group - myResourceGroupB
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+$rsg = @{
+ Name = 'myResourceGroupB'
+ Location = 'westus3'
+}
+New-AzResourceGroup @rsg
+```
+
+### Create the virtual network
+
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetB** in the **West US 3** location:
+
+```azurepowershell-interactive
+$vnet = @{
+ Name = 'myVNetB'
+ ResourceGroupName = 'myResourceGroupB'
+ Location = 'westus3'
+ AddressPrefix = '10.2.0.0/16'
+}
+$virtualNetwork = New-AzVirtualNetwork @vnet
+```
+### Add a subnet
+
+Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
+
+```azurepowershell-interactive
+$subnet = @{
+ Name = 'default'
+ VirtualNetwork = $virtualNetwork
+ AddressPrefix = '10.2.0.0/24'
+}
+$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet
+```
+
+### Associate the subnet to the virtual network
+
+You can write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork). This command creates the subnet:
+
+```azurepowershell-interactive
+$virtualNetwork | Set-AzVirtualNetwork
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+### Sign in to SubscriptionB
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**.
+
+```azurecli-interactive
+az login
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set).
+
+```azurecli-interactive
+az account set --subscription "SubscriptionB"
+```
+
+### Create a resource group - myResourceGroupB
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [az group create](/cli/azure/group#az-group-create):
+
+```azurecli-interactive
+az group create \
+ --name myResourceGroupB \
+ --location westus3
+```
+
+### Create the virtual network
+
+Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetB** in the **West US 3** location.
+
+```azurecli-interactive
+az network vnet create \
+ --resource-group myResourceGroupB\
+ --location westus3 \
+ --name myVNetB \
+ --address-prefixes 10.2.0.0/16 \
+ --subnet-name default \
+ --subnet-prefixes 10.2.0.0/24
+```
+++
+## Assign permissions for UserA
+A user account in the other subscription that you want to peer with must be added to the network you previously created. If you're using a single account for both subscriptions, you can skip this section.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Remain signed in to the portal as **UserB**.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetB**.
+
+4. Select **Access control (IAM)**.
+
+5. Select **+ Add** -> **Add role assignment**.
+
+6. In **Add role assignment** in the **Role** tab, select **Network Contributor**.
+
+7. Select **Next**.
+
+8. In the **Members** tab, select **+ Select members**.
+
+9. In **Select members** in the search box, enter **UserA**.
+
+10. Select **Select**.
+
+11. Select **Review + assign**.
+
+12. Select **Review + assign**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetB**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+
+Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserA**.
+
+**UserA** is used in this example for the user account. Replace this value with the display name of the user from **SubscriptionA** to whom you want to assign permissions on **myVNetB**. You can skip this step if you're using the same account for both subscriptions.
+
+```azurepowershell-interactive
+$id = @{
+ Name = 'myVNetB'
+ ResourceGroupName = 'myResourceGroupB'
+}
+$vnet = Get-AzVirtualNetwork @id
+
+$obj = Get-AzADUser -DisplayName 'UserA'
+
+$role = @{
+ ObjectId = $obj.id
+ RoleDefinitionName = 'Network Contributor'
+ Scope = $vnet.id
+}
+New-AzRoleAssignment @role
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
+
+Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserA**.
+
+**UserA** is used in this example for the user account. Replace this value with the display name of the user from **SubscriptionA** to whom you want to assign permissions on **myVNetB**. You can skip this step if you're using the same account for both subscriptions.
+
+```azurecli-interactive
+az ad user list --display-name UserA
+```
+
+```bash
+[
+ {
+ "businessPhones": [],
+ "displayName": "UserA",
+ "givenName": null,
+ "id": "ee0645cc-e439-4ffc-b956-79577e473969",
+ "jobTitle": null,
+ "mail": "userA@contoso.com",
+ "mobilePhone": null,
+ "officeLocation": null,
+ "preferredLanguage": null,
+ "surname": null,
+ "userPrincipalName": "usera_contoso.com#EXT#@fabrikam.onmicrosoft.com"
+ }
+]
+```
+
+Make note of the object ID of **UserA** in the **id** field. In this example, it's **ee0645cc-e439-4ffc-b956-79577e473969**.
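+
+As with **UserB**, you can capture this value in a variable instead of copying it (a sketch assuming a single match; `userAId` is an example name) and pass it to `--assignee` in the next command:
+
+```azurecli-interactive
+# Capture the object ID of UserA into a variable (assumes a single match).
+userAId=$(az ad user list \
+    --display-name UserA \
+    --query "[0].id" \
+    --output tsv)
+```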
+
+```azurecli-interactive
+vnetid=$(az network vnet show \
+ --name myVNetB \
+ --resource-group myResourceGroupB \
+ --query id \
+ --output tsv)
+
+az role assignment create \
+ --assignee ee0645cc-e439-4ffc-b956-79577e473969 \
+ --role "Network Contributor" \
+ --scope $vnetid
+```
+++
+## Obtain resource ID of myVNetB
+
+The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use the following steps to obtain the resource ID of **myVNetB**.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Remain signed in to the portal as **UserB**.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetB**.
+
+4. In **Settings**, select **Properties**.
+
+5. Copy the information in the **Resource ID** field and save it for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB`**.
+
+6. Sign out of the portal as **UserB**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetB**.
+
+```azurepowershell-interactive
+$id = @{
+ Name = 'myVNetB'
+ ResourceGroupName = 'myResourceGroupB'
+}
+$vnetB = Get-AzVirtualNetwork @id
+
+$vnetB.id
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**.
+
+```azurecli-interactive
+vnetidB=$(az network vnet show \
+ --name myVNetB \
+ --resource-group myResourceGroupB \
+ --query id \
+ --output tsv)
+
+echo $vnetidB
+```
+++
+## Create peering connection - myVNetA to myVNetB
+
+You'll need the **Resource ID** for **myVNetB** from the previous steps to set up the peering connection.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as **UserA**. If you're using one account for both subscriptions, change to **SubscriptionA** in the portal.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetA**.
+
+4. Select **Peerings**.
+
+5. Select **+ Add**.
+
+6. Enter or select the following information in **Add peering**:
+
+ | Setting | Value |
+ | - | -- |
+ | **This virtual network** | |
+ | Peering link name | Enter **myVNetAToMyVNetB**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Leave blank. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Select the box for **I know my resource ID**. | |
+ | Resource ID | Enter or paste the **Resource ID** for **myVNetB**. |
+
+7. In the pull-down box, select the **Directory** that corresponds with **myVNetB** and **UserB**.
+
+8. Select **Authenticate**.
+
+9. Select **Add**.
+
+10. Sign out of the portal as **UserA**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+### Sign in to SubscriptionA
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionA
+```
+
+### Sign in to SubscriptionB
+
+Authenticate to **SubscriptionB** so that the peering can be set up.
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+### Change to SubscriptionA (optional)
+
+You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**.
+
+Change context to **SubscriptionA**.
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionA
+```
+
+### Create peering connection
+
+Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetA** and **myVNetB**.
+
+```azurepowershell-interactive
+$netA = @{
+ Name = 'myVNetA'
+ ResourceGroupName = 'myResourceGroupA'
+}
+$vnetA = Get-AzVirtualNetwork @netA
+
+$peer = @{
+ Name = 'myVNetAToMyVNetB'
+ VirtualNetwork = $vnetA
+ RemoteVirtualNetworkId = '/subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB'
+}
+Add-AzVirtualNetworkPeering @peer
+```
+
+Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetA** to **myVNetB**.
+
+```azurepowershell-interactive
+$status = @{
+ ResourceGroupName = 'myResourceGroupA'
+ VirtualNetworkName = 'myVNetA'
+}
+Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState
+```
+
+```powershell
+PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState
+
+VirtualNetworkName PeeringState
+
+myVNetA Initiated
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+### Sign in to SubscriptionA
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+
+```azurecli-interactive
+az login
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set).
+
+```azurecli-interactive
+az account set --subscription "SubscriptionA"
+```
+
+### Sign in to SubscriptionB
+
+Authenticate to **SubscriptionB** so that the peering can be set up.
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**.
+
+```azurecli-interactive
+az login
+```
+
+### Change to SubscriptionA (optional)
+
+You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**.
+
+Change context to **SubscriptionA**.
+
+```azurecli-interactive
+az account set --subscription "SubscriptionA"
+```
+
+### Create peering connection
+
+Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **myVNetA** and **myVNetB**.
+
+```azurecli-interactive
+az network vnet peering create \
+ --name myVNetAToMyVNetB \
+ --resource-group myResourceGroupA \
+ --vnet-name myVNetA \
+ --remote-vnet /subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/VirtualNetworks/myVNetB \
+ --allow-vnet-access
+```
+
+Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetA** to **myVNetB**.
+
+```azurecli-interactive
+az network vnet peering list \
+ --resource-group myResourceGroupA \
+ --vnet-name myVNetA \
+ --output table
+```
+++
+The peering connection will show in **Peerings** in an **Initiated** state. To complete the peering, a corresponding connection must be set up in **myVNetB**.
+
+## Create peering connection - myVNetB to myVNetA
+
+You'll need the **Resource ID** for **myVNetA** from the previous steps to set up the peering connection.
+
+# [**Portal**](#tab/create-peering-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal.
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **myVNetB**.
+
+4. Select **Peerings**.
+
+5. Select **+ Add**.
+
+6. Enter or select the following information in **Add peering**:
+
+ | Setting | Value |
+ | - | -- |
+ | **This virtual network** | |
+ | Peering link name | Enter **myVNetBToMyVNetA**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Leave blank. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Select the box for **I know my resource ID**. | |
+ | Resource ID | Enter or paste the **Resource ID** for **myVNetA**. |
+
+7. In the pull-down box, select the **Directory** that corresponds with **myVNetA** and **UserA**.
+
+8. Select **Authenticate**.
+
+9. Select **Add**.
+
+# [**PowerShell**](#tab/create-peering-powershell)
+
+### Sign in to SubscriptionB
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionB
+```
+
+### Sign in to SubscriptionA
+
+Authenticate to **SubscriptionA** so that the peering can be set up.
+
+Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**.
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+### Change to SubscriptionB (optional)
+
+You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**.
+
+Change context to **SubscriptionB**.
+
+```azurepowershell-interactive
+Set-AzContext -Subscription SubscriptionB
+```
+
+### Create peering connection
+
+Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetB** and **myVNetA**.
+
+```azurepowershell-interactive
+$netB = @{
+ Name = 'myVNetB'
+ ResourceGroupName = 'myResourceGroupB'
+}
+$vnetB = Get-AzVirtualNetwork @netB
+
+$peer = @{
+ Name = 'myVNetBToMyVNetA'
+ VirtualNetwork = $vnetB
+ RemoteVirtualNetworkId = '/subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVNetA'
+}
+Add-AzVirtualNetworkPeering @peer
+```
+
+Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetB** to **myVNetA**.
+
+```azurepowershell-interactive
+$status = @{
+ ResourceGroupName = 'myResourceGroupB'
+ VirtualNetworkName = 'myVNetB'
+}
+Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState
+```
+
+```powershell
+PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState
+
+VirtualNetworkName PeeringState
+
+myVNetB Connected
+```
+
+# [**Azure CLI**](#tab/create-peering-cli)
+
+### Sign in to SubscriptionB
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**.
+
+```azurecli-interactive
+az login
+```
+
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set).
+
+```azurecli-interactive
+az account set --subscription "SubscriptionB"
+```
+
+### Sign in to SubscriptionA
+
+Authenticate to **SubscriptionA** so that the peering can be set up.
+
+Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**.
+
+```azurecli-interactive
+az login
+```
+
+### Change to SubscriptionB (optional)
+
+You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**.
+
+Change context to **SubscriptionB**.
+
+```azurecli-interactive
+az account set --subscription "SubscriptionB"
+```
+
+### Create peering connection
+
+Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **myVNetB** and **myVNetA**.
+
+```azurecli-interactive
+az network vnet peering create \
+ --name myVNetBToMyVNetA \
+ --resource-group myResourceGroupB \
+ --vnet-name myVNetB \
+ --remote-vnet /subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/VirtualNetworks/myVNetA \
+ --allow-vnet-access
+```
+
+Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetB** to **myVNetA**.
+
+```azurecli-interactive
+az network vnet peering list \
+ --resource-group myResourceGroupB \
+ --vnet-name myVNetB \
+ --output table
+```
++
+The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS.
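+
+To confirm the final state from the command line, you can query each peering directly. The following is a quick check using the resource names from this article; run each command in the subscription that contains the corresponding virtual network.
+
+```azurecli-interactive
+# Both commands should return "Connected" once the peering is established.
+az network vnet peering show \
+    --name myVNetAToMyVNetB \
+    --resource-group myResourceGroupA \
+    --vnet-name myVNetA \
+    --query peeringState \
+    --output tsv
+
+az network vnet peering show \
+    --name myVNetBToMyVNetA \
+    --resource-group myResourceGroupB \
+    --vnet-name myVNetB \
+    --query peeringState \
+    --output tsv
+```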
+
+For more information about using your own DNS for name resolution, see, [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+
+For more information about Azure DNS, see [What is Azure DNS?](/azure/dns/dns-overview).
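+
+As an example of the Azure DNS option, an Azure Private DNS zone linked to both virtual networks can resolve names across the peering. The following is a minimal sketch, assuming a hypothetical zone name of `contoso.internal`, example link names, that the zone is created in **SubscriptionA**, and that `$vnetidB` still holds the resource ID of **myVNetB** from the earlier step:
+
+```azurecli-interactive
+# Create a private DNS zone (the zone name is an example).
+az network private-dns zone create \
+    --resource-group myResourceGroupA \
+    --name contoso.internal
+
+# Link the zone to myVNetA and auto-register VM DNS records.
+az network private-dns link vnet create \
+    --resource-group myResourceGroupA \
+    --zone-name contoso.internal \
+    --name myVNetALink \
+    --virtual-network myVNetA \
+    --registration-enabled true
+
+# Link the zone to myVNetB by resource ID (cross-subscription), for name resolution only.
+az network private-dns link vnet create \
+    --resource-group myResourceGroupA \
+    --zone-name contoso.internal \
+    --name myVNetBLink \
+    --virtual-network $vnetidB \
+    --registration-enabled false
+```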
+
+## Next steps
- Thoroughly familiarize yourself with important [virtual network peering constraints and behaviors](virtual-network-manage-peering.md#requirements-and-constraints) before creating a virtual network peering for production use.
- Learn about all [virtual network peering settings](virtual-network-manage-peering.md#create-a-peering).
+- Learn how to [create a hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with virtual network peering.
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
The account you log into, or connect to Azure with, must be assigned to the [net
| Setting | Value | Details |
| --- | --- | --- |
| **Project details** | | |
- | **Subscription** | Select your subscription. | You can't use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network. |
- |**Resource group**| Select an existing [resource group](../azure-resource-manager/management/overview.md#resource-groups) or create a new one by selecting **Create new**. | An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group. |
+ | Subscription | Select your subscription. | You can't use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network. |
+ |Resource group| Select an existing [resource group](../azure-resource-manager/management/overview.md#resource-groups) or create a new one by selecting **Create new**. | An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group. |
| **Instance details** | | |
- | **Name** | Enter a name for the virtual network you're creating. | The name must be unique in the resource group that you select to create the virtual network in. <br> You can't change the name after the virtual network is created. <br> For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. |
- | **Region** | Select an Azure [region](https://azure.microsoft.com/regions/). | A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. |
+ | Name | Enter a name for the virtual network you're creating. | The name must be unique in the resource group that you select to create the virtual network in. <br> You can't change the name after the virtual network is created. <br> For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. |
+ | Region | Select an Azure [region](https://azure.microsoft.com/regions/). | A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. |
1. Select **IP Addresses** tab or **Next: IP Addresses >**, and enter the following IP address information:

    - **IPv4 Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network.
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/28/2022 Last updated : 01/05/2023
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/04/2022 Last updated : 01/05/2023
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
az group delete --name myResourceGroup --yes
## Next steps
-In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md#cli), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
+In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
You can [connect your own computer to a virtual network](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) through a VPN, and interact with resources in a virtual network, or in peered virtual networks. For reusable scripts to complete many of the tasks covered in the virtual network articles, see [script samples](cli-samples.md).
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
Remove-AzResourceGroup -Name myResourceGroup -Force
## Next steps
-In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md#powershell), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
+In this article, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
You can [connect your own computer to a virtual network](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json) through a VPN, and interact with resources in a virtual network, or in peered virtual networks. For reusable scripts to complete many of the tasks covered in the virtual network articles, see [script samples](powershell-samples.md).
virtual-network Virtual Network Troubleshoot Peering Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-peering-issues.md
For more information, see the [requirements and constraints](./virtual-network-p
### The virtual networks are in different subscriptions or Active Directory tenants
-To configure virtual network peering for virtual networks in different subscriptions or Active Directory tenants, see [Create peering in different subscriptions for Azure CLI](./create-peering-different-subscriptions.md#cli).
+To configure virtual network peering for virtual networks in different subscriptions or Active Directory tenants, see [Create a virtual network peering between different subscriptions](./create-peering-different-subscriptions.md).
> [!Note]
> To configure network peering, you must have **Network Contributor** permissions in both subscriptions. For more information, see [Peering permissions](virtual-network-manage-peering.md#permissions).
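As a rough illustration of the cross-subscription case, the sketch below passes the remote virtual network by resource ID instead of by name. The subscription IDs, resource groups, and VNet names are placeholders; the account used needs the permissions noted above in both subscriptions, and the peering must also be created from the other subscription in the opposite direction.

```bash
# Create the peering from VNetA (subscription A) to VNetB (subscription B). Placeholder values.
az account set --subscription "<subscription-A-id>"

az network vnet peering create \
  --name VNetAToVNetB \
  --resource-group RG-A \
  --vnet-name VNetA \
  --remote-vnet "/subscriptions/<subscription-B-id>/resourceGroups/RG-B/providers/Microsoft.Network/virtualNetworks/VNetB" \
  --allow-vnet-access
```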
For more information, see the following articles:
### Current tenant `<TENANT ID>` isn't authorized to access linked subscription
-To resolve this issue, see [Create peering - Azure CLI](./create-peering-different-subscriptions.md#cli).
+To resolve this issue, see [Create a virtual network peering between different subscriptions](./create-peering-different-subscriptions.md).
### Not connected
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
The following section describes common issues encountered when you configure Rou
### Troubleshooting data path
-* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region. Inter-hub inspection for Virtual WAN hubs that are in different Azure regions is available on a limited basis. For a list of available regions, please email previewinterhub@mcirosoft.com.
+* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region. Inter-hub inspection for Virtual WAN hubs that are in different Azure regions is available on a limited basis. For a list of available regions, please email previewinterhub@microsoft.com.
* Currently, Private Traffic Routing Policies are not supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
* You can verify that the Routing Policies have been applied properly by checking the Effective Routes of the DefaultRouteTable. If Private Routing Policies are configured, you should see routes in the DefaultRouteTable for private traffic prefixes with next hop Azure Firewall. If Internet Traffic Routing Policies are configured, you should see a default (0.0.0.0/0) route in the DefaultRouteTable with next hop Azure Firewall.
* If there are any Site-to-site VPN gateways or Point-to-site VPN gateways created **after** the feature has been confirmed to be enabled on your deployment, you will have to reach out again to previewinterhub@microsoft.com to get the feature enabled.
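One way to check the Effective Routes of the DefaultRouteTable outside the portal is the Azure CLI sketch below. It assumes the virtual-wan CLI extension is installed; the hub name, resource group, and subscription ID are placeholders.

```bash
# List effective routes on the virtual hub's DefaultRouteTable (placeholder names).
# Requires the extension: az extension add --name virtual-wan
az network vhub get-effective-routes \
  --resource-group myWanRG \
  --name myVirtualHub \
  --resource-type RouteTable \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/myWanRG/providers/Microsoft.Network/virtualHubs/myVirtualHub/hubRouteTables/defaultRouteTable" \
  --output table
```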
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-howto.md
Previously updated : 07/26/2021 Last updated : 01/04/2023 -
-# How to configure BGP on Azure VPN Gateways
-
-This article walks you through the steps to enable BGP on a cross-premises Site-to-Site (S2S) VPN connection and a VNet-to-VNet connection using the Azure portal.
+# How to configure BGP for Azure VPN Gateway
-## <a name="about"></a>About BGP
+This article helps you enable BGP on cross-premises site-to-site (S2S) VPN connections and VNet-to-VNet connections using the Azure portal.
BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
-For more information about the benefits of BGP and to understand the technical requirements and considerations of using BGP, see [Overview of BGP with Azure VPN Gateways](vpn-gateway-bgp-overview.md).
+For more information about the benefits of BGP and to understand the technical requirements and considerations of using BGP, see [About BGP and Azure VPN Gateway](vpn-gateway-bgp-overview.md).
## Getting started
-Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts, you build the topology as shown in Diagram 1.
+Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts (configure BGP on the gateway, S2S connection, and VNet-to-VNet connection), you build the topology as shown in Diagram 1.
**Diagram 1**
You can combine parts together to build a more complex, multi-hop, transit netwo
Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-## <a name ="config"></a>Part 1: Configure BGP on the virtual network gateway
+## <a name ="config"></a>Configure BGP on the virtual network gateway
In this section, you create and configure a virtual network, create and configure a virtual network gateway with BGP parameters, and obtain the Azure BGP Peer IP address. Diagram 2 shows the configuration settings to use when working with the steps in this section.
In this section, you create and configure a virtual network, create and configur
:::image type="content" source="./media/bgp-howto/bgp-gateway.png" alt-text="Diagram showing settings for virtual network gateway" border="false":::
-### 1. Create and configure TestVNet1
-
-In this step, you create and configure TestVNet1. Use the steps in the [Create a gateway tutorial](./tutorial-create-gateway-portal.md) to create and configure your Azure virtual network and VPN gateway. Use the reference settings in the screenshots below.
+### 1. Create TestVNet1
-* Virtual network:
+In this step, you create and configure TestVNet1. Use the steps in the [Create a gateway tutorial](./tutorial-create-gateway-portal.md) to create and configure your Azure virtual network and VPN gateway.
- :::image type="content" source="./media/bgp-howto/testvnet-1.png" alt-text="TestVNet1 with corresponding address prefixes":::
+Virtual network example values:
+* Resource Group: TestRG1
+* VNet: TestVNet1
+* Location/Region: EastUS
+* Address space: 10.11.0.0/16, 10.12.0.0/16
* Subnets:
+ * FrontEnd: 10.11.0.0/24
+ * BackEnd: 10.12.0.0/24
+ * GatewaySubnet: 10.12.255.0/27
- :::image type="content" source="./media/bgp-howto/testvnet-1-subnets.png" alt-text="TestVNet1 subnets":::
-
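If you'd rather create TestVNet1 from the command line, a minimal Azure CLI sketch using the example values above follows. The portal steps in the linked tutorial remain the documented path; the location is an assumption you can change.

```bash
# Create the resource group and TestVNet1 with the example address spaces and subnets.
az group create --name TestRG1 --location eastus

az network vnet create \
  --name TestVNet1 \
  --resource-group TestRG1 \
  --address-prefixes 10.11.0.0/16 10.12.0.0/16 \
  --subnet-name FrontEnd \
  --subnet-prefixes 10.11.0.0/24

az network vnet subnet create \
  --name BackEnd \
  --resource-group TestRG1 \
  --vnet-name TestVNet1 \
  --address-prefixes 10.12.0.0/24

az network vnet subnet create \
  --name GatewaySubnet \
  --resource-group TestRG1 \
  --vnet-name TestVNet1 \
  --address-prefixes 10.12.255.0/27
```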
-### 2. Create the VPN gateway for TestVNet1 with BGP parameters
+### 2. Create TestVNet1 gateway with BGP
In this step, you create a VPN gateway with the corresponding BGP parameters.
-1. In the Azure portal, navigate to the **Virtual Network Gateway** resource from the Marketplace, and select **Create**.
+1. Use the steps in [Create and manage a VPN gateway](tutorial-create-gateway-portal.md) to create a gateway with the following parameters:
-1. Fill in the parameters as shown below:
+ * Instance Details:
+ * Name: VNet1GW
+ * Region: EastUS
+ * Gateway type: VPN
+ * VPN type: Route-based
+ * SKU: VpnGw1 or higher
+ * Generation: select a generation
+ * Virtual network: TestVNet1
- :::image type="content" source="./media/bgp-howto/create-gateway-1.png" alt-text="Create VNG1":::
+ * Public IP address
+ * Public IP address Type: Basic or Standard
+ * Public IP address: Create new
+ * Public IP address name: VNet1GWIP
+ * Enable active-active: Disabled
+ * Configure BGP: Enabled
1. In the highlighted **Configure BGP** section of the page, configure the following settings:
- :::image type="content" source="./media/bgp-howto/create-gateway-1-bgp.png" alt-text="Configure BGP":::
- * Select **Configure BGP** - **Enabled** to show the BGP configuration section.
- * Fill in your ASN (Autonomous System Number).
-
- * The **Azure APIPA BGP IP address** field is optional. If your on-premises VPN devices use APIPA address for BGP, you must select an address from the Azure-reserved APIPA address range for VPN, which is from **169.254.21.0** to **169.254.22.255**. This example uses 169.254.21.11.
-
- * If you are creating an active-active VPN gateway, the BGP section will show an additional **Second Custom Azure APIPA BGP IP address**. Each address you select must be unique and be in the allowed APIPA range (**169.254.21.0** to **169.254.22.255**). Active-active gateways also support multiple addresses for both **Azure APIPA BGP IP address** and **Second Custom Azure APIPA BGP IP address**. Additional inputs will only appear after you enter your first APIPA BGP IP address.
-
- > [!IMPORTANT]
- >
- > * By default, Azure assigns a private IP address from the GatewaySubnet prefix range automatically as the Azure BGP IP address on the Azure VPN gateway. The custom Azure APIPA BGP address is needed when your on premises VPN devices use an APIPA address (169.254.0.1 to 169.254.255.254) as the BGP IP. Azure VPN Gateway will choose the custom APIPA address if the corresponding local network gateway resource (on-premises network) has an APIPA address as the BGP peer IP. If the local network gateway uses a regular IP address (not APIPA), Azure VPN Gateway will revert to the private IP address from the GatewaySubnet range.
- >
- > * The APIPA BGP addresses must not overlap between the on-premises VPN devices and all connected Azure VPN gateways.
- >
- > * When APIPA addresses are used on Azure VPN gateways, the gateways do not initiate BGP peering sessions with APIPA source IP addresses. The on-premises VPN device must initiate BGP peering connections.
- >
+ * The **Azure APIPA BGP IP address** field is optional. If your on-premises VPN devices use APIPA address for BGP, you must select an address from the Azure-reserved APIPA address range for VPN, which is from **169.254.21.0** to **169.254.22.255**.
+ * If you're creating an active-active VPN gateway, the BGP section will show an additional **Second Custom Azure APIPA BGP IP address**. Each address you select must be unique and be in the allowed APIPA range (**169.254.21.0** to **169.254.22.255**). Active-active gateways also support multiple addresses for both **Azure APIPA BGP IP address** and **Second Custom Azure APIPA BGP IP address**. Additional inputs will only appear after you enter your first APIPA BGP IP address.
+
+ > [!IMPORTANT]
+ >
+ > * By default, Azure assigns a private IP address from the GatewaySubnet prefix range automatically as the Azure BGP IP address on the Azure VPN gateway. The custom Azure APIPA BGP address is needed when your on premises VPN devices use an APIPA address (169.254.0.1 to 169.254.255.254) as the BGP IP. Azure VPN Gateway will choose the custom APIPA address if the corresponding local network gateway resource (on-premises network) has an APIPA address as the BGP peer IP. If the local network gateway uses a regular IP address (not APIPA), Azure VPN Gateway will revert to the private IP address from the GatewaySubnet range.
+ >
+ > * The APIPA BGP addresses must not overlap between the on-premises VPN devices and all connected Azure VPN gateways.
+ >
+ > * When APIPA addresses are used on Azure VPN gateways, the gateways do not initiate BGP peering sessions with APIPA source IP addresses. The on-premises VPN device must initiate BGP peering connections.
+ >
1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. You can see the deployment status on the Overview page for your gateway.
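For reference, an equivalent Azure CLI sketch for a BGP-enabled gateway is shown below. The ASN value 65010 is only an illustrative placeholder (Azure defaults to 65515), and custom APIPA BGP addresses, if you need them, are easier to configure in the portal as described above.

```bash
# Create the public IP and a route-based VPN gateway with a custom ASN (placeholder value).
# Gateway creation can take 45 minutes or more.
az network public-ip create \
  --name VNet1GWIP \
  --resource-group TestRG1

az network vnet-gateway create \
  --name VNet1GW \
  --resource-group TestRG1 \
  --vnet TestVNet1 \
  --public-ip-addresses VNet1GWIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --asn 65010 \
  --no-wait
```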
-### 3. Obtain the Azure BGP Peer IP addresses
+### 3. Get the Azure BGP Peer IP addresses
Once the gateway is created, you can obtain the BGP Peer IP addresses on the Azure VPN gateway. These addresses are needed to configure your on-premises VPN devices to establish BGP sessions with the Azure VPN gateway.
-1. Navigate to the Virtual network gateway resource and select the **Configuration** page to see the BGP configuration information as shown in the following screenshot. On this page, you can view all BGP configuration information on your Azure VPN gateway: ASN, Public IP address, and the corresponding BGP peer IP addresses on the Azure side (default and APIPA).
-
- :::image type="content" source="./media/bgp-howto/vnet-1-gw-bgp.png" alt-text="BGP gateway":::
+On the virtual network gateway **Configuration** page, you can view the BGP configuration information on your Azure VPN gateway: ASN, Public IP address, and the corresponding BGP peer IP addresses on the Azure side (default and APIPA). You can also make the following configuration changes:
-1. On the **Configuration** page you can make the following configuration changes:
+* You can update the ASN or the APIPA BGP IP address if needed.
+* If you have an active-active VPN gateway, this page will show the Public IP address, default, and APIPA BGP IP addresses of the second VPN gateway instance.
- * You can update the ASN or the APIPA BGP IP address if needed.
- * If you have an active-active VPN gateway, this page will show the Public IP address, default, and APIPA BGP IP addresses of the second Azure VPN gateway instance.
+To get the Azure BGP Peer IP address:
-1. If you made any changes, select **Save** to commit the changes to your Azure VPN gateway.
+1. Go to the virtual network gateway resource and select the **Configuration** page to see the BGP configuration information.
+1. Make a note of the BGP Peer IP address.
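The same BGP information is also available from the Azure CLI; a small sketch follows, using the example gateway name from this article.

```bash
# Show the gateway's BGP settings (ASN and BGP peer IP addresses).
az network vnet-gateway show \
  --name VNet1GW \
  --resource-group TestRG1 \
  --query bgpSettings

# Once connections are up, you can also check the state of each BGP peering.
az network vnet-gateway list-bgp-peer-status \
  --name VNet1GW \
  --resource-group TestRG1 \
  --output table
```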
-## <a name ="crosspremises"></a>Part 2: Configure BGP on cross-premises S2S connections
+## <a name ="crosspremises"></a>Configure BGP on cross-premises S2S connections
-To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway as explained in [Create site-to-site connection](./tutorial-site-to-site-portal.md). This article contains the additional properties required to specify the BGP configuration parameters.
+To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway as explained in [Create site-to-site connection](tutorial-site-to-site-portal.md). The following sections contain the additional properties required to specify the BGP configuration parameters.
**Diagram 3**

:::image type="content" source="./media/bgp-howto/bgp-crosspremises.png" alt-text="Diagram showing IPsec" border="false":::
-### 1. Configure BGP on the local network gateway
+### 1. Create a local network gateway
-In this step, you configure BGP on the local network gateway. Use the following screenshot as an example. The screenshot shows local network gateway (Site5) with the parameters specified in Diagram 3.
+Configure a local network gateway with BGP settings.
+* For information and steps, see the [local network gateway](tutorial-site-to-site-portal.md#LocalNetworkGateway) section in the site-to-site connection article.
+* If you already have a local network gateway, you can modify it. To do so, go to the local network gateway resource **Configuration** page and make any necessary changes.
-#### Important configuration considerations
+1. When you create the local network gateway for this exercise, use the following values:
-* The ASN and the BGP peer IP address must match your on-premises VPN router configuration.
-* You can leave the **Address space** empty only if you are using BGP to connect to this network. Azure VPN gateway will internally add a route of your BGP peer IP address to the corresponding IPsec tunnel. If you are **NOT** using BGP between the Azure VPN gateway and this particular network, you **must** provide a list of valid address prefixes for the **Address space**.
-* You can optionally use an **APIPA IP address** (169.254.x.x) as your on-premises BGP peer IP if needed. But you will also need to specify an APIPA IP address as described earlier in this article for your Azure VPN gateway, otherwise the BGP session cannot establish for this connection.
-* You can enter the BGP configuration information during the creation of the local network gateway, or you can add or change BGP configuration from the **Configuration** page of the local network gateway resource.
+ * Name: Site5
+ * IP address: The IP address of the gateway endpoint you want to connect to. Example: 128.9.9.9
+ * Address spaces: the address spaces on the on-premises site to which you want to route.
-**Example**
+1. To configure BGP settings, go to the **Advanced** page. Use the following example values (shown in Diagram 3), and modify any values as needed to match your environment.
-This example uses an APIPA address (169.254.100.1) as the on-premises BGP peer IP address:
+ * Configure BGP settings: Yes
+ * Autonomous system number (ASN): 65050
+ * BGP peer IP address: The address that you noted in previous steps.
+1. Click **Review + create** to create the local network gateway.
-### 2. Configure a S2S connection with BGP enabled
+#### Important configuration considerations
-In this step, you create a new connection that has BGP enabled. If you already have a connection and you want to enable BGP on it, you can [update an existing connection](#update).
+* The ASN and the BGP peer IP address must match your on-premises VPN router configuration.
+* You can leave the **Address space** empty only if you're using BGP to connect to this network. Azure VPN gateway will internally add a route of your BGP peer IP address to the corresponding IPsec tunnel. If you're **NOT** using BGP between the Azure VPN gateway and this particular network, you **must** provide a list of valid address prefixes for the **Address space**.
+* You can optionally use an **APIPA IP address** (169.254.x.x) as your on-premises BGP peer IP if needed. But you'll also need to specify an APIPA IP address as described earlier in this article for your Azure VPN gateway, otherwise the BGP session can't be established for this connection.
+* You can enter the BGP configuration information during the creation of the local network gateway, or you can add or change BGP configuration from the **Configuration** page of the local network gateway resource.
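A hedged Azure CLI equivalent of the Site5 local network gateway follows. The on-premises address prefix and BGP peer IP are placeholders for your own VPN device's values, keeping the considerations above in mind (the address space can be omitted when BGP is used on the connection).

```bash
# Create the local network gateway for Site5 with BGP settings (placeholder addresses).
az network local-gateway create \
  --name Site5 \
  --resource-group TestRG1 \
  --gateway-ip-address 128.9.9.9 \
  --local-address-prefixes 10.51.0.0/16 \
  --asn 65050 \
  --bgp-peering-address 10.51.255.254
```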
-#### To create a connection with BGP enabled
+### 2. Configure an S2S connection with BGP enabled
-To create a new connection with BGP enabled, on the **Add connection** page, fill in the values, then check the **Enable BGP** option to enable BGP on this connection. Select **OK** to create the connection.
+In this step, you create a new connection that has BGP enabled. If you already have a connection and you want to enable BGP on it, you can [update an existing connection](#update).
+#### To create a connection
-#### <a name ="update"></a>To update an existing connection
+1. To create a new connection, go to your virtual network gateway **Connections** page.
+1. Click **+Add** to open the **Add a connection** page.
+1. Fill in the necessary values.
+1. Select **Enable BGP** to enable BGP on this connection.
+1. Click **OK** to save changes.
-If you want to change the BGP option on a connection, navigate to the **Configuration** page of the connection resource, then toggle the **BGP** option as highlighted in the following example. Select **Save** to save any changes.
+#### <a name ="update"></a>To update an existing connection
+1. Go to your virtual network gateway **Connections** page.
+1. Click the connection you want to modify.
+1. Go to the **Configuration** page for the connection.
+1. Change the **BGP** setting to **Enabled**.
+1. **Save** your changes.
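If you script the connection instead, the following Azure CLI sketch enables BGP at creation time and on an existing connection. The connection name and shared key are placeholders; check `az network vpn-connection update --help` on your CLI version for the exact BGP flag.

```bash
# Create the S2S connection from VNet1GW to Site5 with BGP enabled (placeholder name and key).
az network vpn-connection create \
  --name VNet1ToSite5 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW \
  --local-gateway2 Site5 \
  --shared-key "<your-shared-key>" \
  --enable-bgp

# Enable BGP on an existing connection (verify the flag with --help on your CLI version).
az network vpn-connection update \
  --name VNet1ToSite5 \
  --resource-group TestRG1 \
  --enable-bgp true
```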
-## <a name ="v2v"></a>Part 3: Configure BGP on VNet-to-VNet connections
+## <a name ="v2v"></a>Configure BGP on VNet-to-VNet connections
-The steps to enable or disable BGP on a VNet-to-VNet connection are the same as the S2S steps in [Part 2](#crosspremises). You can enable BGP when creating the connection, or update the configuration on an existing VNet-to-VNet connection.
+The steps to enable or disable BGP on a VNet-to-VNet connection are the same as the [S2S steps](#crosspremises). You can enable BGP when creating the connection, or update the configuration on an existing VNet-to-VNet connection.
->[!NOTE]
+>[!NOTE]
>A VNet-to-VNet connection without BGP will limit the communication to the two connected VNets only. Enable BGP to allow transit routing capability to other S2S or VNet-to-VNet connections of these two VNets.
>
-For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 would not learn the routes for the on-premises network, Site5, and therefore could not communicate with Site 5. Once you enable BGP, as shown in the Diagram 4, all three networks will be able to communicate over the IPsec and VNet-to-VNet connections.
+For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site 5. Once you enable BGP, as shown in the Diagram 4, all three networks will be able to communicate over the IPsec and VNet-to-VNet connections.
**Diagram 4**
For context, referring to **Diagram 4**, if BGP were to be disabled between Test
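For completeness, a VNet-to-VNet connection can also be created with BGP from the CLI. A minimal sketch follows, assuming a second gateway named VNet2GW already exists in the same resource group (both placeholder assumptions); create the matching connection in the opposite direction as well.

```bash
# VNet-to-VNet connection from VNet1GW to VNet2GW with BGP enabled (placeholder names and key).
az network vpn-connection create \
  --name VNet1ToVNet2 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW \
  --vnet-gateway2 VNet2GW \
  --shared-key "<your-shared-key>" \
  --enable-bgp
```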
## Next steps
-Once your connection is complete, you can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md) for steps.
+For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
vpn-gateway Vpn Gateway Bgp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-overview.md
Last updated 05/18/2022
-# About BGP with Azure VPN Gateway
+# About BGP and Azure VPN Gateway
This article provides an overview of BGP (Border Gateway Protocol) support in Azure VPN Gateway.
-BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. When used in the context of Azure Virtual Networks, BGP enables the Azure VPN Gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
+BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. When used in the context of Azure Virtual Networks, BGP enables the Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
## <a name="why"></a>Why use BGP?
The following diagram shows an example of a multi-hop topology with multiple pat
## Next steps
-See [Getting started with BGP on Azure VPN gateways](vpn-gateway-bgp-resource-manager-ps.md) for steps to configure BGP for your cross-premises and VNet-to-VNet connections.
+See [How to configure BGP for Azure VPN Gateway](bgp-howto.md) for steps to configure BGP for your cross-premises and VNet-to-VNet connections.