Updates from: 01/04/2023 02:08:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Open the *app_config.py* file. This file contains information about your Azure A
|Key |Value | |||
-|`ENDPOINT`| The URI of your web API (for example, `https://localhost:44332/hello`).|
+|`ENDPOINT`| The URI of your web API (for example, `https://localhost:5000/getAToken`).|
|`SCOPE`| The web API [scopes](#step-62-configure-scopes) that you created.| | | |
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx" # Placeholder - for use ONLY during t
### More code here # This is the API resource endpoint
-ENDPOINT = 'https://localhost:44332'
+ENDPOINT = 'https://localhost:5000'
SCOPE = ["https://contoso.onmicrosoft.com/api/demo.read", "https://contoso.onmicrosoft.com/api/demo.write"]
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
description: Tutorial to configure Azure Active Directory B2C with Hypr for true passwordless strong customer authentication -+ Previously updated : 09/13/2022 Last updated : 12/7/2022 # Tutorial for configuring HYPR with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to configure Azure AD B2C with [HYPR](https://get.hypr.com). With Azure AD B2C as an identity provider, you can integrate HYPR with any of your customer applications to provide true passwordless authentication to your users. HYPR replaces passwords with Public key encryptions eliminating fraud, phishing, and credential reuse.
+In this tutorial, learn to configure Azure Active Directory B2C (Azure AD B2C) with [HYPR](https://get.hypr.com). When Azure AD B2C is the identity provider (IdP), you can integrate HYPR with customer applications for passwordless authentication. HYPR replaces passwords with public-key encryption, which helps prevent fraud, phishing, and credential reuse.
## Prerequisites To get started, you'll need: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md). Tenant is linked to your Azure subscription.--- A HYPR cloud tenant, get a free [trial account](https://get.hypr.com/free-trial).--- A user's mobile device registered using the HYPR REST APIs or the HYPR Device Manager in your HYPR tenant. For example, you can use the [HYPR Java SDK](https://docs.hypr.com/integratinghypr/docs/hypr-java-web-sdk) to accomplish this task.
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- A HYPR cloud tenant
+ - Request a HYPR [custom demo](https://get.hypr.com/free-trial)
+- A user mobile device registered using the HYPR REST APIs, or the HYPR Device Manager in your HYPR tenant
+ - For example, see [HYPR SDK for Java Web](https://docs.hypr.com/integratinghypr/docs/hypr-java-web-sdk)
## Scenario description
-The HYRP integration includes the following components:
-- Azure AD B2C – The authorization server, responsible for verifying the user's credentials, also known as the identity provider
+The HYPR integration has the following components:
-- Web and mobile applications - Your mobile or web applications that you choose to protect with HYPR and Azure AD B2C. HYPR provides a robust mobile SDK also a mobile app that you can use on iOS and Android platforms to do true passwordless authentication.--- The HYPR mobile app - The HYPR mobile app can be used to execute this sample if prefer not to use the mobile SDKs in your own mobile applications.--- HYPR REST APIs - You can use the HYPR APIs to do both user device registration and authentication. These APIs can be found [here](https://apidocs.hypr.com).
+- **Azure AD B2C** – The authorization server to verify user credentials, or the identity provider (IdP)
+- **Web and mobile applications** - For mobile or web applications protected by HYPR and Azure AD B2C
+ - HYPR has mobile SDK and a mobile app for iOS and Android
+- **HYPR mobile app** - Use it for this tutorial, if you're not using the mobile SDKs in your mobile applications
+- **HYPR REST APIs** - User device registration and authentication
+ - Go to apidocs.hypr.com for [HYPR Passwordless APIs](https://apidocs.hypr.com)
The following architecture diagram shows the implementation.
-![Screenshot for hypr-architecture-diagram](media/partner-hypr/hypr-architecture-diagram.png)
+ ![Diagram of hypr architecture](media/partner-hypr/hypr-architecture-diagram.png)
-|Step | Description |
-|:--| :--|
-| 1. | User arrives at a login page. Users select sign-in/sign-up and enter username into the page.
-| 2. | The application sends the user attributes to Azure AD B2C for identify verification.
-| 3. | Azure AD B2C collects the user attributes and sends the attributes to HYPR to authenticate the user through the HYPR mobile app.
-| 4. | HYPR sends a push notification to the registered user mobile device for a Fast Identity Online (FIDO) certified authentication. It can be a user finger print, biometric or decentralized pin.
-| 5. | After user acknowledges the push notification, user is either granted or denied access to the customer application based on the verification results.
+1. User arrives at a sign-in page and selects sign-in or sign-up. User enters username.
+2. The application sends the user attributes to Azure AD B2C for identity verification.
+3. Azure AD B2C sends user attributes to HYPR to authenticate the user through the HYPR mobile app.
+4. HYPR sends a push notification to the registered user mobile device for a Fast Identity Online (FIDO) certified authentication. It can be a user fingerprint, biometric, or decentralized PIN.
+5. After the user acknowledges the push notification, the user is granted or denied access to the customer application.
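Steps 4 and 5 rely on FIDO-style public-key credentials: the private key stays on the device, and the service verifies signed challenges with the registered public key. The sketch below (Python, `cryptography` package) only illustrates that sign-and-verify idea; it is not HYPR's actual protocol, SDK, or API.

```python
# Conceptual sketch of the key-based step: the device signs a challenge with its private
# key and the service verifies it with the registered public key. This is NOT HYPR's
# protocol or API - it only illustrates the FIDO-style sign/verify pattern.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At registration: a key pair is created on the device; the public key is stored server-side.
device_private_key = Ed25519PrivateKey.generate()
registered_public_key = device_private_key.public_key()

# At sign-in: the service issues a challenge, the device signs it after the user's
# biometric gesture, and the service verifies the signature.
challenge = b"random-server-challenge"
signature = device_private_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("Signature valid - grant access")
except InvalidSignature:
    print("Signature invalid - deny access")
```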
## Configure the Azure AD B2C policy
-1. Go to the [Azure AD B2C HYPR policy](https://github.com/HYPR-Corp-Public/Azure-AD-B2C-HYPR-Sample/tree/master/policy) in the Policy folder.
-
-2. Follow this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
-
+1. Go to [Azure-AD-B2C-HYPR-Sample/policy/](https://github.com/HYPR-Corp-Public/Azure-AD-B2C-HYPR-Sample/tree/master/policy).
+2. Follow the instructions in [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [Active-directory-b2c-custom-policy-starterpack/LocalAccounts/](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts)
3. Configure the policy for the Azure AD B2C tenant. >[!NOTE]
->Update the provided policies to relate to your specific tenant.
+>Update policies to relate to your tenant.
## Test the user flow
-1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
+1. Open the Azure AD B2C tenant.
+2. Under **Policies**, select **Identity Experience Framework**.
+3. Select the **SignUpSignIn** you created.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (sample is JWT).
+6. For **Reply URL**, select the **redirect URL**.
+7. Select **Run user flow**.
+8. Complete the sign-up flow to create an account.
+9. After the user attribute is created, HYPR is called.
-2. Select your previously created **SignUpSignIn**.
-
-3. Select **Run user flow** and select the settings:
-
- a. **Application**: select the registered app (sample is JWT)
-
- b. **Reply URL**: select the **redirect URL**
-
- c. Select **Run user flow**.
-
-4. Go through sign-up flow and create an account
-
-5. HYPR will be called during the flow, after user attribute is created. If the flow is incomplete, check that user isn't saved in the directory.
+>[!TIP]
+>If the flow is incomplete, confirm the user isn't saved in the directory.
## Next steps
-For additional information, review the following articles:
- - [Custom policies in Azure AD B2C](./custom-policy-overview.md)- - [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
Title: Tutorial to configure Azure Active Directory B2C with Jumio
-description: In this tutorial, you configure Azure Active Directory B2C with Jumio for automated ID verification, safeguarding customer data.
+description: Configure Azure Active Directory B2C with Jumio for automated ID verification, safeguarding customer data.
-+ - Previously updated : 08/20/2020 Last updated : 12/7/2022 # Tutorial for configuring Jumio with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) with [Jumio](https://www.jumio.com/). Jumio is an ID verification service that enables real-time automated ID verification to help safeguard customer data.
+In this tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) with [Jumio](https://www.jumio.com/), an ID verification service that enables real-time automated ID verification to help protect customer data.
## Prerequisites To get started, you'll need: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
## Scenario description The Jumio integration includes the following components: -- Azure AD B2C: The authorization server that's responsible for verifying the user's credentials. It's also known as the identity provider.--- Jumio: The service that takes the ID details provided by the user and verifies them.--- Intermediate REST API: The API that implements the integration between Azure AD B2C and the Jumio service.--- Azure Blob storage: The service that supplies custom UI files to the Azure AD B2C policies.
+- **Azure AD B2C** - The authorization server that verifies user credentials, also known as the identity provider (IdP)
+- **Jumio** - Verifies user ID details
+- **Intermediate REST API** - Use it to implement Azure AD B2C and Jumio integration
+- **Azure Blob storage** - Use it to obtain custom UI files for the Azure AD B2C policies
The following architecture diagram shows the implementation.
-![Diagram of the architecture of a Azure AD B2C integration with Jumio.](./media/partner-jumio/jumio-architecture-diagram.png)
+ ![Diagram of the architecture of a Azure AD B2C integration with Jumio](./media/partner-jumio/jumio-architecture-diagram.png)
-|Step | Description |
-|:--| :--|
-| 1. | The user arrives at a page to either sign in or sign up to create an account. Azure AD B2C collects the user attributes.
-| 2. | Azure AD B2C calls the middle-layer API and passes on the user attributes.
-| 3. | The middle-layer API collects user attributes and transforms them into a format that Jumio API can consume. Then it sends the attributes to Jumio.
-| 4. | After Jumio consumes the information and processes it, it returns the result to the middle-layer API.
-| 5. | The middle-layer API processes the information and sends back relevant information to Azure AD B2C.
-| 6. | Azure AD B2C receives information back from the middle-layer API. If it shows a failure response, an error message is displayed to user. If it shows a success response, the user is authenticated and written into the directory.
+1. The user signs in, or signs up, and creates an account. Azure AD B2C collects user attributes.
+2. Azure AD B2C calls the middle-layer API and passes the user attributes.
+3. The middle-layer API converts user attributes into a Jumio API format and sends the attributes to Jumio.
+4. Jumio processes the attributes, and returns results to the middle-layer API.
+5. The middle-layer API processes the results and sends relevant information to Azure AD B2C.
+6. Azure AD B2C receives the information. If the response fails, an error message appears. If the response succeeds, the user is authenticated and written into the directory.
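Steps 2 through 5 describe the contract between Azure AD B2C and the middle-layer API. The provided sample is a .NET API; the following is a minimal Flask sketch of that contract only, with the Jumio call reduced to a placeholder and an assumed route and claim names, showing the success response and the 409 validation-error shape Azure AD B2C expects from a REST technical profile.

```python
# Minimal sketch of the middle-layer API contract used by an Azure AD B2C REST technical
# profile. The real sample is a .NET API; the Jumio call here is a placeholder, and the
# route and claim names are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_with_jumio(claims: dict) -> bool:
    # Placeholder for transforming the attributes and calling the Jumio verification service.
    return bool(claims.get("givenName")) and bool(claims.get("surname"))

@app.route("/api/identity/verify", methods=["POST"])
def verify():
    claims = request.get_json(force=True) or {}
    if verify_with_jumio(claims):
        # HTTP 200 with output claims lets the Azure AD B2C journey continue.
        return jsonify({"identityVerified": True}), 200
    # A 409 response with this shape surfaces userMessage as a validation error in the B2C page.
    return jsonify({
        "version": "1.0.0",
        "status": 409,
        "userMessage": "We couldn't verify your identity. Check your details and try again."
    }), 409

if __name__ == "__main__":
    app.run(port=5000)
```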
-## Sign up with Jumio
+## Create a Jumio account
-To create a Jumio account, contact [Jumio](https://www.jumio.com/contact/).
+To create a Jumio account, go to the jumio.com [Contact](https://www.jumio.com/contact/) page.
## Configure Azure AD B2C with Jumio
-After you create a Jumio account, you use the account to configure Azure AD B2C. The following sections describe the process in sequence.
+After you create a Jumio account, use it to configure Azure AD B2C.
### Deploy the API
-Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/API/Jumio.Api) to an Azure service. You can publish the code from Visual Studio, by following [these instructions](/visualstudio/deployment/quickstart-deploy-to-azure).
+From [samples/Jumio/API/Jumio.Api/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/API/Jumio.Api), deploy the code to an Azure service. You can publish the code from Visual Studio.
>[!NOTE]
->You'll need the URL of the deployed service to configure Azure AD with the required settings.
+>To configure Azure AD, you'll need the deployed service URL.
### Deploy the client certificate
-1. A client certificate helps protect the Jumio API call. Create a self-signed certificate by using the following PowerShell sample code:
+A client certificate helps protect the Jumio API call.
+
+1. Create a self-signed certificate by using the following PowerShell sample code:
``` PowerShell $cert = New-SelfSignedCertificate -Type Custom -Subject "CN=Demo-SigningCertificate" -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.3") -KeyUsage DigitalSignature -KeyAlgorithm RSA -KeyLength 2048 -NotAfter (Get-Date).AddYears(2) -CertStoreLocation "Cert:\CurrentUser\My"
Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrati
```
- The certificate is then exported to the location specified for ``{your-local-path}``.
-
-3. Import the certificate to Azure App Service by following the instructions in [this article](../app-service/configure-ssl-certificate.md#upload-a-private-certificate).
+2. The certificate is exported to the location specified for ``{your-local-path}``.
+3. To import the certificate to Azure App Service, see [Upload a private certificate](../app-service/configure-ssl-certificate.md#upload-a-private-certificate).
### Create a signing/encryption key
-Create a random string with a length greater than 64 characters that contains only letters and numbers.
+1. Create a random string with a length greater than 64 characters (letters and numbers only).
-For example: ``C9CB44D98642A7062A0D39B94B6CDC1E54276F2E7CFFBF44288CEE73C08A8A65``
+ For example: ``C9CB44D98642A7062A0D39B94B6CDC1E54276F2E7CFFBF44288CEE73C08A8A65``
-Use the following PowerShell script to create the string:
+2. Use the following PowerShell script to create the string:
```PowerShell -join ((0x30..0x39) + ( 0x41..0x5A) + ( 0x61..0x7A) + ( 65..90 ) | Get-Random -Count 64 | % {[char]$_})
Use the following PowerShell script to create the string:
### Configure the API
-You can [configure application settings in Azure App Service](../app-service/configure-common.md#configure-app-settings). With this method, you can securely configure settings without checking them into a repository. You'll need to provide the following settings to the REST API:
+You can [configure application settings in Azure App Service](../app-service/configure-common.md#configure-app-settings) without checking them into a repository. You'll need to provide the following settings to the REST API:
| Application settings | Source | Notes |
-| :-- | :| :--|
-|JumioSettings:AuthUsername | Jumio account configuration | |
-|JumioSettings:AuthPassword | Jumio account configuration | |
-|AppSettings:SigningCertThumbprint|Thumbprint of the created self-signed certificate| |
-|AppSettings:IdTokenSigningKey| Signing key created using PowerShell | |
-| AppSettings:IdTokenEncryptionKey |Encryption key created using PowerShell
-| AppSettings:IdTokenIssuer | Issuer to be used for the JWT token (a GUID value is preferred) |
-| AppSettings:IdTokenAudience | Audience to be used for the JWT token (a GUID value is preferred) |
-|AppSettings:BaseRedirectUrl | Base URL of the Azure AD B2C policy | https://{your-tenant-name}.b2clogin.com/{your-application-id}|
-| WEBSITE_LOAD_CERTIFICATES| Thumbprint of the created self-signed certificate |
+|---|---|---|
+|JumioSettings:AuthUsername | Jumio account configuration | N/A |
+|JumioSettings:AuthPassword | Jumio account configuration | N/A |
+|AppSettings:SigningCertThumbprint|The created self-signed certificate thumbprint| N/A |
+|AppSettings:IdTokenSigningKey| Signing key created using PowerShell |N/A |
+|AppSettings:IdTokenEncryptionKey |Encryption key created using PowerShell|N/A|
+|AppSettings:IdTokenIssuer | Issuer for the JWT token (a GUID value is preferred) |N/A|
+|AppSettings:IdTokenAudience | Audience for the JWT token (a GUID value is preferred) |N/A|
+|AppSettings:BaseRedirectUrl | Azure AD B2C policy base URL | https://{your-tenant-name}.b2clogin.com/{your-application-id}|
+|WEBSITE_LOAD_CERTIFICATES| The created self-signed certificate thumbprint |N/A|
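The issuer and audience settings call for GUID values; if you need to generate a couple, a quick sketch like this works (any GUID generator is equivalent):

```python
# Generate example GUIDs for AppSettings:IdTokenIssuer and AppSettings:IdTokenAudience.
import uuid

print("IdTokenIssuer:  ", uuid.uuid4())
print("IdTokenAudience:", uuid.uuid4())
```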
### Deploy the UI 1. Set up a [blob storage container in your storage account](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).-
-2. Store the UI files from the [UI folder](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI) in your blob container.
+2. Store the UI files from the [/samples/Jumio/UI/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI) in your blob container.
#### Update UI files
-1. In the UI files, go to the folder [ocean_blue](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI/ocean_blue).
-
+1. In the UI files, go to [/samples/Jumio/UI/ocean_blue/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI/ocean_blue).
2. Open each HTML file.-
-3. Find and replace `{your-ui-blob-container-url}` with the URL of your blob container.
-
-4. Find and replace `{your-intermediate-api-url}` with the URL of the intermediate API app service.
+3. Find and replace `{your-ui-blob-container-url}` with your blob container URL.
+4. Find and replace `{your-intermediate-api-url}` with the intermediate API app service URL.
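If you'd rather script the find-and-replace across the downloaded HTML files, a small sketch like the following works; the folder path and URLs are placeholders for your own values.

```python
# Replace the UI placeholders in every downloaded HTML file.
# The folder path and URLs below are placeholders - adjust them to your environment.
from pathlib import Path

ui_folder = Path("./ocean_blue")   # local copy of the UI files
replacements = {
    "{your-ui-blob-container-url}": "https://yourstorage.blob.core.windows.net/ui",
    "{your-intermediate-api-url}": "https://your-intermediate-api.azurewebsites.net",
}

for html_file in ui_folder.glob("*.html"):
    text = html_file.read_text(encoding="utf-8")
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    html_file.write_text(text, encoding="utf-8")
    print(f"Updated {html_file.name}")
```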
>[!NOTE]
-> As a best practice, we recommend that you add consent notification on the attribute collection page. Notify users that the information will be sent to third-party services for identity verification.
+> We recommend you add a consent notification on the attribute collection page. Notify users that their information goes to third-party services for identity verification.
### Configure the Azure AD B2C policy
-1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/Policies) in the Policies folder.
-
-2. Follow [this article](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts).
-
+1. Go to the Azure AD B2C policy in [/samples/Jumio/Policies/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/Policies).
+2. Use the instructions in [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download the [LocalAccounts](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) starter pack.
3. Configure the policy for the Azure AD B2C tenant. >[!NOTE]
->Update the provided policies to relate to your specific tenant.
+>Update policies to relate to your tenant.
## Test the user flow
-1. Open the Azure AD B2C tenant. Under **Policies**, select **Identity Experience Framework**.
-
-2. Select your previously created **SignUpSignIn**.
-
-3. Select **Run user flow** and then:
-
- a. For **Application**, select the registered app (the sample is JWT).
+1. Open the Azure AD B2C tenant.
+2. Under **Policies**, select **Identity Experience Framework**.
+3. Select your created **SignUpSignIn**.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (example is JWT).
+6. For **Reply URL**, select the **redirect URL**.
+7. Select **Run user flow**.
+8. Complete the sign-up flow.
+9. Create an account.
+10. After the user attribute is created, Jumio is called.
- b. For **Reply URL**, select the **redirect URL**.
-
- c. Select **Run user flow**.
-
-4. Go through the sign-up flow and create an account.
-
-5. The Jumio service will be called during the flow, after the user attribute is created. If the flow is incomplete, check that the user isn't saved in the directory.
+>[!TIP]
+>If the flow is incomplete, confirm the user isn't saved in the directory.
## Next steps
-For additional information, review the following articles:
- - [Custom policies in Azure AD B2C](./custom-policy-overview.md)- - [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
Title: Tutorial to configure Azure Active Directory B2C with LexisNexis
description: Learn how to integrate Azure AD B2C authentication with LexisNexis which is a profiling and identity validation service and is used to verify user identification and provide comprehensive risk assessments based on the user's device. -+ - Previously updated : 09/13/2022 Last updated : 12/7/2022 # Tutorial for configuring LexisNexis with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate Azure AD B2C with [LexisNexis](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd). LexisNexis provides a variety of solutions, you can find them [here](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd). In this sample tutorial, we'll cover the **ThreatMetrix** solution from LexisNexis. ThreatMetrix is a profiling and identity validation service. It's used to verify user identification and provide comprehensive risk assessments based on the user's device.
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with [LexisNexis ThreatMetrix](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd). Learn more about LexisNexis contact methods and [ThreatMetrix](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd), the profiling and identity-validation service that also provides comprehensive risk assessments based on user devices.
+
+This integration's profiling is based on user information provided during the sign-up flow. ThreatMetrix determines whether the user is permitted to sign in.
-This integration does profiling based on a few pieces of user information, which is provided by the user during sign-up flow. ThreatMetrix determines whether the user should be allowed to continue to log in or not. The following attributes are considered in ThreatMetrix's risk analysis:
+ThreatMetrix risk analysis attributes:
- Email-- Phone Number-- Profiling information collected from the user's machine
+- Phone number
+- Profiling information collected from the user device
## Prerequisites To get started, you'll need: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- [An Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription.
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- [An Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
## Scenario description The ThreatMetrix integration includes the following components: -- Azure AD B2C – The authorization server, responsible for verifying the user's credentials, also known as the identity provider--- ThreatMetrix – The ThreatMetrix service takes inputs provided by the user and combines it with profiling information gathered from the user's machine to verify the security of the user interaction.--- Custom REST API – This API implements the integration between Azure AD B2C and the ThreatMetrix service.
+- **Azure AD B2C** – The authorization server that verifies user credentials, also known as the identity provider (IdP)
+- **ThreatMetrix** – Combines user input with profiling information from the user device to verify the interaction's security
+- **Custom REST API** – Use it to implement the Azure AD B2C and ThreatMetrix integration
The following architecture diagram shows the implementation.
-![screenshot for lexisnexis-architecture-diagram](media/partner-lexisnexis/lexisnexis-architecture-diagram.png)
+ ![Diagram of lexisnexis solution architecture.](media/partner-lexisnexis/lexisnexis-architecture-diagram.png)
-|Step | Description |
-|:--|:-|
-|1. | User arrives at a login page. User selects sign-up to create a new account and enter information into the page. Azure AD B2C collects the user attributes.
-| 2. | Azure AD B2C calls the middle layer API and passes on the user attributes.
-| 3. | Middle layer API collects user attributes and transforms it into a format that LexisNexis API could consume. Then, sends it to LexisNexis.
-| 4. | LexisNexis consumes the information and processes it to validate user identification based on the risk analysis. Then, it returns the result to the middle layer API.
-| 5. | Middle layer API processes the information and sends back relevant information to Azure AD B2C.
-| 6. | Azure AD B2C receives information back from middle layer API. If it shows a Failure response, an error message is displayed to user. If it shows a Success response, the user is authenticated and granted access.
-## Onboard with LexisNexis
+1. User selects sign-up to create a new account and enters attributes. Azure AD B2C collects the attributes.
+2. Azure AD B2C calls the middle layer API and passes the user attributes.
+3. Middle layer API transforms attributes into a consumable API format and sends it to LexisNexis.
+4. LexisNexis validates user identification based on risk analysis and returns the results to the middle layer API.
+5. Middle layer API processes the results and sends relevant information to Azure AD B2C.
+6. Azure AD B2C receives information from middle layer API. If the response fails, an error message appears. If the response succeeds, the user is authenticated and granted access.
-1. To create a LexisNexis account, contact [LexisNexis](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd)
+## Create a LexisNexis account and policy
-2. Create a LexisNexis policy that meets your requirements. Use the documentation available [here](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd).
+1. To create a LexisNexis account, go to lexisnexis.com and select [Contact Us](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd).
+2. Create a policy using [LexisNexis documentation](https://risk.lexisnexis.com/products/threatmetrix/?utm_source=bingads&utm_medium=ppc&utm_campaign=SEM%7CLNRS%7CUS%7CEN%7CTMX%7CBR%7CBing&utm_term=threat%20metrix&utm_network=o&utm_device=c&msclkid=1e85e32ec18c1ae9bbc1bc2998e026bd).
+3. After account creation, you'll receive API configuration information. Use the following sections to complete the process.
>[!NOTE]
-> The name of the policy will be used later.
-
-Once an account is created, you'll receive the information you need for API configuration. The following sections describe the process.
+>You'll use the policy name later.
## Configure Azure AD B2C with LexisNexis
-### Part 1 - Deploy the API
+### Deploy the API
-Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/Api) to an Azure service. The code can be published from Visual Studio, following these [instructions](/visualstudio/deployment/quickstart-deploy-to-azure).
+To deploy the API code to an Azure service, go to [/samples/ThreatMetrix/Api](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/Api). You can publish the code from Visual Studio.
>[!NOTE]
->You'll need the URL of the deployed service to configure Azure AD with the required settings.
+>You'll need deployed service URL to configure Azure AD.
-### Part 2 - Configure the API
+### Configure the API
-Application settings can be [configured in the App service in Azure](../app-service/configure-common.md#configure-app-settings). With this method, settings can be securely configured without checking them into a repository. You'll need to provide the following settings to the REST API:
+You can [configure app settings](../app-service/configure-common.md#configure-app-settings) in the Azure App service, without checking them into a repository. You'll provide the following settings to the REST API:
| Application settings | Source | Notes |
-| :-- | :| :--|
-|ThreatMetrix:Url | ThreatMetrix account configuration | |
-|ThreatMetrix:OrgId | ThreatMetrix account configuration | |
-|ThreatMetrix:ApiKey |ThreatMetrix account configuration| |
-|ThreatMetrix:Policy | Name of policy created in ThreatMetrix | |
-| BasicAuth:ApiUsername |Define a username for the API| Username will be used in the Azure AD B2C configuration
-| BasicAuth:ApiPassword | Define a password for the API | Password will be used in the Azure AD B2C configuration
+|---|---|---|
+|ThreatMetrix:Url | ThreatMetrix account configuration |N/A|
+|ThreatMetrix:OrgId | ThreatMetrix account configuration |N/A|
+|ThreatMetrix:ApiKey |ThreatMetrix account configuration|N/A|
+|ThreatMetrix:Policy | Policy name created in ThreatMetrix |N/A|
+| BasicAuth:ApiUsername |Enter an API username| Username is used in the Azure AD B2C configuration|
+| BasicAuth:ApiPassword | Enter an API password | Password is used in the Azure AD B2C configuration|
-### Part 3 - Deploy the UI
+### Deploy the UI
-This solution uses custom UI templates that are loaded by Azure AD B2C. These UI templates do the profiling that is sent directly to the ThreatMetrix service.
+This solution uses custom UI templates loaded by Azure AD B2C. These templates do the profiling that goes to ThreatMetrix.
-Refer to these [instructions](./customize-ui-with-html.md#custom-page-content-walkthrough) to deploy the included [UI files](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/ui-template) to a blob storage account. The instructions include setting up a blob storage account, configuring CORS, and enabling public access.
+Use the instructions in [custom page content walkthrough](./customize-ui-with-html.md#custom-page-content-walkthrough) to deploy the UI files in [/samples/ThreatMetrix/ui-template](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/ui-template) to a blob storage account. The instructions include setting up a blob storage account, configuring cross-origin resource sharing (CORS), and enabling public access.
-The UI is based on the [ocean blue template](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/ui-template/ocean_blue). All links within the UI should be updated to refer to the deployed location. In the UI folder, find and replace https://yourblobstorage/blobcontainer with the deployed location.
+The UI is based on the ocean blue template in [/samples/ThreatMetrix/ui-template/ocean_blue](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/ui-template/ocean_blue). Update UI links to refer to the deployed location. In the UI folder, find and replace `https://yourblobstorage/blobcontainer` with the deployed location.
-### Part 4 - Create API policy keys
+### Create API policy keys
-Refer to this [document](./secure-rest-api.md#add-rest-api-username-and-password-policy-keys) and create two policy keys ΓÇô one for the API username, and one for the API password that you defined above.
+To create two policy keys, follow the instructions in [add REST API username and password policy keys](./secure-rest-api.md#add-rest-api-username-and-password-policy-keys). One key is for the API username and the other is for the API password you defined earlier.
-The sample policy uses these key names:
+Example policy key names:
- B2C_1A_RestApiUsername- - B2C_1A_RestApiPassword
-### Part 5 - Update the API URL
+### Update the API URL
-In the provided [TrustFrameworkExtensions policy](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml), find the technical profile named `Rest-LexisNexus-SessionQuery`, and update the `ServiceUrl` metadata item with the location of the API deployed above.
+In [samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml), find the `Rest-LexisNexus-SessionQuery` technical profile, and update the `ServiceUrl` metadata item with the deployed API location.
-### Part 6 - Update UI URL
+### Update the UI URL
-In the provided [TrustFrameworkExtensions policy](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml), do a find and replace to search for https://yourblobstorage/blobcontainer/ with the location the UI files are deployed to.
+In [/samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/ThreatMetrix/policy/TrustFrameworkExtensions.xml), search for and replace `https://yourblobstorage/blobcontainer/` with the UI-file location.
>[!NOTE]
-> As a best practice, we recommend that customers add consent notification in the attribute collection page. Notify users that information will be send to third-party services for Identity verification.
+>We recommend you add a consent notification on the attribute collection page. Notify users that information goes to third-party services for identity verification.
-### Part 7 - Configure the Azure AD B2C policy
+### Configure the Azure AD B2C policy
-Refer to this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [Local Accounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts) and configure the [policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/policy) for the Azure AD B2C tenant.
+Go to the [custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download [LocalAccounts](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts). Configure the policy in [samples/ThreatMetrix/policy/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/ThreatMetrix/policy) for the Azure AD B2C tenant.
>[!NOTE]
->Update the provided policies to relate to your specific tenant.
+>Update the policies to relate to your tenant.
## Test the user flow
-1. Open the Azure AD B2C tenant and under Policies select **User flows**.
-
-2. Select your previously created **User Flow**.
-
-3. Select **Run user flow** and select the settings:
-
- a. **Application**: select the registered app (sample is JWT)
-
- b. **Reply URL**: select the **redirect URL**
-
- c. Select **Run user flow**.
-
-4. Go through sign-up flow and create an account
-
-5. Log-out
-
-6. Go through sign-in flow
-
-7. ThreatMetrix puzzle will pop up after you enter **continue**.
+1. Open the Azure AD B2C tenant.
+2. Under **Policies**, select **User flows**.
+3. Select the created **User Flow**.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (example is JWT).
+6. For **Reply URL**, select the **redirect URL**.
+7. Select **Run user flow**.
+8. Complete the sign-up flow.
+9. Create an account.
+10. Sign out.
+11. Complete the sign-in flow.
+12. Select **Continue**.
+13. The ThreatMetrix puzzle appears.
## Next steps
-For additional information, review the following articles:
- - [Custom policies in Azure AD B2C](./custom-policy-overview.md)- - [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
description: Learn how to integrate Azure AD B2C authentication with Nevis for passwordless authentication -+ Previously updated : 09/13/2022 Last updated : 12/8/2022 # Tutorial to configure Nevis with Azure Active Directory B2C for passwordless authentication
-In this sample tutorial, learn how to extend Azure AD B2C with [Nevis](https://www.nevis.net/en/solution/authentication-cloud) to enable passwordless authentication. Nevis provides a mobile-first, fully branded end-user experience with Nevis Access app to provide strong customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
+In this tutorial, learn to enable passwordless authentication in Azure Active Directory B2C (Azure AD B2C) with the [Nevis](https://www.nevis.net/en/solution/authentication-cloud) Access app, providing strong customer authentication and complying with Payment Services Directive 2 (PSD2) transaction requirements. PSD2 is a European Union (EU) directive, administered by the European Commission (Directorate General Internal Market), that regulates payment services and payment service providers throughout the EU and European Economic Area (EEA).
## Prerequisites To get started, you'll need: -- A Nevis [trial account](https://www.nevis-security.com/aadb2c/)
+- A Nevis demo account
+ - Go to nevis.net for [Nevis + Microsoft Azure AD B2C](https://www.nevis-security.com/aadb2c/) to request an account
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
-- An Azure AD subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription.--- Configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you wish to integrate Nevis into your sign-up policy flow.
+>[!NOTE]
+>To integrate Nevis into your sign-up policy flow, configure the Azure AD B2C environment to use custom policies. <br>See [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
## Scenario description
-In this scenario, add fully branded access app to your back-end application for passwordless authentication. The following components make up the solution:
+Add the branded Access app to your back-end application for passwordless authentication. The following components make up the solution:
-- An Azure AD B2C tenant, with a combined sign-in and sign-up policy to your back-end-- Nevis instance and its REST API to enhance Azure AD B2C-- Your own branded access app
+- **Azure AD B2C tenant** with a combined sign-in and sign-up policy for your back end
+- **Nevis instance** and its REST API to enhance Azure AD B2C
+- Your branded **Access** app
The diagram shows the implementation.
-![High-level password sign-in flow with Azure AD B2C and Nevis](./media/partner-nevis/nevis-architecture-diagram.png)
+ ![Diagram that shows high-level password sign-in flow with Azure AD B2C and Nevis.](./media/partner-nevis/nevis-architecture-diagram.png)
-|Step | Description |
-|:--| :--|
-| 1. | A user attempts to sign in or sign up to an application via Azure AD B2C sign-in and sign-up policy.
-| 2. | During sign-up, the Nevis Access App is registered to the user device using a QR code. A private key is generated on the user device and is used to sign the user requests.
-| 3. | Azure AD B2C uses a RESTful technical profile to start the login with the Nevis solution.
-| 4. | The login request is sent to the access app, either as a push message, QR code or as a deep-link.
-| 5. | The user approves the sign-in attempt with their biometrics. A message is then returned to Nevis, which verifies the login with the stored public key.
-| 6. | Azure AD B2C sends one last request to Nevis to confirm that the login was successfully completed.
-| 7. |Based on the success/failure message from Azure AD B2C user is granted/denied access to the application.
+1. A user attempts sign-in or sign-up to an application with Azure AD B2C policy.
+2. During sign-up, the Access app is registered to the user device using a QR code. A private key is generated on the user device and is used to sign user requests.
+3. Azure AD B2C uses a RESTful technical profile to start sign-in with the Nevis solution.
+4. The sign-in request goes to the Access app as a push message, QR code, or deep link.
+5. The user approves the sign-in attempt with their biometrics. A message goes to Nevis, which verifies sign-in with the stored public key.
+6. Azure AD B2C sends a request to Nevis to confirm sign-in is complete.
+7. The user is granted, or denied, access to the application with an Azure AD B2C success, or failure, message.
## Integrate your Azure AD B2C tenant
-### Onboard to Nevis
+### Request a Nevis account
-[Sign up for a Nevis account](https://www.nevis-security.com/aadb2c/).
-You'll receive two emails:
+1. Go to nevis.net for [Nevis + Microsoft Azure AD B2C](https://www.nevis-security.com/aadb2c/).
+2. Use the form to request an account.
+3. Two emails arrive:
-1. A management account notification
-
-2. A mobile app invitation.
+* Management account notification
+* Mobile app invitation
### Add your Azure AD B2C tenant to your Nevis account
-1. From the Nevis management account trial email, copy your management key to your clipboard.
-
-2. Open https://console.nevis.cloud/ in a browser.
-
-3. Sign in to the management console with your key.
-
-4. Select **Add Instance**
-
-5. Select the newly created instance to open it.
-
-6. In the side navigation bar, select **Custom Integrations**
-
+1. From the management account trial email, copy your management key.
+2. In a browser, open https://console.nevis.cloud/.
+3. Use the management key to sign in to the management console.
+4. Select **Add Instance**.
+5. Select the created instance.
+6. In the side navigation, select **Custom Integrations**.
7. Select **Add custom integration**.-
-8. For Integration Name, enter your Azure AD B2C tenant name.
-
-9. For URL/Domain, enter `https://yourtenant.onmicrosoft.com`
-
+8. For **Integration Name**, enter your Azure AD B2C tenant name.
+9. For **URL/Domain**, enter `https://yourtenant.onmicrosoft.com`.
10. Select **Next**
+11. Select **Done**.
>[!NOTE] >You'll need the Nevis access token later.
-11. Select **Done**.
-
-### Install the Nevis Access app on your phone
-
-1. From the Nevis mobile app trial email, open the **Test Flight app** invitation.
+### Install Nevis Access on your phone
+1. From the Nevis mobile app invitation email, open the **Test Flight app** invitation.
2. Install the app.
-3. Follow the instructions given to install the Nevis Access app.
- ### Integrate Azure AD B2C with Nevis
-1. Open the [Azure portal](https://portal.azure.com/).
-
-2. Switch to your Azure AD B2C tenant. Make sure you've selected the right tenant, as the Azure AD B2C tenant usually is in a separate tenant.
-
-3. In the menu, select **Identity Experience Framework (IEF)**
-
-4. Select **Policy Keys**
-
-5. Select **Add** and create a new key with the following settings:
-
- a. Select **Manual** in Options
-
- b. Set Name to **AuthCloudAccessToken**
-
- c. Paste the previously stored **Nevis Access Token** in the Secret field
-
- d. For the Key Usage select **Encryption**
-
- e. Select **Create**
+1. Go to the [Azure portal](https://portal.azure.com/).
+2. Switch to your Azure AD B2C tenant. Confirm you selected the correct directory: the Azure AD B2C tenant is usually in a separate tenant.
+3. In the menu, select **Identity Experience Framework (IEF)**.
+4. Select **Policy Keys**.
+5. Select **Add**.
+6. Create a new key.
+7. For **Options**, select **Manual**.
+8. For **Name**, enter **AuthCloudAccessToken**.
+9. For **Secret**, paste the stored **Nevis Access Token**.
+10. For **Key Usage**, select **Encryption**.
+11. Select **Create**.
### Configure and upload the nevis.html to Azure blob storage
-1. In your Identity Environment (IDE), go to the [**policy**](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
-
-2. Open the [**nevis.html**](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/nevis.html) file.
-
-3. Replace the **authentication_cloud_url** with the URL of your Nevis Admin console - `https://<instance_id>.mauth.nevis.cloud`.
-
-4. **Save** the changes to the file.
-
-5. Follow the [instructions](./customize-ui-with-html.md#2-create-an-azure-blob-storage-account) and upload the **nevis.html** file to your Azure blob storage.
-
-6. Follow the [instructions](./customize-ui-with-html.md#3-configure-cors) and enable Cross-Origin Resource Sharing (CORS) for this file.
-
-7. Once the upload is complete and CORS is enabled, select the **nevis.html** file in the list.
-
-8. In the **Overview** tab, next to the **URL**, select the **copy link** icon.
-
-9. Open the link in a new browser tab to make sure it displays a grey box.
+1. In your integrated development environment (IDE), go to the [/master/samples/Nevis/policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
+2. Open the [nevis.html](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/nevis.html) file.
+3. Replace the **authentication_cloud_url** with the Nevis Admin console URL `https://<instance_id>.mauth.nevis.cloud`.
+4. Select **Save**.
+5. [Create an Azure Blob storage account](./customize-ui-with-html.md#2-create-an-azure-blob-storage-account).
+6. Upload the nevis.html file to your Azure blob storage.
+7. [Configure CORS](./customize-ui-with-html.md#3-configure-cors).
+8. Enable cross-origin resource sharing (CORS) for the file.
+9. In the list, select the **nevis.html** file.
+10. In the **Overview** tab, next to the **URL**, select the **copy link** icon.
+11. Open the link in a new browser tab to confirm a grey box appears.
>[!NOTE] >You'll need the blob link later.
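To check the uploaded file from a script instead of a browser, a quick sketch like the following confirms the blob is reachable and that CORS answers for your origin; the blob URL and origin values are placeholders.

```python
# Quick check that nevis.html is publicly reachable and that CORS is configured.
# The blob URL and origin below are placeholders - use your own values.
import requests

blob_url = "https://yourstorage.blob.core.windows.net/ui/nevis.html"
origin = "https://yourtenant.b2clogin.com"

response = requests.get(blob_url, headers={"Origin": origin}, timeout=10)
print("Status:", response.status_code)
print("Access-Control-Allow-Origin:", response.headers.get("Access-Control-Allow-Origin"))
```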
-### Customize your TrustFrameworkBase.xml
-
-1. In your IDE, go to the [**policy**](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
-
-2. Open the [**TrustFrameworkBase.xml**](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/TrustFrameworkBase.xml) file.
-
-3. Replace **yourtenant** with your Azure tenant account name in the **TenantId**.
-
-4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyURI**.
-
-5. Replace all **authentication_cloud_url** instances with the URL of your Nevis Admin console
-
-6. **Save** the changes to your file.
+### Customize TrustFrameworkBase.xml
-### Customize your TrustFrameworkExtensions.xml
+1. In your IDE, go to the [/samples/Nevis/policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
+2. Open [TrustFrameworkBase.xml](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/TrustFrameworkBase.xml).
+3. Replace **yourtenant** with your Azure tenant account name in **TenantId**.
+4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyURI**.
+5. Replace all **authentication_cloud_url** instances with the Nevis Admin console URL.
+6. Select **Save**.
-1. In your IDE, go to the [**policy**](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
+### Customize TrustFrameworkExtensions.xml
-2. Open the [**TrustFrameworkExtensions.xml**](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/TrustFrameworkExtensions.xml) file.
+1. In your IDE, go to the [/samples/Nevis/policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
+2. Open [TrustFrameworkExtensions.xml](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/TrustFrameworkExtensions.xml).
+3. Replace **yourtenant** with your Azure tenant account name in **TenantId**.
+4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyURI**.
+5. Under **BasePolicy**, in the **TenantId**, replace **yourtenant** with your Azure tenant account name.
+6. Under **BuildingBlocks**, replace **LoadUri** with the nevis.html blob link URL in your blob storage account.
+7. Select **Save**.
-3. Replace **yourtenant** with your Azure tenant account name in the **TenantId**.
+### Customize SignUpOrSignin.xml
-4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyURI**.
+1. In your IDE, go to the [/samples/Nevis/policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
+2. Open the [SignUpOrSignin.xml](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/SignUpOrSignin.xml) file.
+3. Replace **yourtenant** with your Azure tenant account name in **TenantId**.
+4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyUri**.
+5. Under **BasePolicy**, in **TenantId**, replace **yourtenant** with your Azure tenant account name.
+6. Select **Save**.
-5. Under **BasePolicy**, in the **TenantId**, also replace _yourtenant_ with your Azure tenant account name.
-
-6. Under **BuildingBlocks**, replace **LoadUri** with the blob link URL of your _nevis.html_ in your blob storage account.
-
-7. **Save** the file.
-
-### Customize your SignUpOrSignin.xml
-
-1. In your IDE, go to the [**policy**](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Nevis/policy) folder.
-
-2. Open the [**SignUpOrSignin.xml**](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/SignUpOrSignin.xml) file.
-
-3. Replace **yourtenant** with your Azure tenant account name in the **TenantId**.
-
-4. Replace **yourtenant** with your Azure tenant account name in **PublicPolicyUri**.
-
-5. Under **BasePolicy**, in **TenantId**, also replace **yourtenant** with your Azure tenant account name.
-
-6. **Save** the file.
-
-### Upload your custom policies to Azure AD B2C
-
-1. Open your [Azure AD B2C tenant](https://portal.azure.com/#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview) home.
+### Upload custom policies to Azure AD B2C
+1. In the Azure portal, open your [Azure AD B2C tenant](https://portal.azure.com/#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview).
2. Select **Identity Experience Framework**.- 3. Select **Upload custom policy**.- 4. Select the **TrustFrameworkBase.xml** file you modified.- 5. Select the **Overwrite the custom policy if it already exists** checkbox. 6. Select **Upload**.- 7. Repeat step 5 and 6 for **TrustFrameworkExtensions.xml**.- 8. Repeat step 5 and 6 for **SignUpOrSignin.xml**. ## Test the user flow
-### Test account creation and Nevis Access app setup
-
-1. Open your [Azure AD B2C tenant](https://portal.azure.com/#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview) home.
+### Test account creation and Access setup
+1. In the Azure portal, open your [Azure AD B2C tenant](https://portal.azure.com/#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview).
2. Select **Identity Experience Framework**.-
-3. Scroll down to Custom policies and select **B2C_1A_signup_signin**.
-
+3. Scroll down to **Custom policies** and select **B2C_1A_signup_signin**.
4. Select **Run now**.-
-5. In the pop-up window, select **Sign up now**.
-
+5. In the window, select **Sign up now**.
6. Add your email address.- 7. Select **Send verification code**.-
-8. Copy over the verification code from the email.
-
+8. Copy the verification code from the email.
9. Select **Verify**.-
-10. Fill in the form with your new password and Display name.
-
+10. Fill in the form with your new password and display name.
11. Select **Create**.-
-12. You'll be taken to the QR code scan page.
-
+12. The QR code scan page appears.
13. On your phone, open the **Nevis Access app**.- 14. Select **Face ID**.
+15. The **Authenticator registration was successful** screen appears.
+16. Select **Continue**.
+17. On your phone, authenticate with your face.
+18. The [jwt.ms welcome](https://jwt.ms) page appears with your decoded token details.
-15. When the screen says **Authenticator registration was successful**, select **Continue**.
-
-16. On your phone, authenticate with your face again.
-
-17. You'll be taken to the [jwt.ms](https://jwt.ms) landing page that displays your decoded token details.
-
-### Test the pure passwordless sign-in
+### Test passwordless sign-in
1. Under **Identity Experience Framework**, select the **B2C_1A_signup_signin**.- 2. Select **Run now**.-
-3. In the pop-up window, select **Passwordless Authentication**.
-
+3. In the window, select **Passwordless Authentication**.
4. Enter your email address.- 5. Select **Continue**.-
-6. On your phone, in notifications, select **Nevis Access app notification**.
-
+6. On your phone, in Notifications, select **Nevis Access app notification**.
7. Authenticate with your face.-
-8. You'll be automatically taken to the [jwt.ms](https://jwt.ms) landing page that displays your tokens.
+8. The [jwt.ms welcome](https://jwt.ms) page appears with your tokens.
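jwt.ms decodes the token in the browser. To inspect the same payload locally, a minimal sketch like this (standard library only, no signature validation) prints the decoded claims; the token string is a placeholder you paste in.

```python
# Decode a JWT header and payload locally (no signature validation) to inspect the claims,
# similar to what jwt.ms displays. The token below is a placeholder - paste your own.
import base64
import json

token = "<paste-your-token-here>"

def decode_segment(segment: str) -> dict:
    padded = segment + "=" * (-len(segment) % 4)   # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

header_b64, payload_b64, _signature = token.split(".")
print(json.dumps(decode_segment(header_b64), indent=2))
print(json.dumps(decode_segment(payload_b64), indent=2))
```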
## Next steps
-For additional information, review the following articles
- - [Custom policies in Azure AD B2C](./custom-policy-overview.md)- - [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
description: Learn how to integrate Azure AD B2C authentication with Onfido for document ID and facial biometrics verification -+ Previously updated : 08/03/2020 Last updated : 12/8/2022 # Tutorial for configuring Onfido with Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate Azure AD B2C with [Onfido](https://onfido.com/). Onfido is a document ID and facial biometrics verification app. It allows companies to meet *Know Your Customer* and identity requirements in real time. Onfido uses sophisticated AI-based identity verification, which first verifies a photo ID, then matches it against their facial biometrics. This solution ties a digital identity to their real-world person and provides a safe onboarding experience while reducing fraud.
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with [Onfido](https://onfido.com/), a document ID and facial biometrics verification app. Use it to meet *Know Your Customer* and identity requirements. Onfido uses artificial intelligence (AI) technology that verifies identity by matching a photo ID with facial biometrics. The solution connects a digital identity to a person, provides a reliable onboarding experience, and helps reduce fraud.
-In this sample, we connect Onfido's service in the sign-up or login flow to do identity verification. Informed decisions about which product and service the user can access is made based on Onfido's results.
+In this tutorial, you'll enable the Onfido service to verify identity in the sign-up, or sign-in, flow. Onfido results inform decisions about which products or services the user accesses.
## Prerequisites

To get started, you'll need:

-- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- [An Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription.
-- An Onfido [trial account](https://onfido.com/signup/).
+- An Azure AD subscription
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- [An Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- An Onfido trial account
+ - Go to onfido.com [Contact us](https://onfido.com/signup/) and fill out the form
## Scenario description

The Onfido integration includes the following components:

-- Azure AD B2C tenant – The authorization server, responsible for verifying the user's credentials based on custom policies defined in the tenant. It's also known as the identity provider. It hosts the Onfido client app, which collects the user documents and transmits it to the Onfido API service.
+- **Azure AD B2C tenant** – The authorization server that verifies user credentials based on custom policies defined in the tenant. It's also known as the identity provider (IdP). It hosts the Onfido client app, which collects the user documents and transmits them to the Onfido API service.
+- **Onfido client** – A configurable, JavaScript client document-collection utility deployed in webpages. It checks details such as document size and quality.
+- **Intermediate REST API** – Provides endpoints for the Azure AD B2C tenant to communicate with the Onfido API service. It handles data processing and adheres to security requirements of both.
+- **Onfido API service** – The back-end service, which saves and verifies user documents.
-- Onfido client ΓÇô A configurable JavaScript client document collection utility deployed within other webpages. Collects the documents and does preliminary checks like document size and quality.
+The following architecture diagram shows the implementation.
-- Intermediate REST API ΓÇô Provides endpoints for the Azure AD B2C tenant to communicate with the Onfido API service, handling data processing and adhering to the security requirements of both.
+ ![Onfido architecture diagram.](media/partner-onfido/onfido-architecture-diagram.png)
-- Onfido API service ΓÇô The backend service provided by Onfido, which saves and verifies the documents provided by the user.
-The following architecture diagram shows the implementation.
+1. User signs up to create a new account and enters attributes. Azure AD B2C collects the attributes. Onfido client app hosted in Azure AD B2C checks for the user information.
+2. Azure AD B2C calls the middle layer API and passes the attributes.
+3. Middle layer API collects attributes and converts them to an Onfido API format.
+4. Onfido processes attributes to validate user identification and sends result to the middle layer API.
+5. Middle layer API processes the results and sends relevant information to Azure AD B2C, in JavaScript Object Notation (JSON) format.
+6. Azure AD B2C receives the information. If the response fails, an error message appears. If the response succeeds, the user is authenticated and written into the directory. A sketch of this middle layer API contract follows these steps.
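The sample's middle layer API is a project you publish from Visual Studio; the following is a minimal sketch of the same request/response contract (steps 2 through 6) in Python with Flask. The route name and the `identityVerified` output claim are assumptions for illustration, the Onfido call itself is omitted, and the failure shape follows the error contract Azure AD B2C expects from RESTful technical profiles (HTTP 409 with `version`, `status`, and `userMessage`).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/identity/verify", methods=["POST"])  # hypothetical route name
def verify():
    # Steps 2-3: Azure AD B2C posts the collected attributes as JSON.
    attributes = request.get_json(silent=True) or {}

    # Step 4 (omitted here): forward the attributes to the Onfido API service
    # and evaluate the verification result.
    verified = bool(attributes)  # placeholder for the real Onfido outcome

    if verified:
        # Steps 5-6 (success): return claims for Azure AD B2C to write to the directory.
        return jsonify({"identityVerified": True}), 200

    # Steps 5-6 (failure): Azure AD B2C shows userMessage and doesn't create the account.
    return jsonify({
        "version": "1.0.0",
        "status": 409,
        "userMessage": "We couldn't verify your identity. Please try again.",
    }), 409
```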
-![screenshot for onfido-architecture-diagram](media/partner-onfido/onfido-architecture-diagram.png)
+## Create an Onfido account
-|Step | Description |
-|:--| :--|
-| 1. | User arrives at a login page. User signs-up to create a new account and enters information into the page. Azure AD B2C collects the user attributes. Onfido client app hosted in Azure AD B2C does preliminary checks for the user information.
-| 2. | Azure AD B2C calls the middle layer API and passes on the user attributes.
-| 3. | Middle layer API collects user attributes and transforms it into a format that Onfido API could consume. Then, sends it to Onfido.
-| 4. | Onfido consumes the information and processes it to validate user identification. Then, it returns the result to the middle layer API.
-| 5. | Middle layer API processes the information and sends back relevant information in the correct JSON format to Azure AD B2C.
-| 6. | Azure AD B2C receives information back from middle layer API. If it shows a Failure response, an error message is displayed to user. If it shows a Success response, the user is authenticated and written into the directory.
+1. Create an Onfido account: go to onfido.com [Contact us](https://onfido.com/signup/) and fill out the form.
+2. Create an API key: go to [Get started (API v3.5)](https://documentation.onfido.com/).
-## Onboard with Onfido
+>[!NOTE]
+> You'll need the key later.
-1. To create an Onfido account, contact [Onfido](https://onfido.com/signup/).
+### Onfido documentation
-2. Once an account is created, create an [API key](https://documentation.onfido.com/). Live keys are billable, however, you can use the [sandbox keys for testing](https://documentation.onfido.com/?javascript#sandbox-and-live-differences) the solution. The sandbox keys produce the same result structure as live keys, however, the results are always predetermined. Documents aren't processed or saved.
+Live keys are billable; however, you can use the sandbox keys for testing. For details, see [Sandbox and live differences](https://documentation.onfido.com/?javascript#sandbox-and-live-differences) on onfido.com. The sandbox keys produce the same result structure as live keys; however, results are predetermined. Documents aren't processed or saved.
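To illustrate, a request built with a sandbox token has the same shape as a live request; only the token, and the predetermined result, differ. The following is a minimal sketch; the host, API version, and field names are assumptions to verify against the Onfido API reference.

```python
import requests

# Hypothetical sandbox token; swap in a live token without changing the request shape.
ONFIDO_API_TOKEN = "api_sandbox.xxxxxxxx"

response = requests.post(
    "https://api.onfido.com/v3.5/applicants",            # assumed host and version
    headers={"Authorization": f"Token token={ONFIDO_API_TOKEN}"},
    json={"first_name": "Jane", "last_name": "Doe"},
    timeout=10,
)
print(response.status_code, response.json())
```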
->[!NOTE]
-> You will need the key later.
+For more Onfido documentation, see:
-For more information about Onfido, see [Onfido API documentation](https://documentation.onfido.com) and [Onfido Developer Hub](https://developers.onfido.com).
+* [Onfido API documentation](https://documentation.onfido.com)
+* [Onfido Developer Hub](https://developers.onfido.com)
## Configure Azure AD B2C with Onfido
-### Part 1 - Deploy the API
+### Deploy the API
-- Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/API/Onfido.Api) to an Azure service. The code can be published from Visual Studio, following these [instructions](/visualstudio/deployment/quickstart-deploy-to-azure).-- Set-up CORS, add **Allowed Origin** as https://{your_tenant_name}.b2clogin.com
+1. Deploy the API code to an Azure service. Go to [samples/OnFido-Combined/API/Onfido.Api/](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/API/Onfido.Api). You can publish the code from Visual Studio.
+2. Set up cross-origin resource sharing (CORS).
+3. Add **Allowed Origin** as `https://{your_tenant_name}.b2clogin.com`.
>[!NOTE]
->You'll need the URL of the deployed service to configure Azure AD with the required settings.
+>You'll need the deployed service URL to configure Azure AD.
#### Adding sensitive configuration settings
-Application settings can be configured in the [App service in Azure](../app-service/configure-common.md#configure-app-settings). The App service allows for settings to be securely configured without checking them into a repository. The REST API needs the following settings:
+[Configure app settings](../app-service/configure-common.md#configure-app-settings) in the Azure App service without checking them into a repository.
-| Application setting name | Source | Notes |
-|:-|:-|:-|
-|OnfidoSettings:AuthToken| Onfido Account |
+REST API settings:
-### Part 2 - Deploy the UI
+* **Application setting name**: OnfidoSettings:AuthToken
+* **Source**: Onfido Account
-#### Configure your storage location
+### Deploy the UI
-1. Set up a [blob storage container in your storage account](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-
-2. Store the UI files from the [UI folder](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/UI) to your blob container.
-
-3. Allow CORS access to storage container you created by following these instructions:
-
- a. Go to **Settings** >**Allowed Origin**, enter `https://{your_tenant_name}.b2clogin.com`. Replace your-tenant-name with the name of your Azure AD B2C tenant. For example, https://fabrikam.b2clogin.com. Use all lowercase letters when entering your tenant name.
-
- b. For **Allowed Methods**, select `GET` and `PUT`.
+#### Configure your storage location
- c. Select **Save**.
+1. In the Azure portal, [create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
+2. Store the UI files in [/samples/OnFido-Combined/UI](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/UI), in your blob container.
+3. Allow CORS access to the storage container you created: Go to **Settings** >**Allowed Origin**.
+4. Enter `https://{your_tenant_name}.b2clogin.com`.
+5. Replace `{your_tenant_name}` with your Azure AD B2C tenant name, using lowercase letters. For example, `https://fabrikam.b2clogin.com`.
+6. For **Allowed Methods**, select `GET` and `PUT`.
+7. Select **Save**.
#### Update UI files
-1. In the UI files, go to the folder [**ocean_blue**](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/UI/ocean_blue)
-
+1. In the UI files, go to [samples/OnFido-Combined/UI/ocean_blue](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/UI/ocean_blue).
2. Open each HTML file.
-3. Find and replace `{your-ui-blob-container-url}` with the URL of where your UI **ocean_blue**, **dist**, and **assets** folders are located
-
-4. Find and replace `{your-intermediate-api-url}` with the URL of the intermediate API app service.
+3. Find `{your-ui-blob-container-url}`, and replace it with your UI **ocean_blue**, **dist**, and **assets** folder URLs.
+4. Find `{your-intermediate-api-url}`, and replace it with the intermediate API app service URL.
#### Upload your files
-1. Store the UI files from the UI folder to your blob container.
-
-2. Use [Azure Storage Explorer](../virtual-machines/disks-use-storage-explorer-managed-disks.md) to manage your files and access permissions.
+1. Store the UI folder files in your blob container.
+2. [Use Azure Storage Explorer to manage Azure managed disks](../virtual-machines/disks-use-storage-explorer-managed-disks.md) and access permissions.
-### Part 3 - Configure Azure AD B2C
+### Configure Azure AD B2C
#### Replace the configuration values
-In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/Policies), find the following placeholders and replace with the corresponding values from your instance.
+In [/samples/OnFido-Combined/Policies](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/OnFido-Combined/Policies), find the following placeholders and replace them with the corresponding values from your instance (a scripted replacement sketch follows the table).
-| Placeholder | Replace with value | Example |
-|:|:-|:-|
-| {your_tenant_name} | Your tenant short name | "yourtenant" from yourtenant.onmicrosoft.com |
-| {your_tenantID} | TenantID of your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_IdentityExperienceFramework_appid} | App ID of the IdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_ ProxyIdentityExperienceFramework _appid} | App ID of the ProxyIdentityExperienceFramework app configured in your Azure AD B2C tenant | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_appid} | App ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_tenant_extensions_app_objectid} | Object ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_app_insights_instrumentation_key} | Instrumentation key of your app insights instance*| 01234567-89ab-cdef-0123-456789abcdef|
-|{your_ui_file_base_url}| URL of the location where your UI **ocean_blue**, **dist**, and **assets** folders are located | https://yourstorage.blob.core.windows.net/UI/|
-| {your_app_service_URL} | URL of the app service you've set up | `https://yourapp.azurewebsites.net` |
+|Placeholder|Replace with value|Example|
+||||
+|{your_tenant_name}|Your tenant short name|"yourtenant" from yourtenant.onmicrosoft.com|
+|{your_tenantID}|Your Azure AD B2C TenantID| 01234567-89ab-cdef-0123-456789abcdef|
+|{your_tenant_IdentityExperienceFramework_appid}|IdentityExperienceFramework app App ID configured in your Azure AD B2C tenant|01234567-89ab-cdef-0123-456789abcdef|
+|{your_tenant_ ProxyIdentityExperienceFramework_appid}|ProxyIdentityExperienceFramework app App ID configured in your Azure AD B2C tenant| 01234567-89ab-cdef-0123-456789abcdef|
+|{your_tenant_extensions_appid}|Your tenant storage application App ID| 01234567-89ab-cdef-0123-456789abcdef|
+|{your_tenant_extensions_app_objectid}|Your tenant storage application Object ID| 01234567-89ab-cdef-0123-456789abcdef|
+|{your_app_insights_instrumentation_key}|Your app insights instance* instrumentation key|01234567-89ab-cdef-0123-456789abcdef|
+|{your_ui_file_base_url}|Location URL of your UI folders **ocean_blue**, **dist**, and **assets**| `https://yourstorage.blob.core.windows.net/UI/`|
+|{your_app_service_URL}|The app service URL you set up|`https://yourapp.azurewebsites.net`|
-*App insights can be in a different tenant. This step is optional. Remove the corresponding TechnicalProfiles and OrchestrationSteps if not needed.
+*App insights can be in a different tenant. This step is optional. Remove the corresponding TechnicalProfiles and OrchestrationSteps, if they're not needed.
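If you prefer to script the substitution, the following sketch walks a local copy of the policy files and replaces each placeholder from the table. The folder name and the example values are assumptions for illustration; supply the values from your own tenant.

```python
from pathlib import Path

# Placeholder keys copied from the table above; values here are examples only.
replacements = {
    "{your_tenant_name}": "yourtenant",
    "{your_tenantID}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_tenant_IdentityExperienceFramework_appid}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_tenant_ ProxyIdentityExperienceFramework_appid}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_tenant_extensions_appid}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_tenant_extensions_app_objectid}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_app_insights_instrumentation_key}": "01234567-89ab-cdef-0123-456789abcdef",
    "{your_ui_file_base_url}": "https://yourstorage.blob.core.windows.net/UI/",
    "{your_app_service_URL}": "https://yourapp.azurewebsites.net",
}

policy_dir = Path("Policies")  # assumed local clone of samples/OnFido-Combined/Policies
for policy_file in policy_dir.glob("*.xml"):
    text = policy_file.read_text(encoding="utf-8")
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    policy_file.write_text(text, encoding="utf-8")
    print(f"Updated {policy_file.name}")
```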
-### Part 4 - Configure the Azure AD B2C policy
+### Configure Azure AD B2C policy
-Refer to this [document](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) for instructions on how to set up your Azure AD B2C tenant and configure policies.
+See, [Custom policy starter pack](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) for instructions to set up your Azure AD B2C tenant and configure policies. Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define technical profiles and user journeys.
>[!NOTE]
-> As a best practice, we recommend that customers add consent notification in the attribute collection page. Notify users that information will be send to third-party services for Identity verification.
+>We recommend you add consent notification on the attribute collection page. Notify users that information goes to third-party services for identity verification.
## Test the user flow
-1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
-
-2. Select your previously created **SignUpSignIn**.
-
-3. Select **Run user flow** and select the settings:
-
- a. **Application**: select the registered app (sample is JWT)
+1. Open the Azure AD B2C tenant.
+2. Under **Policies** select **Identity Experience Framework**.
+3. Select your previously created **SignUpSignIn**.
+4. Select **Run user flow**.
+5. For **Application**, select the registered app (example is JWT).
+6. For **Reply URL**, select the **redirect URL**.
+7. Select **Run user flow**.
+8. Complete the sign-up flow.
+9. Create an account.
+10. When the user attribute is created, Onfido is called during the flow.
- b. **Reply URL**: select the **redirect URL**
-
- c. Select **Run user flow**.
-
-4. Go through sign-up flow and create an account
-
-5. Onfido service will be called during the flow, after user attribute is created. If the flow is incomplete, check that user isn't saved in the directory.
+>[!NOTE]
+>If the flow is incomplete, confirm the user isn't saved in the directory.
## Next steps
-For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
description: Learn how to integrate Azure AD B2C authentication with Ping Identity -+ Previously updated : 09/13/2022 Last updated : 12/9/2022 # Tutorial: Configure Ping Identity with Azure Active Directory B2C for secure hybrid access
-In this sample tutorial, learn how to extend Azure Active Directory (AD) B2C with [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More) and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) to enable secure hybrid access.
+In this tutorial, learn how to extend the capabilities of Azure Active Directory B2C (Azure AD B2C) with [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More) and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html). PingAccess provides access to applications and APIs, and a policy engine for authorized user access. PingFederate is an enterprise federation server for user authentication and single sign-on, an authority that permits customers, employees, and partners to access applications from devices. Use them together to enable secure hybrid access (SHA).
-Many existing web properties such as eCommerce sites and web applications that are exposed to the internet are deployed behind a proxy system, sometimes referred as a reverse proxy system. These proxy systems provide various functions including pre-authentication, policy enforcement, and traffic routing. Example use cases include protecting the web application from inbound web traffic and providing a uniform session management across distributed server deployments.
+Many e-commerce sites and web applications exposed to the internet are deployed behind proxy systems, or a reverse-proxy system. These proxy systems pre-authenticate, enforce policy, and route traffic. Typical scenarios include protecting web applications from inbound web traffic and providing a uniform session management across distributed server deployments.
-In most cases, this configuration includes an authentication translation layer that externalizes the authentication from the web application. Reverse proxies in turn provide the authenticated usersΓÇÖ context to the web applications, in a simpler form such as a header value in clear or digest form. In such a configuration, the applications aren't using any industry standard tokens such as Security Assertion Markup Language (SAML), OAuth or Open ID Connect (OIDC), rather depend on the proxy to provide the authentication context and maintain the session with the end-user agent such as browser or the native application. As a service running in a "man-in-the-middle", proxies can provide the ultimate session control. This also means the proxy service should be highly efficient and scalable, not to become a bottleneck or a single point of failure for the applications behind the proxy service. The diagram is a depiction of a typical reverse proxy implementation and flow of the communications.
+Generally, configurations include an authentication translation layer that externalizes the authentication from the web application. Reverse proxies provide the authenticated user context to the web applications, such as a header value in clear or digest form. The applications aren't using industry standard tokens such as Security Assertion Markup Language (SAML), OAuth, or Open ID Connect (OIDC). Instead, the proxy provides authentication context and maintains the session with the end-user agent such as browser or native application. As a service running as a man-in-the-middle, proxies provide significant session control. The proxy service is efficient and scalable, not a bottleneck for applications behind the proxy service. The diagram is a reverse-proxy implementation and communications flow.
-![image shows the reverse proxy implementation](./media/partner-ping/reverse-proxy.png)
+ ![Reverse proxy implementation](./media/partner-ping/reverse-proxy.png)
-If you are in a situation where you want to modernize the identity platform in such configurations, following concerns are raised.
+## Modernization
-- How can the effort for application modernization be decoupled from the identity platform modernization?
+If you want to modernize an identity platform in such configurations, there might be customer concerns:
-- How can a coexistence environment be established with modern and legacy authentication, consuming from the modernized identity service provider?
+- Decouple the effort to modernize applications from modernizing an identity platform
+- Environments with modern and legacy authentication, consuming from the modernized identity service provider
+ - Drive the end-user experience consistency
+ - Provide a single sign-in experience across applications
- a. How to drive the end-user experience consistency?
+In answer to these concerns, the approach in this tutorial is an Azure AD B2C, [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More), and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) integration.
- b. How to provide a single sign-in experience across the coexisting applications?
+## Shared environment
-We discuss an approach to solve such concerns by integrating Azure AD B2C with [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More) and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) technologies.
+A technically viable and cost-effective solution is to configure the reverse proxy system to use the modernized identity system, delegating authentication.
+Proxies support the modern authentication protocols and use the redirect-based (passive) authentication that sends users to the new identity provider (IdP).
-## Coexistence environment
+### Azure AD B2C as an identity provider
-A technically viable simple solution that is also cost effective is to configure the reverse proxy system to use the modernized identity system, delegating the authentication.
-Proxies in this case will support the modern authentication protocols and use the redirect based (passive) authentication that will send user to the new Identity provider (IdP).
+In Azure AD B2C, you define policies that drive user experiences and behaviors, also called user journeys. Each such policy exposes a protocol endpoint that can perform the authentication as an IdP. On the application side, there's no special handling required for certain policies. An application makes a standard authentication request to the protocol-specific authentication endpoint exposed by a policy.
+You can configure Azure AD B2C to share the same issuer across policies or unique issuer for each policy. Each application can point to policies by making a protocol-native authentication request, which drives user behaviors such as sign-in, sign-up, and profile edits. The diagram shows OIDC and SAML application workflows.
-### Azure AD B2C as an Identity provider
+ ![O I D C and S A M L implementation](./media/partner-ping/azure-ad-identity-provider.png)
-Azure AD B2C has the ability to define **policies** that drives different user experiences and behaviors that are also called **user journeys** as orchestrated from the server end. Each such policy exposes a protocol endpoint that can perform the authentication as if it were an IdP. There is no special handling needed on the application side for specific policies. Application simply makes an industry standard authentication request to the protocol-specific authentication endpoint exposed by the policy of interest.
-Azure AD B2C can be configured to share the same issuer across multiple policies or unique issuer for each policy. Each application can point to one or many of these policies by making a protocol native authentication request and drive desired user behaviors such as sign-in, sign-up, and profile edits. The diagram shows OIDC and SAML application workflows.
+The scenario can be challenging for the legacy applications to redirect the user accurately. The access request to the applications might not include the user experience context. In most cases, the proxy layer, or an integrated agent on the web application, intercepts the access request.
-![image shows the OIDC and SAML implementation](./media/partner-ping/azure-ad-identity-provider.png)
+### PingAccess reverse proxy
-While the scenario mentioned works well for modernized applications, it can be challenging for the legacy applications to appropriately redirect the user as the access request to the applications may not include the context for user experience. In most cases the proxy layer or an integrated agent on the web application intercepts the access request.
+You can deploy PingAccess as the reverse proxy. PingAccess intercepts a direct request by being the man-in-the-middle, or as a redirect from an agent running on the web application server.
-### PingAccess as a reverse proxy
+Configure PingAccess with OIDC, OAuth2, or SAML for authentication with an upstream authentication provider. You can configure an upstream IdP for this purpose on the PingAccess server. See the following diagram.
-Many customers have deployed PingAccess as the reverse proxy to play one or many roles as noted earlier in this article. PingAccess can intercept a direct request by way of being the man-in-the-middle or as a redirect that comes from an agent running on the web application server.
+ ![PingAccess with O I D C implementation](./media/partner-ping/authorization-flow.png)
-PingAccess can be configured with OIDC, OAuth2, or SAML to perform authentication against an upstream authentication provider. A single upstream IdP can be configured for this purpose on the PingAccess server. The following diagram shows this configuration.
+In a typical Azure AD B2C deployment with policies exposing IdPs, there's a challenge: PingAccess is configured with one upstream IdP.
-![image shows the PingAccess with OIDC implementation](./media/partner-ping/authorization-flow.png)
+### PingFederate federation proxy
-In a typical Azure AD B2C deployment where multiple policies are exposing multiple **IdPs**, it poses a challenge. Since PingAccess can only be configured with a single upstream IdP.
+You can configure PingFederate as an authentication provider, or a proxy, for upstream IdPs. See the following diagram.
-### PingFederate as a federation proxy
+ ![PingFederate implementation](./media/partner-ping/pingfederate.png)
-PingFederate is an enterprise identity bridge that can be fully configured as an authentication provider or a proxy for other multiple upstream IdPs if needed. The following diagram shows this capability.
+Use this function to contextually, dynamically, or declaratively switch an inbound request to an Azure AD B2C policy. See the following diagram of protocol sequence flow.
-![image shows the PingFederate implementation](./media/partner-ping/pingfederate.png)
-
-This capability can be used to contextually/dynamically or declaratively switch an inbound request to a specific Azure AD B2C policy. The following is a depiction of protocol sequence flow for this configuration.
-
-![image shows the PingAccess and PingFederate workflow](./media/partner-ping/pingaccess-pingfederate-workflow.png)
+ ![image shows the PingAccess and PingFederate workflow](./media/partner-ping/pingaccess-pingfederate-workflow.png)
## Prerequisites

To get started, you'll need:

-- An Azure subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/).
-- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that is linked to your Azure subscription.
-- PingAccess and PingFederate deployed in Docker containers or directly on Azure VMs.
-## Connectivity
-
-Check that the following is connected.
--- **PingAccess server** ΓÇô Able to communicate with the PingFederate server, client browser, OIDC, OAuth well-known and keys discovery published by the Azure AD B2C service and PingFederate server.--- **PingFederate server** ΓÇô Able to communicate with the PingAccess server, client browser, OIDC, OAuth well-known and keys discovery published by the Azure AD B2C service.
+- An Azure subscription
+ - If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- PingAccess and PingFederate deployed in Docker containers or on Azure virtual machines (VMs)
-- **Legacy or header-based AuthN application** ΓÇô Able to communicate to and from PingAccess server.
+## Connectivity and communication
-- **SAML relying party application** ΓÇô Able to reach the browser traffic from the client. Able to access the SAML federation metadata published by the Azure AD B2C service.
+Confirm the following connectivity and communication.
-- **Modern application** ΓÇô Able to reach the browser traffic from the client. Able to access the OIDC, OAuth well-known, and keys discovery published by the Azure AD B2C service.--- **REST API** ΓÇô Able to reach the traffic from a native or web client. Able to access the OIDC, OAuth well-known, and keys discovery published by the Azure AD B2C service.
+- **PingAccess server** – Communicates with the PingFederate server, client browser, OIDC, OAuth well-known and keys discovery published by the Azure AD B2C service and PingFederate server
+- **PingFederate server** – Communicates with the PingAccess server, client browser, OIDC, OAuth well-known and keys discovery published by the Azure AD B2C service
+- **Legacy or header-based AuthN application** – Communicates to and from PingAccess server
+- **SAML relying party application** – Reaches the browser traffic from the client. Accesses the SAML federation metadata published by the Azure AD B2C service.
+- **Modern application** – Reaches the browser traffic from the client. Accesses the OIDC, OAuth well-known, and keys discovery published by the Azure AD B2C service.
+- **REST API** – Reaches the traffic from a native or web client. Accesses the OIDC, OAuth well-known, and keys discovery published by the Azure AD B2C service
## Configure Azure AD B2C
-You can use the basic user flows or advanced Identity enterprise framework (IEF) policies for this purpose. PingAccess generates the metadata endpoint based on the **Issuer** value using the [WebFinger](https://tools.ietf.org/html/rfc7033) based discovery convention.
-To follow this convention, update the Azure AD B2C issuer update using the policy properties in user flows.
+You can use basic user flows or advanced Identity Experience Framework (IEF) policies. PingAccess generates the metadata endpoint, based on the issuer value, by using the [WebFinger](https://tools.ietf.org/html/rfc7033) protocol for discovery convention. To follow this convention, update the Azure AD B2C issuer using user-flow policy properties.
-![image shows the token settings](./media/partner-ping/token-setting.png)
+ ![image shows the token settings](./media/partner-ping/token-setting.png)
-In the advanced policies, this can be configured using the **IssuanceClaimPattern** metadata element to **AuthorityWithTfp** value in the [JWT token issuer technical profile](./jwt-issuer-technical-profile.md).
+In the advanced policies, set the **IssuanceClaimPattern** metadata element to the **AuthorityWithTfp** value in the [JWT token issuer technical profile](./jwt-issuer-technical-profile.md).
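To check that a policy exposes the issuer format PingAccess expects to discover, you can fetch that policy's OpenID Connect discovery document. The following is a minimal sketch, assuming a hypothetical tenant and policy name.

```python
import requests

tenant = "fabrikam"                 # hypothetical tenant short name
policy = "B2C_1A_signup_signin"     # hypothetical policy name

# Each Azure AD B2C policy publishes its own discovery document.
url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
    f"{policy}/v2.0/.well-known/openid-configuration"
)
metadata = requests.get(url, timeout=10).json()

# With the issuer set per policy (AuthorityWithTfp), the issuer value embeds the
# policy name, which is what the WebFinger-style discovery resolves.
print(metadata["issuer"])
print(metadata["jwks_uri"])   # keys discovery consumed by PingAccess/PingFederate
```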
-## Configure PingAccess/PingFederate
+## Configure PingAccess and PingFederate
-The following section covers the required configuration.
-The diagram illustrates the overall user flow for the integration.
+Use the instructions in the following sections to configure PingAccess and PingFederate. See the following diagram of the overall integration user flow.
-![image shows the PingAccess and PingFederate integration](./media/partner-ping/pingaccess.png)
+ ![PingAccess and PingFederate integration](./media/partner-ping/pingaccess.png)
### Configure PingFederate as the token provider
-To configure PingFederate as the token provider for PingAccess, ensure connectivity from PingFederate to PingAccess is established followed by connectivity from PingAccess to PingFederate.
-See [this article](https://docs.pingidentity.com/bundle/pingaccess-61/page/zgh1581446287067.html) for configuration steps.
+To configure PingFederate as the token provider for PingAccess, ensure connectivity from PingFederate to PingAccess. Confirm connectivity from PingAccess to PingFederate.
+
+Go to pingidentity.com for, [Configure PingFederate as the token provider for PingAccess](https://docs.pingidentity.com/bundle/pingaccess-61/page/zgh1581446287067.html).
### Configure a PingAccess application for header-based authentication
-A PingAccess application must be created for the target web application for header-based authentication. Follow these steps.
+Use the following instructions to create a PingAccess application for the target web application, for header-based authentication.
-#### Step 1 ΓÇô Create a virtual host
+#### Create a virtual host
>[!IMPORTANT]
->To configure for this solution, virtual host need to be created for every application. For more information regarding configuration considerations and their impacts, see [Key considerations](https://docs.pingidentity.com/bundle/pingaccess-43/page/reference/pa_c_KeyConsiderations.html).
-
-Follow these steps to create a virtual host:
-
-1. Go to **Settings** > **Access** > **Virtual Hosts**
-
-2. Select **Add Virtual Host**
-
-3. In the Host field, enter the FQDN portion of the Application URL
-
-4. In the Port field, enter **443**
-
-5. Select **Save**
+>Create a virtual host for every application. For more information, see [What can I configure with PingAccess?](https://docs.pingidentity.com/bundle/pingaccess-71/page/kkj1564006722708.html).
-#### Step 2 ΓÇô Create a web session
+To create a virtual host:
-Follow these steps to create a web session:
+1. Go to **Settings** > **Access** > **Virtual Hosts**.
+2. Select **Add Virtual Host**.
+3. For **Host**, enter the FQDN portion of the Application URL.
+4. For **Port**, enter **443**.
+5. Select **Save**.
-1. Navigate to **Settings** > **Access** > **Web Sessions**
+#### Create a web session
-2. Select **Add Web Session**
-
-3. Provide a **Name** for the web session.
-
-4. Select the **Cookie Type**, either **Signed JWT** or **Encrypted JWT**
-
-5. Provide a unique value for the **Audience**
-
-6. In the **Client ID** field, enter the **Azure AD Application ID**
-
-7. In the **Client Secret** field, enter the **Key** you generated for the application in Azure AD.
-
-8. Optional - You can create and use custom claims with the Microsoft Graph API. If you choose to do so, select **Advanced** and deselect the **Request Profile** and **Refresh User Attributes** options. For more information on using custom claims, see [use a custom claim](../active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md).
+To create a web session:
+1. Navigate to **Settings** > **Access** > **Web Sessions**.
+2. Select **Add Web Session**.
+3. Enter a **Name** for the web session.
+4. Select the **Cookie Type**: **Signed JWT** or **Encrypted JWT**.
+5. Enter a unique value for **Audience**.
+6. For **Client ID**, enter the **Azure AD Application ID**.
+7. For **Client Secret**, enter the **Key** you generated for the application in Azure AD.
+8. (Optional) Create and use custom claims with the Microsoft Graph API: Select **Advanced**. Deselect **Request Profile** and **Refresh User Attributes**. Learn more about custom claims: [Header-based single sign-on for on-premises apps with Azure AD App Proxy](../active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md).
9. Select **Save**
-#### Step 3 ΓÇô Create identity mapping
+#### Create identity mapping
>[!NOTE]
->Identity mapping can be used with more than one application if more than one application is expecting the same data in the header.
-
-Follow these steps to create identity mapping:
-
-1. Go to **Settings** > **Access** > **Identity Mappings**
+>You can use identity mapping with more than one application, if they're expecting the same data in the header.
-2. Select **Add Identity Mapping**
-
-3. Specify a **Name**
-
-4. Select the identity-mapping **Type of Header Identity Mapping**
+To create identity mapping:
+1. Go to **Settings** > **Access** > **Identity Mappings**.
+2. Select **Add Identity Mapping**.
+3. Specify a **Name**.
+4. Select the identity-mapping **Type of Header Identity Mapping**.
5. In the **Attribute-Mapping** table, specify the required mappings, as in the following example. (A sketch of an application reading these headers follows these steps.)
- Attribute name | Header name |
+ | Attribute name | Header name |
|||
| 'upn' | x-userprincipalname |
| 'email' | x-email |
Follow these steps to create identity mapping:
6. Select **Save**
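As referenced above, the following is a minimal sketch of how a header-based application behind PingAccess might consume the mapped headers. It assumes a Flask application and uses the example header names from the table; neither the framework nor the route is prescribed by PingAccess.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/profile")
def profile():
    # PingAccess injects these headers per the identity mapping above;
    # the application trusts them instead of handling tokens itself.
    upn = request.headers.get("x-userprincipalname", "unknown")
    email = request.headers.get("x-email", "unknown")
    return f"Signed in as {upn} ({email})"
```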
-#### Step 4 ΓÇô Create a site
+#### Create a site
>[!NOTE]
->In some configurations, it is possible that a site may contain more than one application. A site can be used with more than one application, where appropriate.
-
-Follow these steps to create a site:
-
-1. Go to **Main** > **Sites**
-
-2. Select **Add Site**
-
-3. Specify a **Name** for the site
-
-4. Enter the site **Target**. The target is the hostname:port pair for the server hosting the application. Don't enter the path for the application in this field. For example, an application at https://mysite:9999/AppName will have a target value of mysite: 9999
-
-5. Indicate whether or not the target is expecting secure connections.
+>In some configurations, a site can contain multiple applications. You can use a site with more than one application, when appropriate.
-6. If the target is expecting secure connections, set the Trusted Certificate Group to **Trust Any**.
+To create a site:
-7. Select **Save**
+1. Go to **Main** > **Sites**.
+2. Select **Add Site**.
+3. Enter the site **Name**.
+4. Enter the site **Target**. The target is the hostname:port pair for the server hosting the application. Don't enter the application path in this field. For example, an application at https://mysite:9999/AppName has a target value of mysite:9999.
+5. Indicate if the target expects secure connections.
+6. If the target expects secure connections, set the Trusted Certificate Group to **Trust Any**.
+7. Select **Save**.
-#### Step 5 ΓÇô Create an application
+#### Create an application
-Follow these steps to create an application in PingAccess for each application in Azure that you want to protect.
+To create an application in PingAccess for each application in Azure that you want to protect:
1. Go to **Main** > **Applications**
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Previously updated : 06/17/2022 Last updated : 01/03/2023
In the current preview state, the following limitations apply to email as an alt
* [Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md)
* Legacy authentication such as POP3 and SMTP
* Skype for Business
- * Microsoft 365 Admin Portal
* **Unsupported apps** - Some third-party applications may not work as expected if they assume that the `unique_name` or `preferred_username` claims are immutable or will always match a specific user attribute, such as UPN.
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
If any of your applications use the Azure Active Directory Authentication Librar
## Why switch to MSAL?
-To understand 'Why MSAL?', it's important to first understand the differences between Microsoft identity platform (v2.0) and Azure Active Directory (v1.0) endpoints. The v1.0 endpoint is used by Azure AD Authentication Library (ADAL) while the v2.0 endpoint is used by Microsoft Authentication Library (MSAL). If you've developed apps against the v1.0 endpoint in the past, you're likely using ADAL. Since the v2.0 endpoint has changed significantly enough, the new library (MSAL) was built for the new endpoint entirely.
+If you've developed apps against Azure Active Directory (v1.0) endpoint in the past, you're likely using ADAL. Since Microsoft identity platform (v2.0) endpoint has changed significantly enough, the new library (MSAL) was built for the new endpoint entirely.
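For example, a daemon app that acquired tokens with ADAL against the v1.0 endpoint can move to MSAL and the v2.0 endpoint with a few lines. The following is a minimal sketch in MSAL for Python; the client ID, secret, tenant, and scope are placeholders.

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",   # placeholder app registration
    client_credential="your-client-secret",             # placeholder secret
    authority="https://login.microsoftonline.com/your-tenant-id",
)

# The v2.0 endpoint uses scopes instead of the v1.0 "resource" parameter.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

if "access_token" in result:
    print("Token acquired")
else:
    print(result.get("error"), result.get("error_description"))
```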
The following diagram shows the v2.0 vs v1.0 endpoint experience at a high level, including the app registration experience, SDKs, endpoints, and supported identities.
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Title: Azure AD Account identity provider
-description: Use Azure Active Directory to enable an external user (guest) to sign in to your Azure AD apps with their Azure AD work account.
+ Title: Add Azure AD Account as an identity provider
+description: Use Azure Active Directory to enable an external user (guest) to sign in to your Azure AD apps with their Azure AD work or school account.
# Add Azure Active Directory (Azure AD) as an identity provider for External Identities
-Azure Active Directory is available as an identity provider option for [B2B collaboration](what-is-b2b.md) by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
+Azure Active Directory is available as an identity provider option for [B2B collaboration](what-is-b2b.md#integrate-with-identity-providers) by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
## Guest sign-in using Azure Active Directory accounts
Azure Active Directory is available in the list of External Identities identity
### Azure AD account in the invitation flow
-When you [invite a guest user](add-users-administrator.md) to B2B collaboration, you can specify their Azure AD account as the email address they'll use to sign in.
+When you [invite a guest user](add-users-administrator.md) to B2B collaboration, you can specify their Azure AD account as the **Email address** they'll use to sign in.
:::image type="content" source="media/azure-ad-account/azure-ad-account-invite.png" alt-text="Screenshot of inviting a guest user using the Azure AD account." lightbox="media/azure-ad-account/azure-ad-account-invite.png":::
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
You can roll out these options:
- **Pass-through authentication** + **Seamless SSO** - **Not supported** - **Password hash sync** + **Pass-through authentication** + **Seamless SSO** - **Certificate-based authentication settings**
+- **Azure multifactor authentication**
To configure Staged Rollout, follow these steps:
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
While developers can securely store the secrets in [Azure Key Vault](../../key-v
The following video shows how you can use managed identities:</br>
-> [!VIDEO https://learn.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
+> [!VIDEO https://learn-video.azurefd.net/vod/player?show=on-net&ep=using-azure-managed-identities]
Here are some of the benefits of using managed identities:
active-directory Blinq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).

## Change Logs
-05/25/2022 - **Schema Discovery** feature enabled on this app.
+* 05/25/2022 - **Schema Discovery** feature enabled on this app.
+* 12/22/2022 - The source attribute of **addresses[type eq "work"].formatted** has been changed to **Join("", [streetAddress], IIF(IsPresent([city]),", ",""), [city], IIF(IsPresent([state]),", ",""), [state], IIF(IsPresent([postalCode])," ",""), [postalCode]) --> addresses[type eq "work"].formatted**.
## More resources
active-directory Lucid All Products Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-all-products-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
1. Select **Save**.
-1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Lucid (All Products)**.
+1. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
1. Review the user attributes that are synchronized from Azure AD to Lucid (All Products) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Lucid (All Products) for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Lucid (All Products) API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:lucid:2.0:User:productLicenses.LucidscaleCreator|String||
-1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Lucid (All Products)**.
+1. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
1. Review the group attributes that are synchronized from Azure AD to Lucid (All Products) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Lucid (All Products) for update operations. Select the **Save** button to commit any changes.
active-directory Tripwire Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tripwire-enterprise-tutorial.md
Previously updated : 12/14/2022 Last updated : 01/02/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-1. On the **Set up Tripwire Enterprise** section, copy the appropriate URL(s) based on your requirement.
-
- ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
- ## Configure Tripwire Enterprise SSO
-To configure single sign-on on **Tripwire Enterprise** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Tripwire Enterprise support team](mailto:support@tripwire.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on in Tripwire Enterprise, see the **Using Tripwire Enterprise with SAML Authentication** section in the Tripwire Enterprise Hardening Guide, available for download on the [Tripwire Customer Center](https://tripwireinc.force.com/customers/home). If you require assistance, contact the [Tripwire Enterprise support team](mailto:support@tripwire.com).
### Create Tripwire Enterprise test user
-In this section, you create a user called Britta Simon in Tripwire Enterprise. Work with [Tripwire Enterprise support team](mailto:support@tripwire.com) to add the users in the Tripwire Enterprise platform. Users must be created and activated before you use single sign-on.
+To create a Tripwire Enterprise user, see the **Creating a User Account** section in the Tripwire Enterprise User Guide, available for download on the [Tripwire Customer Center](https://tripwireinc.force.com/customers/home). If you require assistance, contact the [Tripwire Enterprise support team](mailto:support@tripwire.com).
## Test SSO
active-directory Configure Azure Active Directory For Cmmc Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-cmmc-compliance.md
Previously updated : 12/13/2022 Last updated : 1/3/2023
Azure Active Directory helps you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in CMMC, it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
-In CMMC Level 1, there are three domains that have one or more practices related to identity. The three domains are:
+In CMMC Level 1, there are three domains that have one or more practices related to identity:
* Access Control (AC) * Identification and Authentication (IA) * System and Information integrity (SI)
-In CMMC Level 2, there are 13 domains that have one or more practices related to identity. The domains are:
+In CMMC Level 2, there are 13 domains that have one or more practices related to identity:
-* Access Control
-* Audit & Accountability
-* Configuration Management
-* Identification & Authentication
-* Incident Response
-* Maintenance
-* Media Protection
-* Personnel Security
-* Physical Protection
-* Risk Assessment
-* Security Assessment
-* System and Communications Protection
-* System and Information Integrity
+* Access Control
+* Audit & Accountability
+* Configuration Management
+* Identification & Authentication
+* Incident Response
+* Maintenance
+* Media Protection
+* Personnel Security
+* Physical Protection
+* Risk Assessment
+* Security Assessment
+* System and Communications Protection
+* System and Information Integrity
-The remaining articles in this series provide guidance and links to resources, organized by level and domain. For each domain, there's a table with the relevant controls listed and links to content that provides step-by-step guidance to accomplish the practice.
+The remaining articles in this series provide guidance and links to resources, organized by level and domain. For each domain, there's a table with the relevant controls listed, and links to guidance to accomplish the practice.
Learn more:

* DoD CMMC website - [Office of the Under Secretary of Defense for Acquisition & Sustainment Cybersecurity Maturity Model Certification](https://www.acq.osd.mil/cmmc/index.html)
* Microsoft Download Center - [Microsoft Product Placemat for CMMC Level 3 (preview)](https://www.microsoft.com/download/details.aspx?id=102536)

### Next steps

* [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md)
* [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
* [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)
* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 1 Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-1-controls.md
Previously updated : 12/13/2022 Last updated : 1/3/2023
# Configure CMMC Level 1 controls Azure Active Directory meets identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in CMMC, it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
-In CMMC Level 1, there are three domains that have one or more practices related to identity. The three domains are:
+In CMMC Level 1, there are three domains that have one or more practices related to identity:
* Access Control (AC) * Identification and Authentication (IA)
The remainder of this content is organized by domain and associated practices. F
## Access Control domain
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statement and objectives, and Azure AD guidance and recommendations to enable you to meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| AC.L1-3.1.1 | You're responsible for provisioning Azure AD accounts. Provisioning accounts in Azure AD is accomplished from external HR systems, on-premises Active Directory, or directly in the cloud. You configure Conditional Access to only grant access from a known (Registered/Managed) device. Additionally, apply the concept of least privilege when granting application permissions. Where possible, use delegated permission. <br><br>Provision users<br><li>[Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) <li>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<li>[Add or delete users ΓÇô Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br><br>Provision devices<li>[What is device identity in Azure Active Directory](../devices/overview.md)<br><br>Configure applications<li>[QuickStart: Register an app in the Microsoft identity platform](../develop/quickstart-register-app.md)<li>[Microsoft identity platform scopes, permissions, & consent](../develop/v2-permissions-and-consent.md)<li>[Securing service principals in Azure Active Directory](../fundamentals/service-accounts-principal.md)<br><br>Conditional access<li>[What is Conditional Access in Azure Active Directory](../conditional-access/overview.md)<li>[Conditional Access require managed device](../conditional-access/require-managed-devices.md) |
-| AC.L1-3.1.2 | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Provision RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Provision ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](/azure/role-based-access-control/conditions-overview)<li>[What are custom security attributes in Azure AD?](/azure/active-directory/fundamentals/custom-security-attributes-overview)<br><br>Provision groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
-| AC.L1-3.1.20 | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Provision Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](/azure/active-directory/conditional-access/concept-condition-filters-for-devices)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
-| AC.L1-3.1.22 | You're responsible for configuring Privileged Identity Management (PIM) to manage access to systems where posted information is publicly accessible. Require approvals with justification prior to role assignment in PIM. Configure Terms of Use (TOU) for systems where posted information is publicly accessible for recorded acknowledgment of terms and conditions for posting of publicly accessible information.<br><br>Plan PIM deployment<li>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<li>[Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md)<li>[Configure Azure AD role settings in PIM - Require Justification](../privileged-identity-management/pim-how-to-change-default-settings.md) |
+| AC.L1-3.1.1<br><br>**Practice statement:** Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized users are identified;<br>[b.] processes acting on behalf of authorized users are identified;<br>[c.] devices (and other systems) authorized to connect to the system are identified;<br>[d.] system access is limited to authorized users;<br>[e.] system access is limited to processes acting on behalf of authorized users; and<br>[f.] system access is limited to authorized devices (including other systems). | You're responsible for setting up Azure AD accounts, which is accomplished from external HR systems, on-premises Active Directory, or directly in the cloud. You configure Conditional Access to only grant access from a known (Registered/Managed) device. In addition, apply the concept of least privilege when granting application permissions. Where possible, use delegated permission. <br><br>Set up users<br><li>[Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) <li>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<li>[Add or delete users ΓÇô Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br><br>Set up devices<li>[What is device identity in Azure Active Directory](../devices/overview.md)<br><br>Configure applications<li>[QuickStart: Register an app in the Microsoft identity platform](../develop/quickstart-register-app.md)<li>[Microsoft identity platform scopes, permissions, & consent](../develop/v2-permissions-and-consent.md)<li>[Securing service principals in Azure Active Directory](../fundamentals/service-accounts-principal.md)<br><br>Conditional access<li>[What is Conditional Access in Azure Active Directory](../conditional-access/overview.md)<li>[Conditional Access require managed device](../conditional-access/require-managed-devices.md) |
+| AC.L1-3.1.2<br><br>**Practice statement:** Limit information system access to the types of transactions and functions that authorized users are permitted to execute.<br><br>**Objectives:**<br>Determine if:<br>[a.] the types of transactions and functions that authorized users are permitted to execute are defined; and<br>[b.] system access is limited to the defined types of transactions and functions for authorized users. | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Set up RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Set up ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](/azure/role-based-access-control/conditions-overview)<li>[What are custom security attributes in Azure AD?](/azure/active-directory/fundamentals/custom-security-attributes-overview)<br><br>Configure groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
+| AC.L1-3.1.20<br><br>**Practice statement:** Verify and control/limit connections to and use of external information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] connections to external systems are identified;<br>[b.] the use of external systems is identified;<br>[c.] connections to external systems are verified;<br>[d.] the use of external systems is verified;<br>[e.] connections to external systems are controlled and or limited; and<br>[f.] the use of external systems is controlled and or limited. | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Set up Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](/azure/active-directory/conditional-access/concept-condition-filters-for-devices)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
+| AC.L1-3.1.22<br><br>**Practice statement:** Control information posted or processed on publicly accessible information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] individuals authorized to post or process information on publicly accessible systems are identified;<br>[b.] procedures to ensure FCI isn't posted or processed on publicly accessible systems are identified;<br>[c.] a review process is in place prior to posting of any content to publicly accessible systems; and<br>[d.] content on publicly accessible systems is reviewed to ensure that it doesn't include federal contract information (FCI). | You're responsible for configuring Privileged Identity Management (PIM) to manage access to systems where posted information is publicly accessible. Require approvals with justification prior to role assignment in PIM. Configure Terms of Use (TOU) for systems where posted information is publicly accessible for recorded acknowledgment of terms and conditions for posting of publicly accessible information.<br><br>Plan PIM deployment<li>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<li>[Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md)<li>[Configure Azure AD role settings in PIM - Require Justification](../privileged-identity-management/pim-how-to-change-default-settings.md) |
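The AC.L1-3.1.1 and AC.L1-3.1.20 guidance above relies on Conditional Access policies that grant access only from known, managed devices. As a minimal sketch of automating that configuration, the following Python snippet creates a report-only Conditional Access policy with the Microsoft Graph conditional access API. It assumes an app registration that has been granted the Policy.ReadWrite.ConditionalAccess application permission; the tenant, client ID, and secret values are placeholders you'd replace with your own (prefer a certificate or Key Vault over an inline secret in production).

```python
import msal
import requests

# Placeholders - replace with your tenant and app registration values.
TENANT_ID = "contoso.onmicrosoft.com"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxx"  # For testing only; prefer certificates or Key Vault.

# Acquire an app-only Microsoft Graph token with the client credentials flow.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Report-only policy: require a compliant or hybrid Azure AD joined device for all users and apps.
policy = {
    "displayName": "CMMC AC.L1-3.1.1 - Require managed device (report-only)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers=headers,
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode lets you review the sign-in impact before enforcing the policy, which is the commonly recommended rollout pattern for Conditional Access.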
## Identification and Authentication (IA) domain
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IA.L1-3.5.1 | Azure AD uniquely identifies users, processes (service principal/workload identities), and devices via the ID property on the respective directory objects. You can filter log files to help with your assessment using the following links. Use the following reference to meet assessment objectives.<br><br>Filtering logs by user properties<li>[User resource type: ID Property](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br><br>Filtering logs by service properties<li>[ServicePrincipal resource type: ID Property](/graph/api/resources/serviceprincipal?view=graph-rest-1.0&preserve-view=true)<br><br>Filtering logs by device properties<li>[Device resource type: ID Property](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-IA.L1-3.5.2 | Azure AD uniquely authenticates or verifies each user, process acting on behalf of user, or device as a prerequisite to system access. Use the following reference to meet assessment objectives.<br><br>Provision user accounts<li>[What is Azure Active Directory authentication?](../authentication/overview-authentication.md)<br><br>[Configure Azure Active Directory to meet NIST authenticator assurance levels](../standards/nist-overview.md)<br><br>Provision service principal accounts<li>[Service principal authentication](../fundamentals/service-accounts-principal.md)<br><br>Provision Device accounts<li>[What is a device identity?](../devices/overview.md)<li>[How it works: Device registration](../devices/device-registration-how-it-works.md)<li>[What is a Primary Refresh Token?](../devices/concept-primary-refresh-token.md)<li>[What does the Primary Refresh Token (PRT) contain?](/azure/active-directory/devices/concept-primary-refresh-token#what-does-the-prt-contain)|
+| IA.L1-3.5.1<br><br>**Practice statement:** Identify information system users, processes acting on behalf of users, or devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] system users are identified;<br>[b.] processes acting on behalf of users are identified; and<br>[c.] devices accessing the system are identified. | Azure AD uniquely identifies users, processes (service principal/workload identities), and devices via the ID property on the respective directory objects. You can filter log files to help with your assessment using the following links. Use the following reference to meet assessment objectives.<br><br>Filtering logs by user properties<li>[User resource type: ID Property](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br><br>Filtering logs by service properties<li>[ServicePrincipal resource type: ID Property](/graph/api/resources/serviceprincipal?view=graph-rest-1.0&preserve-view=true)<br><br>Filtering logs by device properties<li>[Device resource type: ID Property](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
+| IA.L1-3.5.2<br><br>**Practice statement:** Authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite to allowing access to organizational information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the identity of each user is authenticated or verified as a prerequisite to system access;<br>[b.] the identity of each process acting on behalf of a user is authenticated or verified as a prerequisite to system access; and<br>[c.] the identity of each device accessing or connecting to the system is authenticated or verified as a prerequisite to system access. | Azure AD uniquely authenticates or verifies each user, process acting on behalf of a user, or device as a prerequisite to system access. Use the following references to meet assessment objectives; a sketch after this table shows how to query these identities and their sign-in events with Microsoft Graph.<br><br>Set up user accounts<li>[What is Azure Active Directory authentication?](../authentication/overview-authentication.md)<br><br>[Configure Azure Active Directory to meet NIST authenticator assurance levels](../standards/nist-overview.md)<br><br>Set up service principal accounts<li>[Service principal authentication](../fundamentals/service-accounts-principal.md)<br><br>Set up device accounts<li>[What is a device identity?](../devices/overview.md)<li>[How it works: Device registration](../devices/device-registration-how-it-works.md)<li>[What is a Primary Refresh Token?](../devices/concept-primary-refresh-token.md)<li>[What does the Primary Refresh Token (PRT) contain?](/azure/active-directory/devices/concept-primary-refresh-token#what-does-the-prt-contain) |
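To support the IA.L1-3.5.1 and IA.L1-3.5.2 objectives above (identify users, processes, and devices, then verify their authentication), you can read the immutable `id` property on the directory objects and correlate it with sign-in events. The sketch below assumes you've already acquired an app-only Microsoft Graph token (for example, with the MSAL snippet shown earlier) and exported it as a `GRAPH_TOKEN` environment variable, and that the app has at least the User.Read.All, Application.Read.All, Device.Read.All, and AuditLog.Read.All application permissions. The user principal name is a placeholder.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # Token acquired separately.

# 1. Identify a user by UPN and capture the immutable object ID (IA.L1-3.5.1).
user = requests.get(
    f"{GRAPH}/users/alice@contoso.onmicrosoft.com",  # Placeholder UPN.
    headers=headers,
    params={"$select": "id,displayName,userPrincipalName"},
).json()
print("User object ID:", user["id"])

# 2. Identify service principals (processes acting on behalf of users) and devices.
service_principals = requests.get(
    f"{GRAPH}/servicePrincipals", headers=headers,
    params={"$select": "id,appId,displayName", "$top": "5"},
).json()["value"]
devices = requests.get(
    f"{GRAPH}/devices", headers=headers,
    params={"$select": "id,deviceId,displayName", "$top": "5"},
).json()["value"]

# 3. Verify authentication: pull recent sign-in events for that user ID (IA.L1-3.5.2).
signins = requests.get(
    f"{GRAPH}/auditLogs/signIns",
    headers=headers,
    params={"$filter": f"userId eq '{user['id']}'", "$top": "10"},
).json()["value"]
for event in signins:
    print(event["createdDateTime"], event["appDisplayName"], event["status"]["errorCode"])
```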
## System and Information Integrity (SI) domain
-The following table provides a list of control IDs and associated responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement | Azure AD guidance and recommendations |
| - | - |
-| SI.L1-3.14.1<br><br>SI.L1-3.14.2<br><br>SI.L1-3.14.4<br><br>SI.L1-3.14.5 | **Consolidated Guidance for legacy managed devices**<br>Configure conditional access to require Hybrid Azure AD joined device. For devices that are joined to an on-premises AD, it's assumed that the control over these devices is enforced using management solutions such as Configuration Manager or group policy (GP). Because there's no method for Azure AD to determine whether any of these methods has been applied to a device, requiring a hybrid Azure AD joined device is a relatively weak mechanism to require a managed device. The administrator judges whether the methods applied to your on-premises domain-joined devices are strong enough to constitute a managed device, if the device is also a Hybrid Azure AD joined device.<br><br>**Consolidated guidance for cloud-managed (or co-management) devices**<br>Configure conditional access to require a device to be marked as compliant, the strongest form to request a managed device. This option requires a device to be registered with Azure AD, and to be marked as compliant by Intune or third-party mobile device management (MDM) system that manages Windows 10 devices via Azure AD integration.
+| SI.L1-3.14.1 - Identify, report, and correct information and information system flaws in a timely manner.<br><br>SI.L1-3.14.2 - Provide protection from malicious code at appropriate locations in organizational information systems.<br><br>SI.L1-3.14.4 - Update malicious code protection mechanisms when new releases are available.<br><br>SI.L1-3.14.5 - Perform periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed. | **Consolidated guidance for legacy managed devices**<br>Configure Conditional Access to require a hybrid Azure AD joined device. For devices joined to an on-premises AD, it's assumed that the control over these devices is enforced using management solutions such as Configuration Manager or group policy (GP). Because there's no method for Azure AD to determine whether any of these methods has been applied to a device, requiring a hybrid Azure AD joined device is a relatively weak mechanism to require a managed device. The administrator judges whether the methods applied to your on-premises domain-joined devices are strong enough to constitute a managed device, if the device is also a hybrid Azure AD joined device.<br><br>**Consolidated guidance for cloud-managed (or co-managed) devices**<br>Configure Conditional Access to require a device to be marked as compliant, the strongest way to require a managed device. This option requires the device to be registered with Azure AD, and to be marked as compliant by Intune or a third-party mobile device management (MDM) system that manages Windows 10 devices via Azure AD integration. A sketch after this table shows one way to review device compliance and join state with Microsoft Graph.
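Because the SI guidance above distinguishes hybrid Azure AD joined devices from devices marked compliant by Intune or another MDM, it can help to periodically review the join and compliance state recorded on the Azure AD device objects. The following sketch is one way to do that with Microsoft Graph; it assumes the same `GRAPH_TOKEN` environment variable as the earlier snippets and the Device.Read.All application permission, and it treats a `trustType` of `ServerAd` as hybrid Azure AD joined (a reasonable but simplified interpretation). Only the first page of results is handled.

```python
import os
import requests

headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # Token acquired separately.

# List devices with the properties Conditional Access evaluates for "managed device" checks.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/devices",
    headers=headers,
    params={"$select": "id,displayName,isCompliant,isManaged,trustType,operatingSystem"},
)
resp.raise_for_status()
devices = resp.json()["value"]  # First page only; follow @odata.nextLink for large tenants.

# Flag devices that are neither marked compliant nor hybrid Azure AD joined (trustType ServerAd).
unmanaged = [
    d for d in devices
    if not d.get("isCompliant") and d.get("trustType") != "ServerAd"
]
for d in unmanaged:
    print(f"{d['displayName']}: compliant={d.get('isCompliant')}, trustType={d.get('trustType')}")
```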
### Next steps * [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md) * [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md) * [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
Previously updated : 12/14/2022 Last updated : 1/3/2023
Azure Active Directory can help you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/), it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
-In CMMC Level 2, there are 13 domains that have one or more practices related to identity. The domains are:
+In CMMC Level 2, there are 13 domains that have one or more practices related to identity:
* Access Control (AC) * Audit & Accountability (AU)
The remainder of this article provides guidance for the Access Control (AC) doma
## Access Control (AC)
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| AC.L2-3.1.3 | Configure Conditional Access policies to control the flow of CUI from trusted locations, trusted devices, approved applications and require app protection policy. For finer grained authorization to CUI, configure app-enforced restrictions(Exchange/SharePoint Online), App Control (with Microsoft Defender for Cloud Apps), Authentication Context. Deploy Azure AD Application Proxy to secure access to on-premises applications.<br>[Location condition in Azure Active Directory Conditional Access ](../conditional-access/location-condition.md)<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require approved client app](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md)<br>[Session controls in Conditional Access policy - Application enforced restrictions](../conditional-access/concept-conditional-access-session.md)<br>[Protect with Microsoft Defender for Cloud Apps Conditional Access App Control](/defender-cloud-apps/proxy-intro-aad)<br>[Cloud apps, actions, and authentication context in Conditional Access policy ](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md)<br><br>**Authentication Context**<br>[Configuring Authentication context & Assign to Conditional Access Policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br><br>**Information Protection**<br>Know and protect your data; help prevent data loss.<br>[Protect your sensitive data with Microsoft Purview](/microsoft-365/compliance/information-protection?view=o365-worldwide&preserve-view=true)<br><br>**Conditional Access**<br>[Conditional Access for Azure information protection (AIP)](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/conditional-access-policies-for-azure-information-protection/ba-p/250357) <br><br>**Application Proxy**<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md) |
-|AC.L2-3.1.4 | Ensuring adequate separation of duties by scoping appropriate access. Configure Entitlement Management Access packages to govern access to applications, groups, Teams and SharePoint sites. Configure Separation of Duties checks within access packages to avoid a user obtaining excessive access. In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. This configuration includes restrictions such that a user of a particular group, or already assigned a different access package, isn't assigned other access packages, by policy.<br><br>Configure administrative units in Azure Active Directory to scope administrative privilege so that administrators with privileged roles are scoped to only have those privileges on limited set of directory objects(users, groups, devices).<br>[What is entitlement management?](../governance/entitlement-management-overview.md)<br>[What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md)<br>[Configure separation of duties for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-incompatible.md)<br>[Administrative units in Azure Active Directory](../roles/administrative-units.md)|
-| AC.L2-3.1.5 | You're responsible for implementing and enforcing the rule of least privilege. This action can be accomplished with Privileged Identity Management for configuring enforcement, monitoring, and alerting. Set requirements and conditions for role membership.<br><br>Once privileged accounts are identified and managed, use [Entitlement Lifecycle Management](../governance/entitlement-management-overview.md) and [Access reviews](../governance/access-reviews-overview.md) to set, maintain and audit adequate access. Use the [MS Graph API](/graph/api/directoryrole-list-members?view=graph-rest-1.0&tabs=http&preserve-view=true) to discover and monitor directory roles.<br><br>**Assign roles**<br>[Assign Azure AD roles in PIM](../privileged-identity-management/pim-how-to-add-role-to-user.md)<br>[Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)<br>[Assign eligible owners and members for privileged access groups](../privileged-identity-management/groups-assign-member-owner.md)<br><br>**Set role settings** <br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Configure Azure resource role settings in PIM](../privileged-identity-management/pim-resource-roles-configure-role-settings.md)<br>[Configure privileged access groups settings in PIM](../privileged-identity-management/groups-role-settings.md)<br><br>**Set up alerts**<br>[Security alerts for Azure AD roles in PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md)<br>[Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md) |
-| AC.L2-3.1.6<br><br>AC.L2-3.1.7 |Requirements in AC.L2-3.1.6 and AC.L2-3.1.7 complement each other. Require separate accounts for privilege and non-privileged use. Configure Privileged Identity Management (PIM) to bring just-in-time(JIT) privileged access and remove standing access. Configure role based conditional access policies to limit access to productivity application for privileged users. For highly privileged users, secure devices as part of the privileged access story. All privileged actions are captured in the Azure AD Audit logs.<br>[Securing privileged access overview](/security/compass/overview)<br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Users and groups in Conditional Access policy](../conditional-access/concept-conditional-access-users-groups.md)<br>[Why are privileged access devices important](/security/compass/privileged-access-devices) |
-| AC.L2-3.1.8 | Enable custom smart lockout settings. Configure lockout threshold and lockout duration in seconds to implement these requirements.<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<br>[Manage Azure AD smart lockout values](../authentication/howto-password-smart-lockout.md) |
-| AC.L2-3.1.9 | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
-| AC.L2-3.1.10 | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).|
-| AC.L2-3.1.11 | Enable Continuous Access Evaluation (CAE) for all supported applications. For application that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in Risk. Use conditional access with Identity protection to allow user to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
-|AC.L2-3.1.12 | In todayΓÇÖs world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to securing this pattern of access to adopt zero trust principals. To meet these controls requirements in a modern cloud world we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) |
-| AC.L2-3.1.13 | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
-| AC.L2-3.1.14 | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session)<br>[Securing privileged access overview](/security/compass/overview) |
-| AC.L2-3.1.15 | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices) |
-| AC.L2-3.1.18 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
-| AC.L2-3.1.19 | **Managed Device**<br>Configure conditional access policies to enforce compliant or HAADJ device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21 | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad)
+| AC.L2-3.1.3<br><br>**Practice statement:** Control the flow of CUI in accordance with approved authorizations.<br><br>**Objectives:**<br>Determine if:<br>[a.] information flow control policies are defined;<br>[b.] methods and enforcement mechanisms for controlling the flow of CUI are defined;<br>[c.] designated sources and destinations (for example, networks, individuals, and devices) for CUI within the system and between interconnected systems are identified;<br>[d.] authorizations for controlling the flow of CUI are defined; and<br>[e.] approved authorizations for controlling the flow of CUI are enforced. | Configure Conditional Access policies to control the flow of CUI from trusted locations, trusted devices, and approved applications, and require app protection policies. For finer-grained authorization to CUI, configure app-enforced restrictions (Exchange/SharePoint Online), App Control (with Microsoft Defender for Cloud Apps), and authentication context. Deploy Azure AD Application Proxy to secure access to on-premises applications.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require approved client app](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md)<br>[Session controls in Conditional Access policy - Application enforced restrictions](../conditional-access/concept-conditional-access-session.md)<br>[Protect with Microsoft Defender for Cloud Apps Conditional Access App Control](/defender-cloud-apps/proxy-intro-aad)<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md)<br><br>**Authentication Context**<br>[Configuring Authentication context & Assign to Conditional Access Policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br><br>**Information Protection**<br>Know and protect your data; help prevent data loss.<br>[Protect your sensitive data with Microsoft Purview](/microsoft-365/compliance/information-protection?view=o365-worldwide&preserve-view=true)<br><br>**Conditional Access**<br>[Conditional Access for Azure information protection (AIP)](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/conditional-access-policies-for-azure-information-protection/ba-p/250357)<br><br>**Application Proxy**<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md) |
+| AC.L2-3.1.4<br><br>**Practice statement:** Separate the duties of individuals to reduce the risk of malevolent activity without collusion.<br><br>**Objectives:**<br>Determine if:<br>[a.] the duties of individuals requiring separation are defined;<br>[b.] responsibilities for duties that require separation are assigned to separate individuals; and<br>[c.] access privileges that enable individuals to exercise the duties that require separation are granted to separate individuals. | Ensure adequate separation of duties by scoping appropriate access. Configure Entitlement Management access packages to govern access to applications, groups, Teams, and SharePoint sites. Configure separation-of-duties checks within access packages to avoid a user obtaining excessive access. In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that needs access through an access package. This configuration includes restrictions such that a user of a particular group, or a user already assigned a different access package, isn't assigned other access packages, by policy.<br><br>Configure administrative units in Azure Active Directory to scope administrative privilege so that administrators with privileged roles only have those privileges on a limited set of directory objects (users, groups, devices).<br>[What is entitlement management?](../governance/entitlement-management-overview.md)<br>[What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md)<br>[Configure separation of duties for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-incompatible.md)<br>[Administrative units in Azure Active Directory](../roles/administrative-units.md)|
+| AC.L2-3.1.5<br><br>**Practice statement:** Employ the principle of least privilege, including specific security functions and privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] access to privileged accounts is authorized in accordance with the principle of least privilege;<br>[c.] security functions are identified; and<br>[d.] access to security functions is authorized in accordance with the principle of least privilege. | You're responsible for implementing and enforcing the rule of least privilege. This action can be accomplished with Privileged Identity Management for configuring enforcement, monitoring, and alerting. Set requirements and conditions for role membership.<br><br>Once privileged accounts are identified and managed, use [Entitlement Lifecycle Management](../governance/entitlement-management-overview.md) and [Access reviews](../governance/access-reviews-overview.md) to set, maintain and audit adequate access. Use the [MS Graph API](/graph/api/directoryrole-list-members?view=graph-rest-1.0&tabs=http&preserve-view=true) to discover and monitor directory roles.<br><br>**Assign roles**<br>[Assign Azure AD roles in PIM](../privileged-identity-management/pim-how-to-add-role-to-user.md)<br>[Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)<br>[Assign eligible owners and members for privileged access groups](../privileged-identity-management/groups-assign-member-owner.md)<br><br>**Set role settings** <br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Configure Azure resource role settings in PIM](../privileged-identity-management/pim-resource-roles-configure-role-settings.md)<br>[Configure privileged access groups settings in PIM](../privileged-identity-management/groups-role-settings.md)<br><br>**Set up alerts**<br>[Security alerts for Azure AD roles in PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md)<br>[Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md) |
+| AC.L2-3.1.6<br><br>**Practice statement:** Use non-privileged accounts or roles when accessing non security functions.<br><br>**Objectives:**<br>Determine if:<br>[a.] non security functions are identified; and <br>[b.] users are required to use non-privileged accounts or roles when accessing non security functions.<br><br>AC.L2-3.1.7<br><br>**Practice statement:** Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged functions are defined;<br>[b.] non-privileged users are defined;<br>[c.] non-privileged users are prevented from executing privileged functions; and<br>[d.] the execution of privileged functions is captured in audit logs. |Requirements in AC.L2-3.1.6 and AC.L2-3.1.7 complement each other. Require separate accounts for privilege and non-privileged use. Configure Privileged Identity Management (PIM) to bring just-in-time(JIT) privileged access and remove standing access. Configure role based conditional access policies to limit access to productivity application for privileged users. For highly privileged users, secure devices as part of the privileged access story. All privileged actions are captured in the Azure AD Audit logs.<br>[Securing privileged access overview](/security/compass/overview)<br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Users and groups in Conditional Access policy](../conditional-access/concept-conditional-access-users-groups.md)<br>[Why are privileged access devices important](/security/compass/privileged-access-devices) |
+| AC.L2-3.1.8<br><br>**Practice statement:** Limit unsuccessful sign-on attempts.<br><br>**Objectives:**<br>Determine if:<br>[a.] the means of limiting unsuccessful sign-on attempts is defined; and<br>[b.] the defined means of limiting unsuccessful sign-on attempts is implemented. | Enable custom smart lockout settings. Configure the lockout threshold and lockout duration in seconds to implement these requirements.<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<br>[Manage Azure AD smart lockout values](../authentication/howto-password-smart-lockout.md) |
+| AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
+| AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).|
+| AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For application that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in Risk. Use conditional access with Identity protection to allow user to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
+| AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely, from unknown or untrusted networks. To secure this pattern of access, it's critical to adopt Zero Trust principles. To meet these control requirements in a modern cloud world, verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks (a sketch after this table shows how to create one with Microsoft Graph). Configure Conditional Access App Control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) |
+| AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
+| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
+| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices.md) |
+| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
+| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce compliant or HAADJ device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
+| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
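Several rows above (AC.L2-3.1.12 and AC.L2-3.1.14 in particular) recommend named locations to delineate internal versus external networks. The following sketch creates a trusted IP-based named location with the Microsoft Graph conditional access API. It assumes the same `GRAPH_TOKEN` environment variable as the earlier snippets, the Policy.ReadWrite.ConditionalAccess application permission, and an example address range (203.0.113.0/24) that you'd replace with your own corporate ranges.

```python
import os
import requests

headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # Token acquired separately.

# Define a trusted named location for the corporate network (example CIDR - replace with yours).
named_location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate network (example)",
    "isTrusted": True,
    "ipRanges": [
        {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}
    ],
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
    headers=headers,
    json=named_location,
)
resp.raise_for_status()
print("Created named location:", resp.json()["id"])
```

Once created, the location can be referenced from Conditional Access policies that block or control access from outside the trusted range, and from session policies in Microsoft Defender for Cloud Apps.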
### Next steps * [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md) * [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
Previously updated : 12/13/2022 Last updated : 1/3/2023
Azure Active Directory helps meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. To be compliant with requirements in [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/), it's the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD) to complete other configurations or processes.
-In CMMC Level 2, there are 13 domains that have one or more practices related to identity. The domains are:
+In CMMC Level 2, there are 13 domains that have one or more practices related to identity:
* Access Control (AC)
* Audit & Accountability (AU)
The following table provides a list of control IDs and associated customer respo
| *Control* | *Guidance* |
| - | - |
-| AU.L2-3.3.1<br><br>AU.L2-3.3.2 | All operations are audited within the Azure AD audit logs. Each audit log entry contains a userΓÇÖs immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AU.L2-3.3.1<br><br>AU.L2-3.3.2 | All operations are audited in the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
| AU.L2-3.3.4 | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
| AU.L2-3.3.6 | Ensure Azure AD events are included in your event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
| AU.L2-3.3.8<br><br>AU.L2-3.3.9 | Azure AD logs are retained by default for 30 days. These logs can't be modified or deleted, and are only accessible to a limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs) |
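Before wiring logs into Microsoft Sentinel or Event Hubs, you can spot-check that audit and sign-in events are being captured by querying Microsoft Graph directly. This is a minimal sketch, not part of the original guidance, and it assumes the signed-in account has the `AuditLog.Read.All` permission.

```bash
# Sketch: read the five most recent Azure AD directory audit events via Microsoft Graph.
az rest --method GET \
  --uri 'https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=5'

# Sketch: read recent sign-in events for the tenant.
az rest --method GET \
  --uri 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=5'
```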
The following table provides a list of control IDs and associated customer respo
| *Control* | *Guidance* |
| - | - |
| CM.L2-3.4.2 | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager (MECM) or group policy objects can also be considered in hybrid deployments, combined with Conditional Access to require a hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
-| CM.L2-3.4.5 | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction above is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow) |
+| CM.L2-3.4.5 | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow) |
| CM.L2-3.4.6 | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) | | CM.L2-3.4.7 | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) | | CM.L2-3.4.8 <br><br>CM.L2-3.4.9 | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device) |
The following table provides a list of control IDs and associated customer respo
### Next steps

* [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
* [Configure additional controls](configure-cmmc-level-2-additional-controls.md)
* [Conditional Access require managed device - Require Hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)
* [Conditional Access require managed device - Require device to be marked as compliant](../conditional-access/require-managed-devices.md)
* [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)
* [Co-management for Windows 10 devices](/mem/configmgr/comanage/overview)
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
Previously updated : 12/13/2022 Last updated : 1/3/2023
-# Configure CMMC Level 2 Identification and Authentication (IA) controls
+# Configure CMMC Level 2 Identification and Authentication (IA) controls
Azure Active Directory helps you meet identity-related practice requirements in each Cybersecurity Maturity Model Certification (CMMC) level. Completing other configurations or processes to be compliant with [CMMC V2.0 level 2](https://cmmc-coe.org/maturity-level-two/) requirements is the responsibility of companies performing work with, and on behalf of, the US Dept. of Defense (DoD).
The remainder of this article provides guidance for the Identification and Autho
## Identification & Authentication
-The following table provides a list of control IDs and associated customer responsibilities and guidance.
+The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations to help you meet these requirements with Azure AD.
-| *Control* | *Guidance* |
+| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IA.L2-3.5.3 | The following are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the above requirement means:<li>All users are required MFA for network/remote access.<li>Only privileged users are required MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a ΓÇ£privileged accountΓÇ¥ and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD Authentication methods that meet AAL2 and above.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods) |
-| IA.L2-3.5.4 | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview) |
-| IA.L2-3.5.5 | All user, group, device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-| IA.L2-3.5.6 | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/user)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
-| IA.L2-3.5.7 <br><br>IA.L2-3.5.8 | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse and complexity requirements use hybrid accounts configured with Password-Hash-Sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5 - Control enhancement (1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9 | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass)<br>[Passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) |
-| IA.L2-3.5.10 | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager(DKM). The encryption keys are stored in Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and unencrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
-|IA.L2-3.5.11 | By default, Azure AD obscures all authenticator feedback. |
+| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts. <br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods.md) |
+| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md) |
+| IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
+| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
+| IA.L2-3.5.7<br><br>**Practice statement:** Enforce a minimum password complexity and change of characters when new passwords are created.<br><br>**Objectives:**<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse, and complexity requirements, use hybrid accounts configured with Password-Hash-Sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5 - Control enhancement (1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
+| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
+| IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager(DKM). The encryption keys are stored in Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and unencrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
+|IA.L2-3.5.11<br><br>**Practice statement:** Obscure feedback of authentication information.<br><br>**Objectives:**<br>Determine if:<br>[a.] authentication information is obscured during the authentication process. | By default, Azure AD obscures all authenticator feedback. |
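For IA.L2-3.5.6, one lightweight way to act on inactive identifiers from a script is sketched below. This isn't the documented procedure; it assumes the caller can read sign-in activity (`AuditLog.Read.All`) and update users, and the user principal name is a placeholder.

```bash
# Sketch: list users with their last sign-in activity via Microsoft Graph,
# then disable an account you've identified as inactive.
az rest --method GET \
  --uri 'https://graph.microsoft.com/v1.0/users?$select=id,userPrincipalName,signInActivity'

# Disable a specific account (placeholder UPN) after the defined inactivity period.
az ad user update --id inactive.user@contoso.com --account-enabled false
```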
### Next steps

* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md)
* [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
Previously updated : 09/13/2022 Last updated : 12/7/2022 # Configure Azure Active Directory to meet identity standards
-In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory.
+In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance requirements for Azure. There are [90 Azure compliance certifications](../../compliance/index.yml), which include many for various regions and countries. Azure has 35 compliance offerings for key industries, including:
-Compliance frameworks can be extremely complex. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance needs in its Azure platform. You can take advantage of more than [90 Azure compliance certifications](../../compliance/index.yml). These compliance offerings include many that are specific to global regions and countries. Azure also offers 35 compliance offerings specific to key industries, including health, government, finance, education, manufacturing, and media.
+* Health
+* Government
+* Finance
+* Education
+* Manufacturing
+* Media
-## Azure compliance provides a head start
+## Azure compliance is a head start
-Compliance is a shared responsibility among Microsoft, cloud service providers (CSPs), and organizations. You can rely on Azure compliance certifications as a basis for your compliance, and then configure Azure Active Directory to meet identity standards.
+Compliance is a shared responsibility for Microsoft, cloud service providers (CSPs), and organizations. Use Azure compliance certifications as a basis for your compliance, and then configure Azure Active Directory to meet identity standards.
+
+CSPs, government agencies, and those who work with them must meet one or more sets of government standards, which can include:
-CSPs, governmental agencies, and those who work with them must often meet stringent standards for one or more governments. These standards can include the following:
* [US Federal Risk and Authorization Management Program (FedRAMP)](/azure/compliance/offerings/offering-fedramp)
-* [National Institute of Standards and Technologies (NIST)](/azure/compliance/offerings/offering-nist-800-53).
+* [National Institute of Standards and Technologies (NIST)](/azure/compliance/offerings/offering-nist-800-53)
+
+CSPs and organizations in industries such as healthcare and finance have standards, such as:
-CSPs and organizations in industries such as healthcare and finance must also meet industry standards, such as:
-* [HIPPA](/azure/compliance/offerings/offering-hipaa-us)
-* [Sorbanes-Oxley (SOX)](/azure/compliance/offerings/offering-sox-us)
+* [Health Insurance Portability and Accountability Act of 1996 (HIPAA)](/azure/compliance/offerings/offering-hipaa-us)
+* [Sarbanes-Oxley Act of 2002 (SOX)](/azure/compliance/offerings/offering-sox-us)
To learn more about supported compliance frameworks, see [Azure compliance offerings](/azure/compliance/offerings/). ## Next steps
-[Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
-
-[Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
+* [Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
+* [Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start
For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
+## Auto upgrade limitations
+
+If you're using Auto-Upgrade, you can no longer upgrade the control plane first and then upgrade the individual node pools. Auto-Upgrade always upgrades the control plane and the node pools together. There's no concept of upgrading the control plane only, and trying to run the command `az aks upgrade --control-plane-only` raises the error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
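The auto-upgrade channel is the setting that controls this behavior. The following is a minimal sketch with placeholder resource names; the channel values (`none`, `patch`, `stable`, `rapid`, `node-image`) follow the AKS auto-upgrade documentation.

```bash
# Sketch: enable the stable auto-upgrade channel on an existing cluster
# (resource group and cluster names are placeholders).
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --auto-upgrade-channel stable

# Check which Kubernetes versions the cluster could move to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster -o table
```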
+ ## Best practices for auto-upgrade The following best practices will help maximize your success when using auto-upgrade:
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
To have a storage volume persist for your workload, you can use a StatefulSet. T
[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [csi-blob-storage-open-source-driver]: https://github.com/kubernetes-sigs/blob-csi-driver [csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
<!-- LINKS - internal --> [compare-access-with-nfs]: ../storage/common/nfs-comparison.md
To have a storage volume persist for your workload, you can use a StatefulSet. T
[csi-storage-driver-overview]: csi-storage-drivers.md [azure-disk-csi-driver]: azure-disk-csi.md [azure-files-csi-driver]: azure-files-csi.md
+[install-azure-cli]: /cli/azure/install_azure_cli
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
The following example demonstrates how to mount a Blob storage container as a pe
kubectl create -f pv-blob-nfs.yaml ```
-3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolume*. For example:
+3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolumeClaim*. For example:
```yml kind: PersistentVolumeClaim
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 12/06/2022 Last updated : 01/03/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
-The Azure Files Container Storage Interface (CSI) driver is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Files shares. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
+The Azure Files Container Storage Interface (CSI) driver is a [CSI specification][csi-specification]-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure file shares. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes.
By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
In addition to the original in-tree driver features, Azure Files CSI driver supp
|Name | Meaning | Available Value | Mandatory | Default value | | | | |
-|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
+|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GiB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`| Yes | `ext4` for Linux| |location | Specify Azure region where Azure storage account will be created. | `eastus`, `westus`, etc. | No | If empty, driver uses the same location name as current AKS cluster.| |resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
In addition to the original in-tree driver features, Azure Files CSI driver supp
| | **Following parameters are only for NFS protocol** | | | |rootSquashType | Specify root squashing behavior on the share. The default is `NoRootSquash` | `AllSquash`, `NoRootSquash`, `RootSquash` | No | |mountPermissions | Mounted folder permissions. The default is `0777`. If set to `0`, driver doesn't perform `chmod` after mount | `0777` | No |
-| | **Following parameters are only for vnet setting, e.g. NFS, private end point** | | |
+| | **Following parameters are only for vnet setting, e.g. NFS, private endpoint** | | |
|vnetResourceGroup | Specify Vnet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. | |vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. | |subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
A storage class is used to define how an Azure file share is created. A storage
* **Premium_ZRS**: Premium zone-redundant storage > [!NOTE]
-> Azure Files supports Azure Premium Storage. The minimum premium file share is 100 GB.
+> Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
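If the built-in classes don't fit your needs, you can define your own. The following is a minimal sketch of a custom Azure Files CSI storage class that uses the `file.csi.azure.com` provisioner and the `skuName` parameter from the table above; the class name and mount options are placeholders to adapt.

```bash
# Sketch: a custom premium Azure Files storage class (names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-premium-example   # hypothetical class name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS                  # premium shares start at 100 GiB
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
EOF
```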
You can request a larger volume for a PVC. Edit the PVC object, and specify a la
> [!NOTE] > A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
-In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100Gi file share. We can confirm that by running:
+In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100 GiB file share. We can confirm that by running:
```bash kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
If your Azure Files resources are protected with a private endpoint, you must cr
* `storageAccount`: The storage account name. * `server`: The FQDN of the storage account's private endpoint (for example, `<storage account name>.privatelink.file.core.windows.net`).
-Create a file named *private-azure-file-sc.yaml*, and then paste the following example manifest in the file. Replace the values for `<resourceGroup>` and `<storageAccountName>`.
+Create a file named `private-azure-file-sc.yaml`, and then paste the following example manifest in the file. Replace the values for `<resourceGroup>` and `<storageAccountName>`.
```yaml apiVersion: storage.k8s.io/v1
The output of the command resembles the following example:
storageclass.storage.k8s.io/private-azurefile-csi created ```
-Create a file named *private-pvc.yaml*, and then paste the following example manifest in the file:
+Create a file named `private-pvc.yaml`, and then paste the following example manifest in the file:
```yaml apiVersion: v1
kubectl apply -f private-pvc.yaml
This option is optimized for random access workloads with in-place data updates and provides full POSIX file system support. This section shows you how to use NFS shares with the Azure File CSI driver on an AKS cluster.
-### Prerequsites
+### Prerequisites
-- Your AKS clusters service principal or managed identity must be added to the Contributor role to the storage account. - Your AKS cluster *Control plane* identity (that is, your AKS cluster name) is added to the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in the resource group hosting the VNet.
+- Your AKS cluster's service principal or managed service identity (MSI) must be added to the Contributor role on the storage account.
+
+> [!NOTE]
+> You can use a private endpoint instead of allowing access to the selected VNet.
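If you prefer to grant that role from the command line, the following is a minimal sketch using placeholder resource names; it looks up the cluster's control-plane managed identity and assigns Contributor on the storage account.

```bash
# Sketch: grant the AKS control-plane identity Contributor on the storage account
# (resource group, cluster, and storage account names are placeholders).
AKS_MI_OBJECT_ID=$(az aks show -g myResourceGroup -n myAKSCluster \
  --query identity.principalId -o tsv)
STORAGE_ID=$(az storage account show -g myResourceGroup -n mystorageaccount \
  --query id -o tsv)

az role assignment create --assignee "$AKS_MI_OBJECT_ID" \
  --role Contributor --scope "$STORAGE_ID"
```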
### Create NFS file share storage class
storageclass.storage.k8s.io/azurefile-csi-nfs created
### Create a deployment with an NFS-backed file share
-You can deploy an example [stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/nfs/statefulset.yaml) that saves timestamps into a file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
+You can deploy an example **stateful set** that saves timestamps into the file `/mnt/azurefile/outfile` on the mounted share with the [kubectl apply][kubectl-apply] command:
```bash
-kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nfs/statefulset.yaml
+kubectl apply -f - <<'EOF'
+
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: statefulset-azurefile
+ labels:
+ app: nginx
+spec:
+ podManagementPolicy: Parallel # default is OrderedReady
+ serviceName: statefulset-azurefile
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: statefulset-azurefile
+ image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
+ command:
+ - "/bin/bash"
+ - "-c"
+ - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 1; done
+ volumeMounts:
+ - name: persistent-storage
+ mountPath: /mnt/azurefile
+ updateStrategy:
+ type: RollingUpdate
+ selector:
+ matchLabels:
+ app: nginx
+ volumeClaimTemplates:
+ - metadata:
+ name: persistent-storage
+ annotations:
+ volume.beta.kubernetes.io/storage-class: azurefile-csi-nfs
+ spec:
+ accessModes: ["ReadWriteMany"]
+ resources:
+ requests:
+ storage: 100Gi
+EOF
``` The output of the command resembles the following example:
accountname.file.core.windows.net:/accountname/pvc-fa72ec43-ae64-42e4-a8a2-55660
``` > [!NOTE]
-> Note that since NFS file share is in Premium account, the minimum file share size is 100GB. If you create a PVC with a small storage size, you might encounter an error similar to the following: *failed to create file share ... size (5)...*.
+> Note that because the NFS file share is in a Premium account, the minimum file share size is 100 GiB. If you create a PVC with a small storage size, you might encounter an error similar to the following: *failed to create file share ... size (5)...*.
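To avoid the size error called out in the note, request at least 100 GiB when creating a claim against the NFS storage class. A minimal sketch follows (the claim name is a placeholder):

```bash
# Sketch: a standalone PVC against the azurefile-csi-nfs class, sized at the
# premium minimum of 100 GiB (claim name is hypothetical).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile-nfs-example
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-nfs
  resources:
    requests:
      storage: 100Gi
EOF
```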
## Windows containers
The output of the commands resembles the following example:
## Next steps - To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver][azure-disk-csi].-- To learn how to use CSI driver for Azure Blob storage (preview), see [Use Azure Blob storage with CSI driver][azure-blob-csi] (preview).
+- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-csi].
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md
Last updated 03/09/2021
# Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS) Building and running applications successfully in Azure Kubernetes Service (AKS) require understanding and implementation of some key considerations, including:+ * Multi-tenancy and scheduler features. * Cluster and pod security.
-* Business continuity and disaster recovery.
-
+* Business continuity and disaster recovery.
The AKS product group, engineering teams, and field teams (including global black belts [GBBs]) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers understand the considerations above and implement the appropriate features. - ## Cluster operator best practices As a cluster operator, work together with application owners and developers to understand their needs. You can then use the following best practices to configure your AKS clusters as needed.
To help understand some of the features and components of these best practices,
## Next steps
-If you need to get started with AKS, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Previously updated : 09/13/2022 Last updated : 12/21/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
* Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)). * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
-* Make sure you have been assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
+* Make sure you've been assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
## Create a Liberty on AKS deployment using the portal
The following steps guide you to create a Liberty runtime on AKS. After completi
1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. 1. Select *East US* as **Region**. 1. Select **Next: Configure cluster**.
-1. This section allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to leverage the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next: Networking**.
+1. This section allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next: Networking**.
1. Next to **Connect to Azure Application Gateway?** select **Yes**. This pane lets you customize the following deployment options. 1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.
- 1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Do not go to production using a self-certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md).
- 1. You can enable cookie based affinity, also known as sticky sessions. We want this enabled for this article, so ensure this option is selected.
+ 1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Don't go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md).
+ 1. You can enable cookie based affinity, also known as sticky sessions. We want sticky sessions enabled for this article, so ensure this option is selected.
![Screenshot of the enable cookie-based affinity checkbox.](./media/howto-deploy-java-liberty-app/enable-cookie-based-affinity.png) 1. Select **Review + create** to validate your selected options. 1. When you see the message **Validation Passed**, select **Create**. The deployment may take up to 20 minutes.
Now that the database and AKS cluster have been created, we can proceed to prepa
## Configure and deploy the sample application
-Follow the steps in this section to deploy the sample application on the Liberty runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin` see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
+Follow the steps in this section to deploy the sample application on the Liberty runtime. These steps use Maven.
### Check out the application
java-app
│ │ ├─ openlibertyapplication-agic.yaml │ ├─ docker/ │ │ ├─ Dockerfile
-│ │ ├─ Dockerfile-local
│ │ ├─ Dockerfile-wlp
-│ │ ├─ Dockerfile-wlp-local
│ ├─ liberty/config/ │ │ ├─ server.xml │ ├─ java/
java-app
The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
-In the *aks* directory, we placed three deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image.
+In the *aks* directory, we placed three deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image. In the *docker* directory, there are two files to create the application image with either Open Liberty or WebSphere Liberty.
In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster. ### Build the project
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code. This allows the sample source code to be easier to use in a wider variety of contexts. These variables are used to also populate `JavaEECafeDB` properties in *server.xml* and in yaml files located in *src/main/aks*.
+Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many variables from the environment. As part of the Maven build, these variables are used to populate values in the YAML files located in *src/main/aks*. You can do something similar for your application outside Maven if you prefer.
```bash cd <path-to-your-repo>/java-app
-# The following variables will be used for deployment file generation
+# The following variables will be used for deployment file generation into target.
export LOGIN_SERVER=<Azure_Container_Registery_Login_Server_URL> export REGISTRY_NAME=<Azure_Container_Registery_Name> export USER_NAME=<Azure_Container_Registery_Username>
mvn clean install
### (Optional) Test your project locally
-Use your local ide, or `liberty:run` command to run and test the project locally before deploying to Azure.
+You can now run and test the project locally before deploying to Azure. For convenience, we use the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html). For your application, you can do something similar using any other mechanism, such as your local IDE. You can also consider using the `liberty:devc` option intended for development with containers. You can read more about `liberty:devc` in the [Liberty docs](https://openliberty.io/docs/latest/development-mode.html#_container_support_for_dev_mode).
-1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system. `liberty:run` will also leverage the environment variables defined in the above step.
-
-1. Start the application in `liberty:run` mode
+1. Start the application using `liberty:run`. `liberty:run` will also use the environment variables defined in the previous step.
```bash
cd <path-to-your-repo>/java-app
Use your local ide, or `liberty:run` command to run and test the project locally
1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
-1. Press `Ctrl+C` to stop `liberty:run` mode.
+1. Press <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop.
### Build image for AKS deployment
-After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+You can now run the `docker build` command to build the image.
```bash
cd <path-to-your-repo>/java-app/target
docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp .
```
+### (Optional) Test the Docker image locally
+
+You can now use the following steps to test the Docker image locally before deploying to Azure.
+
+1. Run the image using the following command. Note we're using the environment variables defined previously.
+
+ ```bash
+ docker run -it --rm -p 9080:9080 \
+ -e DB_SERVER_NAME=${DB_SERVER_NAME} \
+ -e DB_NAME=${DB_NAME} \
+ -e DB_USER=${DB_USER} \
+ -e DB_PASSWORD=${DB_PASSWORD} \
+ javaee-cafe:v1
+ ```
+
+1. Once the container starts, go to `http://localhost:9080/` in your browser to access the application.
+
+1. Press <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop.
+
### Upload image to ACR

Now, we upload the built image to the ACR created in the offer.
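
The upload itself only needs the registry values exported during the build step. The following is a minimal sketch, assuming the `REGISTRY_NAME` and `LOGIN_SERVER` variables from earlier and the locally built `javaee-cafe:v1` image; it isn't the article's exact command sequence.

```bash
# Hedged sketch: log in to the registry, then tag and push the locally built image.
az acr login --name ${REGISTRY_NAME}
docker tag javaee-cafe:v1 ${LOGIN_SERVER}/javaee-cafe:v1
docker push ${LOGIN_SERVER}/javaee-cafe:v1
```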
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[aks-monitor]: monitor-aks.md
[azure-monitor]: ../azure-monitor/containers/containers.md
[azure-logs]: ../azure-monitor/logs/log-analytics-overview.md
-[helm]: /quickstart-helm.md
-[aks-best-practices]: /best-practices.md
+[helm]: quickstart-helm.md
+[aks-best-practices]: best-practices.md
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
The following limitations apply when you integrate KMS etcd encryption with AKS:
* The maximum number of secrets that a cluster enabled with KMS supports is 2,000.
* Bring your own (BYO) Azure Key Vault from another tenant isn't supported.
* With KMS enabled, you can't change the associated Azure Key Vault mode (public, private). To [change the associated key vault mode][changing-associated-key-vault-mode], you need to disable and enable KMS again.
-* If a cluster is enabled KMS with private key vault and not using `VNet integration` tunnel, then stop/start cluster is not allowed.
+* If KMS is enabled on a cluster with a private key vault and the cluster isn't using the `API Server VNet integration` tunnel, then stopping and starting the cluster isn't allowed.
KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private key vault][Enable-KMS-with-private-key-vault].
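
In practice, changing the key vault mode means disabling KMS and enabling it again with the new configuration. The following is a hedged sketch only; the cluster name, resource group, and `$KEY_ID` are placeholders, and the flag names should be confirmed against `az aks update --help` for your CLI version.

```bash
# Hedged sketch: turn KMS off, then back on with the new key vault configuration.
az aks update --name myAKSCluster --resource-group myResourceGroup \
  --disable-azure-keyvault-kms

az aks update --name myAKSCluster --resource-group myResourceGroup \
  --enable-azure-keyvault-kms \
  --azure-keyvault-kms-key-id "$KEY_ID" \
  --azure-keyvault-kms-key-vault-network-access "Public"
```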
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
The feature consists of two parts, management and runtime:
For public preview the following limitations exist:
+- The Authorizations feature only supports service principals and managed identities as access policies.
+- The Authorizations feature only supports `/.default` app-only scopes when acquiring a token for the `https://.../authorizationmanager` audience.
- Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
- Authorizations feature is not supported in National Clouds.
- Authorizations feature is not supported on self-hosted gateways.
For public preview the following limitations exist:
- Maximum configured number of authorizations per authorization provider: 10,000
- Maximum configured number of access policies per authorization: 100
- Maximum requests per minute per service: 250
-- Authorization code PKCE flow with code challenge isn't supported.
-- API documentation is not available yet. Please see [this](https://github.com/Azure/APIManagement-Authorizations) GitHub repository with samples.

### Authorization providers
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
To connect the WordPress app to this new MySQL server, you'll configure a few Wo
To make these changes, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command in Cloud Shell. App settings are case-sensitive and space-separated.

```azurecli-interactive
-az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WORDPRESS_DB_HOST="<mysql-server-name>.mysql.database.azure.com" WORDPRESS_DB_USER="adminuser@<mysql-server-name>" WORDPRESS_DB_PASSWORD="My5up3rStr0ngPaSw0rd!" WORDPRESS_DB_NAME="wordpress" MYSQL_SSL_CA="BaltimoreCyberTrustroot.crt.pem"
+az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WORDPRESS_DB_HOST="<mysql-server-name>.mysql.database.azure.com" WORDPRESS_DB_USER="adminuser" WORDPRESS_DB_PASSWORD="My5up3rStr0ngPaSw0rd!" WORDPRESS_DB_NAME="wordpress" MYSQL_SSL_CA="BaltimoreCyberTrustroot.crt.pem"
```

When the app setting has been created, Cloud Shell shows information similar to the following example:
When the app setting has been created, Cloud Shell shows information similar to
{ "name": "WORDPRESS_DB_USER", "slotSetting": false,
- "value": "adminuser@&lt;mysql-server-name&gt;"
+ "value": "adminuser"
}, { "name": "WORDPRESS_DB_NAME",
When the app setting has been created, Cloud Shell shows information similar to
{ "name": "WORDPRESS_DB_USER", "slotSetting": false,
- "value": "adminuser@&lt;mysql-server-name&gt;"
+ "value": "adminuser"
}, { "name": "WP_REDIS_HOST",
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js]
| [Search][js search readme] | [@azure/maps-search][js search package] | [search samples][js search sample] |
| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
-<!--For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md).-->
+For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md).
## Java
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
Title: Common alert schema for Azure Monitor alerts description: Understand the common alert schema, why you should use it, and how to enable it. Previously updated : 03/14/2019 Last updated : 12/22/2022 # Common alert schema
-This article describes what the common alert schema is, the benefits of using it, and how to enable it.
+The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, activity log, metric, and log alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications.
-## What is the common alert schema?
+A standardized schema can help you minimize the number of integrations, which simplifies the process of managing and maintaining your integrations.
-The common alert schema standardizes the consumption experience for alert notifications in Azure. Today, Azure has three alert types, metric, log, and activity log. Historically, they've had their own email templates and webhook schemas. With the common alert schema, you can now receive alert notifications with a consistent schema.
+The common alert schema provides a consistent structure for:
+- **Email templates**: Use the detailed email template to diagnose issues at a glance. Embedded links to the alert instance on the portal and to the affected resource ensure that you can quickly jump into the remediation process.
+- **JSON structure**: Use the consistent JSON structure to build integrations for all alert types using:
+ - Azure Logic Apps
+ - Azure Functions
+ - Azure Automation runbook
-Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
+The new schema enables a richer alert consumption experience across both the Azure portal and the Azure mobile app.
-- **Essentials**: Standardized fields, common across all alert types, describe what resource the alert is on along with other common alert metadata. Examples include severity or description.-- **Alert context**: These fields describe the cause of the alert, with fields that vary based on the alert type. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert.
+> [!NOTE]
+> Alerts generated by [VM insights](../vm/vminsights-overview.md) do not support the common schema.
+
+## Structure of the common schema
-You might want to route the alert instance to a specific team based on a pivot such as a resource group. The common schema uses the essential fields to provide standardized routing logic for all alert types. The team can use the context fields for their investigation.
+The common schema includes information about the affected resource and the cause of the alert in these sections:
+- **Essentials**: Standardized fields, used by all alert types, that describe the resource affected by the alert and common alert metadata, such as severity or description.
-As a result, you can potentially have fewer integrations, which makes the process of managing and maintaining them a much simpler task. Future alert payload enrichments like customization and diagnostic enrichment will only surface in the common schema.
+ If you want to route alert instances to specific teams based on criteria such as a resource group, you can use the fields in the **Essentials** section to provide routing logic for all alert types. The teams that receive the alert notification can then use the context fields for their investigation.
+- **Alert context**: Fields that vary depending on the type of the alert. The alert context fields describe the cause of the alert. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert.
-## What enhancements does the common alert schema bring?
+## Sample alert payload
-You'll see the benefits of using a common alert schema in your alert notifications. A common alert schema provides these benefits:
+```json
+{
+ "schemaId": "azureMonitorCommonAlertSchema",
+ "data": {
+ "essentials": {
+ "alertId": "/subscriptions/<subscription ID>/providers/Microsoft.AlertsManagement/alerts/b9569717-bc32-442f-add5-83a997729330",
+ "alertRule": "WCUS-R2-Gen2",
+ "severity": "Sev3",
+ "signalType": "Metric",
+ "monitorCondition": "Resolved",
+ "monitoringService": "Platform",
+ "alertTargetIDs": [
+ "/subscriptions/<subscription ID>/resourcegroups/pipelinealertrg/providers/microsoft.compute/virtualmachines/wcus-r2-gen2"
+ ],
+ "configurationItems": [
+ "wcus-r2-gen2"
+ ],
+ "originAlertId": "3f2d4487-b0fc-4125-8bd5-7ad17384221e_PipeLineAlertRG_microsoft.insights_metricAlerts_WCUS-R2-Gen2_-117781227",
+ "firedDateTime": "2019-03-22T13:58:24.3713213Z",
+ "resolvedDateTime": "2019-03-22T14:03:16.2246313Z",
+ "description": "",
+ "essentialsVersion": "1.0",
+ "alertContextVersion": "1.0"
+ },
+ "alertContext": {
+ "properties": null,
+ "conditionType": "SingleResourceMultipleMetricCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "operator": "GreaterThan",
+ "threshold": "25",
+ "timeAggregation": "Average",
+ "dimensions": [
+ {
+ "name": "ResourceId",
+ "value": "3efad9dc-3d50-4eac-9c87-8b3fd6f97e4e"
+ }
+ ],
+ "metricValue": 7.727
+ }
+ ]
+ }
+ }
+ }
+}
+```
-| Action | Enhancements|
+## Essentials fields
+
+| Field | Description|
|:---|:---|
-| Email | A consistent and detailed email template. You can use it to easily diagnose issues at a glance. Embedded deep links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. |
-| Webhook/Azure Logic Apps/Azure Functions/Azure Automation runbook | A consistent JSON structure for all alert types. You can use it to easily build integrations across the different alert types. |
+| alertId | The unique resource ID that identifies the alert instance. |
+| alertRule | The name of the alert rule that generated the alert instance. |
+| severity | The severity of the alert. Possible values are Sev0, Sev1, Sev2, Sev3, or Sev4. |
+| signalType | Identifies the signal on which the alert rule was defined. Possible values are Metric, Log, or Activity Log. |
+| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. |
+| monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. |
+| alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
+| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the telemetry and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li><li>In earlier versions of the log alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. |
+| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. |
+| firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). |
+| resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
+| description | The description, as defined in the alert rule. |
+|essentialsVersion| The version number for the essentials section.|
+|alertContextVersion | The version number for the `alertContext` section. |
++
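
Because the essentials fields are identical for every alert type, routing logic only needs to parse this one section. The following is a minimal sketch of the routing idea described earlier, assuming the common-schema payload was saved to *alert.json* by a webhook receiver; the file name and routing rule are illustrative assumptions, not part of the article.

```bash
# Hedged sketch: extract routing keys from the essentials section of a common-schema payload.
resource_group=$(jq -r '.data.essentials.alertTargetIDs[0]' alert.json | cut -d'/' -f5)
severity=$(jq -r '.data.essentials.severity' alert.json)

# Example rule: page the platform team for Sev0/Sev1 alerts in a specific resource group.
if [[ "$resource_group" == "pipelinealertrg" && "$severity" =~ ^Sev[01]$ ]]; then
  echo "route to platform on-call"
else
  echo "route to default queue"
fi
```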
+## Alert context fields for metric alerts
+
+### Sample metric alert with a static threshold and the monitoringService = `Platform`
+
+```json
+{
+ "alertContext": {
+ "properties": null,
+ "conditionType": "SingleResourceMultipleMetricCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "Microsoft.Compute/virtualMachines",
+ "operator": "GreaterThan",
+ "threshold": "25",
+ "timeAggregation": "Average",
+ "dimensions": [
+ {
+ "name": "ResourceId",
+ "value": "3efad9dc-3d50-4eac-9c87-8b3fd6f97e4e"
+ }
+ ],
+ "metricValue": 31.1105
+ }
+ ],
+ "windowStartTime": "2019-03-22T13:40:03.064Z",
+ "windowEndTime": "2019-03-22T13:45:03.064Z"
+ }
+ }
+}
+```
-The new schema will also enable a richer alert consumption experience across both the Azure portal and the Azure mobile app in the immediate future.
+### Sample metric alert with a dynamic threshold and the monitoringService = Platform
-Learn more about the [schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md).
+```json
+{
+ "alertContext": {
+ "properties": null,
+ "conditionType": "DynamicThresholdCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "alertSensitivity": "High",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": 1,
+ "minFailingPeriodsToAlert": 1
+ },
+ "ignoreDataBefore": null,
+ "metricName": "Egress",
+ "metricNamespace": "microsoft.storage/storageaccounts",
+ "operator": "GreaterThan",
+ "threshold": "47658",
+ "timeAggregation": "Total",
+ "dimensions": [],
+ "metricValue": 50101
+ }
+ ],
+ "windowStartTime": "2021-07-20T05:07:26.363Z",
+ "windowEndTime": "2021-07-20T05:12:26.363Z"
+ }
+ }
+}
+```
+### Sample metric alert for availability tests and the monitoringService = Platform
+
+```json
+{
+ "alertContext": {
+ "properties": null,
+ "conditionType": "WebtestLocationAvailabilityCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "metricName": "Failed Location",
+ "metricNamespace": null,
+ "operator": "GreaterThan",
+ "threshold": "2",
+ "timeAggregation": "Sum",
+ "dimensions": [],
+ "metricValue": 5,
+ "webTestName": "myAvailabilityTest-myApplication"
+ }
+ ],
+ "windowStartTime": "2019-03-22T13:40:03.064Z",
+ "windowEndTime": "2019-03-22T13:45:03.064Z"
+ }
+ }
+}
+```
+
+## Alert context fields for Log alerts
> [!NOTE]
-> The following actions don't support the common alert schema ITSM Connector.
+> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log alerts have these limitations regarding the common schema:
+> - The common schema is not supported for log alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations.
+> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included.
+
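
When the search results aren't embedded, the `LinkToSearchResultsAPI` or `LinkToFilteredSearchResultsAPI` value can be called directly with a bearer token. The following is a hedged sketch that assumes the link has been copied from the alert payload into a `LINK` variable.

```bash
# Hedged sketch: fetch query results from the Log Analytics API using the link in the alert payload.
TOKEN=$(az account get-access-token --resource https://api.loganalytics.io --query accessToken -o tsv)
curl -s -H "Authorization: Bearer ${TOKEN}" "${LINK}"
```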
+### Sample log alert when the monitoringService = Platform
+
+```json
+{
+ "alertContext": {
+ "SearchQuery": "Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer",
+ "SearchIntervalStartTimeUtc": "3/22/2019 1:36:31 PM",
+ "SearchIntervalEndtimeUtc": "3/22/2019 1:51:31 PM",
+ "ResultCount": 2,
+ "LinkToSearchResults": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToFilteredSearchResultsUI": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
+ "LinkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
+ "SeverityDescription": "Warning",
+ "WorkspaceId": "12345a-1234b-123c-123d-12345678e",
+ "SearchIntervalDurationMin": "15",
+ "AffectedConfigurationItems": [
+ "INC-Gen2Alert"
+ ],
+ "SearchIntervalInMinutes": "15",
+ "Threshold": 10000,
+ "Operator": "Less Than",
+ "Dimensions": [
+ {
+ "name": "Computer",
+ "value": "INC-Gen2Alert"
+ }
+ ],
+ "SearchResults": {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "$table",
+ "type": "string"
+ },
+ {
+ "name": "Computer",
+ "type": "string"
+ },
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ }
+ ],
+ "rows": [
+ [
+ "Fabrikam",
+ "33446677a",
+ "2018-02-02T15:03:12.18Z"
+ ],
+ [
+ "Contoso",
+ "33445566b",
+ "2018-02-02T15:16:53.932Z"
+ ]
+ ]
+ }
+ ],
+ "dataSources": [
+ {
+ "resourceId": "/subscriptions/a5ea55e2-7482-49ba-90b3-60e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
+ "tables": [
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "IncludedSearchResults": "True",
+ "AlertType": "Metric measurement"
+ }
+}
+```
+### Sample log alert when the monitoringService = Application Insights
+
+```json
+{
+ "alertContext": {
+ "SearchQuery": "requests | where resultCode == \"500\" | summarize AggregatedValue = Count by bin(Timestamp, 5m), IP",
+ "SearchIntervalStartTimeUtc": "3/22/2019 1:36:33 PM",
+ "SearchIntervalEndtimeUtc": "3/22/2019 1:51:33 PM",
+ "ResultCount": 2,
+ "LinkToSearchResults": "https://portal.azure.com/AnalyticsBlade/subscriptions/12345a-1234b-123c-123d-12345678e/?query=search+*+&timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToFilteredSearchResultsUI": "https://portal.azure.com/AnalyticsBlade/subscriptions/12345a-1234b-123c-123d-12345678e/?query=search+*+&timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
+ "LinkToSearchResultsAPI": "https://api.applicationinsights.io/v1/apps/0MyAppId0/metrics/requests/count",
+ "LinkToFilteredSearchResultsAPI": "https://api.applicationinsights.io/v1/apps/0MyAppId0/metrics/requests/count",
+ "SearchIntervalDurationMin": "15",
+ "SearchIntervalInMinutes": "15",
+ "Threshold": 10000.0,
+ "Operator": "Less Than",
+ "ApplicationId": "8e20151d-75b2-4d66-b965-153fb69d65a6",
+ "Dimensions": [
+ {
+ "name": "IP",
+ "value": "1.1.1.1"
+ }
+ ],
+ "SearchResults": {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "$table",
+ "type": "string"
+ },
+ {
+ "name": "Id",
+ "type": "string"
+ },
+ {
+ "name": "Timestamp",
+ "type": "datetime"
+ }
+ ],
+ "rows": [
+ [
+ "Fabrikam",
+ "33446677a",
+ "2018-02-02T15:03:12.18Z"
+ ],
+ [
+ "Contoso",
+ "33445566b",
+ "2018-02-02T15:16:53.932Z"
+ ]
+ ]
+ }
+ ],
+ "dataSources": [
+ {
+ "resourceId": "/subscriptions/a5ea27e2-7482-49ba-90b3-52e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
+ "tables": [
+ "Heartbeat"
+ ]
+ }
+ ]
+ },
+ "IncludedSearchResults": "True",
+ "AlertType": "Metric measurement"
+ }
+}
+```
+
+### Sample log alert when the monitoringService = Log Alerts V2
+
+> [!NOTE]
+> Log alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
+
+```json
+{
+ "alertContext": {
+ "properties": {
+ "name1": "value1",
+ "name2": "value2"
+ },
+ "conditionType": "LogQueryCriteria",
+ "condition": {
+ "windowSize": "PT10M",
+ "allOf": [
+ {
+ "searchQuery": "Heartbeat",
+ "metricMeasureColumn": "CounterValue",
+ "targetResourceTypes": "['Microsoft.Compute/virtualMachines']",
+ "operator": "LowerThan",
+ "threshold": "1",
+ "timeAggregation": "Count",
+ "dimensions": [
+ {
+ "name": "Computer",
+ "value": "TestComputer"
+ }
+ ],
+ "metricValue": 0.0,
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": 1,
+ "minFailingPeriodsToAlert": 1
+ },
+ "linkToSearchResultsUI": "https://portal.azure.com#@12345a-1234b-123c-123d-12345678e/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%212345a-1234b-123c-123d-12345678e%2FresourceGroups%2FContoso%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2FContoso%22%7D%5D%7D/q/eJzzSE0sKklKTSypUSjPSC1KVQjJzE11T81LLUosSU1RSEotKU9NzdNIAfJKgDIaRgZGBroG5roGliGGxlYmJlbGJnoGEKCpp4dDmSmKMk0A/prettify/1/timespan/2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
+ "linkToFilteredSearchResultsUI": "https://portal.azure.com#@12345a-1234b-123c-123d-12345678e/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%212345a-1234b-123c-123d-12345678e%2FresourceGroups%2FContoso%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2FContoso%22%7D%5D%7D/q/eJzzSE0sKklKTSypUSjPSC1KVQjJzE11T81LLUosSU1RSEotKU9NzdNIAfJKgDIaRgZGBroG5roGliGGxlYmJlbGJnoGEKCpp4dDmSmKMk0A/prettify/1/timespan/2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
+ "linkToSearchResultsAPI": "https://api.loganalytics.io/v1/subscriptions/12345a-1234b-123c-123d-12345678e/resourceGroups/Contoso/providers/Microsoft.Compute/virtualMachines/Contoso/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282020-07-09T13%3A44%3A34.0000000%29..datetime%282020-07-09T13%3A54%3A34.0000000%29%29&timespan=2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
+ "linkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/subscriptions/12345a-1234b-123c-123d-12345678e/resourceGroups/Contoso/providers/Microsoft.Compute/virtualMachines/Contoso/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282020-07-09T13%3A44%3A34.0000000%29..datetime%282020-07-09T13%3A54%3A34.0000000%29%29&timespan=2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z"
+ }
+ ],
+ "windowStartTime": "2020-07-07T13:54:34Z",
+ "windowEndTime": "2020-07-09T13:54:34Z"
+ }
+ }
+}
+```
+
+## Alert context fields for activity log alerts
+
+### Sample activity log alert when the monitoringService = Activity Log - Administrative
+
+```json
+{
+ "alertContext": {
+ "authorization": {
+ "action": "Microsoft.Compute/virtualMachines/restart/action",
+ "scope": "/subscriptions/<subscription ID>/resourceGroups/PipeLineAlertRG/providers/Microsoft.Compute/virtualMachines/WCUS-R2-ActLog"
+ },
+ "channels": "Operation",
+ "claims": "{\"aud\":\"https://management.core.windows.net/\",\"iss\":\"https://sts.windows.net/12345a-1234b-123c-123d-12345678e/\",\"iat\":\"1553260826\",\"nbf\":\"1553260826\",\"exp\":\"1553264726\",\"aio\":\"42JgYNjdt+rr+3j/dx68v018XhuFAwA=\",\"appid\":\"e9a02282-074f-45cf-93b0-50568e0e7e50\",\"appidacr\":\"2\",\"http://schemas.microsoft.com/identity/claims/identityprovider\":\"https://sts.windows.net/12345a-1234b-123c-123d-12345678e/\",\"http://schemas.microsoft.com/identity/claims/objectidentifier\":\"9778283b-b94c-4ac6-8a41-d5b493d03aa3\",\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier\":\"9778283b-b94c-4ac6-8a41-d5b493d03aa3\",\"http://schemas.microsoft.com/identity/claims/tenantid\":\"12345a-1234b-123c-123d-12345678e\",\"uti\":\"v5wYC9t9ekuA2rkZSVZbAA\",\"ver\":\"1.0\"}",
+ "caller": "9778283b-b94c-4ac6-8a41-d5b493d03aa3",
+ "correlationId": "8ee9c32a-92a1-4a8f-989c-b0ba09292a91",
+ "eventSource": "Administrative",
+ "eventTimestamp": "2019-03-22T13:56:31.2917159+00:00",
+ "eventDataId": "161fda7e-1cb4-4bc5-9c90-857c55a8f57b",
+ "level": "Informational",
+ "operationName": "Microsoft.Compute/virtualMachines/restart/action",
+ "operationId": "310db69b-690f-436b-b740-6103ab6b0cba",
+ "status": "Succeeded",
+ "subStatus": "",
+ "submissionTimestamp": "2019-03-22T13:56:54.067593+00:00"
+ }
+}
+```
+
+### Sample activity log alert when the monitoringService = Activity Log - Policy
-## How do I enable the common alert schema?
+```json
+{
+ "alertContext": {
+ "authorization": {
+ "action": "Microsoft.Resources/checkPolicyCompliance/read",
+ "scope": "/subscriptions/<GUID>"
+ },
+ "channels": "Operation",
+ "claims": "{\"aud\":\"https://management.azure.com/\",\"iss\":\"https://sts.windows.net/<GUID>/\",\"iat\":\"1566711059\",\"nbf\":\"1566711059\",\"exp\":\"1566740159\",\"aio\":\"42FgYOhynHNw0scy3T/bL71+xLyqEwA=\",\"appid\":\"<GUID>\",\"appidacr\":\"2\",\"http://schemas.microsoft.com/identity/claims/identityprovider\":\"https://sts.windows.net/<GUID>/\",\"http://schemas.microsoft.com/identity/claims/objectidentifier\":\"<GUID>\",\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier\":\"<GUID>\",\"http://schemas.microsoft.com/identity/claims/tenantid\":\"<GUID>\",\"uti\":\"Miy1GzoAG0Scu_l3m1aIAA\",\"ver\":\"1.0\"}",
+ "caller": "<GUID>",
+ "correlationId": "<GUID>",
+ "eventSource": "Policy",
+ "eventTimestamp": "2019-08-25T11:11:34.2269098+00:00",
+ "eventDataId": "<GUID>",
+ "level": "Warning",
+ "operationName": "Microsoft.Authorization/policies/audit/action",
+ "operationId": "<GUID>",
+ "properties": {
+ "isComplianceCheck": "True",
+ "resourceLocation": "eastus2",
+ "ancestors": "<GUID>",
+ "policies": "[{\"policyDefinitionId\":\"/providers/Microsoft.Authorization/policyDefinitions/<GUID>/\",\"policySetDefinitionId\":\"/providers/Microsoft.Authorization/policySetDefinitions/<GUID>/\",\"policyDefinitionReferenceId\":\"vulnerabilityAssessmentMonitoring\",\"policySetDefinitionName\":\"<GUID>\",\"policyDefinitionName\":\"<GUID>\",\"policyDefinitionEffect\":\"AuditIfNotExists\",\"policyAssignmentId\":\"/subscriptions/<GUID>/providers/Microsoft.Authorization/policyAssignments/SecurityCenterBuiltIn/\",\"policyAssignmentName\":\"SecurityCenterBuiltIn\",\"policyAssignmentScope\":\"/subscriptions/<GUID>\",\"policyAssignmentSku\":{\"name\":\"A1\",\"tier\":\"Standard\"},\"policyAssignmentParameters\":{}}]"
+ },
+ "status": "Succeeded",
+ "subStatus": "",
+ "submissionTimestamp": "2019-08-25T11:12:46.1557298+00:00"
+ }
+}
+```
-Use action groups in the Azure portal or use the REST API to enable the common alert schema. You can enable a new schema at the action level. For example, you must separately opt in for an email action and a webhook action.
+### Sample activity log alert when the monitoringService = Activity Log - Autoscale
+
+```json
+{
+ "alertContext": {
+ "channels": "Admin, Operation",
+ "claims": "{\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn\":\"Microsoft.Insights/autoscaleSettings\"}",
+ "caller": "Microsoft.Insights/autoscaleSettings",
+ "correlationId": "<GUID>",
+ "eventSource": "Autoscale",
+ "eventTimestamp": "2019-08-21T16:17:47.1551167+00:00",
+ "eventDataId": "<GUID>",
+ "level": "Informational",
+ "operationName": "Microsoft.Insights/AutoscaleSettings/Scaleup/Action",
+ "operationId": "<GUID>",
+ "properties": {
+ "description": "The autoscale engine attempting to scale resource '/subscriptions/d<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS' from 9 instances count to 10 instances count.",
+ "resourceName": "/subscriptions/<GUID>/resourceGroups/voiceassistancedemo/providers/Microsoft.Compute/virtualMachineScaleSets/alexademo",
+ "oldInstancesCount": "9",
+ "newInstancesCount": "10",
+ "activeAutoscaleProfile": "{\r\n \"Name\": \"Auto created scale condition\",\r\n \"Capacity\": {\r\n \"Minimum\": \"1\",\r\n \"Maximum\": \"10\",\r\n \"Default\": \"1\"\r\n },\r\n \"Rules\": [\r\n {\r\n \"MetricTrigger\": {\r\n \"Name\": \"Percentage CPU\",\r\n \"Namespace\": \"microsoft.compute/virtualmachinescalesets\",\r\n \"Resource\": \"/subscriptions/<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS\",\r\n \"ResourceLocation\": \"eastus\",\r\n \"TimeGrain\": \"PT1M\",\r\n \"Statistic\": \"Average\",\r\n \"TimeWindow\": \"PT5M\",\r\n \"TimeAggregation\": \"Average\",\r\n \"Operator\": \"GreaterThan\",\r\n \"Threshold\": 0.0,\r\n \"Source\": \"/subscriptions/<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS\",\r\n \"MetricType\": \"MDM\",\r\n \"Dimensions\": [],\r\n \"DividePerInstance\": false\r\n },\r\n \"ScaleAction\": {\r\n \"Direction\": \"Increase\",\r\n \"Type\": \"ChangeCount\",\r\n \"Value\": \"1\",\r\n \"Cooldown\": \"PT1M\"\r\n }\r\n }\r\n ]\r\n}",
+ "lastScaleActionTime": "Wed, 21 Aug 2019 16:17:47 GMT"
+ },
+ "status": "Succeeded",
+ "submissionTimestamp": "2019-08-21T16:17:47.2410185+00:00"
+ }
+}
+```
+
+### Sample activity log alert when the monitoringService = Activity Log - Security
+
+```json
+{
+ "alertContext": {
+ "channels": "Operation",
+ "correlationId": "<GUID>",
+ "eventSource": "Security",
+ "eventTimestamp": "2019-08-26T08:34:14+00:00",
+ "eventDataId": "<GUID>",
+ "level": "Informational",
+ "operationName": "Microsoft.Security/locations/alerts/activate/action",
+ "operationId": "<GUID>",
+ "properties": {
+ "threatStatus": "Quarantined",
+ "category": "Virus",
+ "threatID": "2147519003",
+ "filePath": "C:\\AlertGeneration\\test.eicar",
+ "protectionType": "Windows Defender",
+ "actionTaken": "Blocked",
+ "resourceType": "Virtual Machine",
+ "severity": "Low",
+ "compromisedEntity": "testVM",
+ "remediationSteps": "[\"No user action is necessary\"]",
+ "attackedResourceType": "Virtual Machine"
+ },
+ "status": "Active",
+ "submissionTimestamp": "2019-08-26T09:28:58.3019107+00:00"
+ }
+}
+```
+
+### Sample activity log alert when the monitoringService = ServiceHealth
+
+```json
+{
+ "alertContext": {
+ "authorization": null,
+ "channels": 1,
+ "claims": null,
+ "caller": null,
+ "correlationId": "f3cf2430-1ee3-4158-8e35-7a1d615acfc7",
+ "eventSource": 2,
+ "eventTimestamp": "2019-06-24T11:31:19.0312699+00:00",
+ "httpRequest": null,
+ "eventDataId": "<GUID>",
+ "level": 3,
+ "operationName": "Microsoft.ServiceHealth/maintenance/action",
+ "operationId": "<GUID>",
+ "properties": {
+ "title": "Azure Synapse Analytics Scheduled Maintenance Pending",
+ "service": "Azure Synapse Analytics",
+ "region": "East US",
+ "communication": "<MESSAGE>",
+ "incidentType": "Maintenance",
+ "trackingId": "<GUID>",
+ "impactStartTime": "2019-06-26T04:00:00Z",
+ "impactMitigationTime": "2019-06-26T12:00:00Z",
+ "impactedServices": "[{\"ImpactedRegions\":[{\"RegionName\":\"East US\"}],\"ServiceName\":\"Azure Synapse Analytics\"}]",
+ "impactedServicesTableRows": "<tr>\r\n<td align='center' style='padding: 5px 10px; border-right:1px solid black; border-bottom:1px solid black'>Azure Synapse Analytics</td>\r\n<td align='center' style='padding: 5px 10px; border-bottom:1px solid black'>East US<br></td>\r\n</tr>\r\n",
+ "defaultLanguageTitle": "Azure Synapse Analytics Scheduled Maintenance Pending",
+ "defaultLanguageContent": "<MESSAGE>",
+ "stage": "Planned",
+ "communicationId": "<GUID>",
+ "maintenanceId": "<GUID>",
+ "isHIR": "false",
+ "version": "0.1.1"
+ },
+ "status": "Active",
+ "subStatus": null,
+ "submissionTimestamp": "2019-06-24T11:31:31.7147357+00:00",
+ "ResourceType": null
+ }
+}
+```
+
+### Sample activity log alert when the monitoringService = ResourceHealth
+
+```json
+{
+ "alertContext": {
+ "channels": "Admin, Operation",
+ "correlationId": "<GUID>",
+ "eventSource": "ResourceHealth",
+ "eventTimestamp": "2019-06-24T15:42:54.074+00:00",
+ "eventDataId": "<GUID>",
+ "level": "Informational",
+ "operationName": "Microsoft.Resourcehealth/healthevent/Activated/action",
+ "operationId": "<GUID>",
+ "properties": {
+ "title": "This virtual machine is stopping and deallocating as requested by an authorized user or process",
+ "details": null,
+ "currentHealthStatus": "Unavailable",
+ "previousHealthStatus": "Available",
+ "type": "Downtime",
+ "cause": "UserInitiated"
+ },
+ "status": "Active",
+ "submissionTimestamp": "2019-06-24T15:45:20.4488186+00:00"
+ }
+}
+```
+
+## Alert context fields for Prometheus alerts
+
+### Sample Prometheus alert
+
+```json
+{
+ "alertContext": {
+ "interval": "PT1M",
+ "expression": "sql_up > 0",
+ "expressionValue": "0",
+ "for": "PT2M",
+ "labels": {
+ "Environment": "Prod",
+ "cluster": "myCluster1"
+ },
+ "annotations": {
+ "summary": "alert on SQL availability"
+ },
+ "ruleGroup": "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.AlertsManagement/prometheusRuleGroups/myRuleGroup"
+ }
+}
+```
+
+## Enable the common alert schema
+
+Use action groups in the Azure portal or use the REST API to enable the common alert schema. Schemas are defined at the action level. For example, you must separately enable the schema for an email action and a webhook action.
> [!NOTE]
-> Smart detection alerts support the common schema by default. No opt-in is required.
->
-> Alerts generated by [VM insights](../vm/vminsights-overview.md) currently don't support the common schema.
->
+> Smart detection alerts support the common schema by default. You don't have to enable the common schema for smart detection alerts.
-### Through the Azure portal
+### Enable the common schema in the Azure portal
![Screenshot that shows the common alert schema opt in.](media/alerts-common-schema/portal-opt-in.png)

1. Open any existing action or a new action in an action group.
1. Select **Yes** to enable the common alert schema.
-### Through the Action Groups REST API
+### Enable the common schema using the REST API
+
+You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. In the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call:
+- Set the "useCommonAlertSchema" flag to `true` to enable the common schema.
+- Set the "useCommonAlertSchema" flag to `false` to use the non-common schema for email, webhook, Logic Apps, Azure Functions, or Automation runbook actions.
+
-You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While you make the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to `true` to opt in or `false` to opt out for email, webhook, Logic Apps, Azure Functions, or Automation runbook actions.
+#### Sample REST API call for using the common schema
-For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will:
+The following [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API request:
-- Enable the common alert schema for the email action "John Doe's email."-- Disable the common alert schema for the email action "Jane Smith's email."-- Enable the common alert schema for the webhook action "Sample webhook."
+- Enables the common alert schema for the email action "John Doe's email."
+- Disables the common alert schema for the email action "Jane Smith's email."
+- Enables the common alert schema for the webhook action "Sample webhook."
```json
{
For example, the following request body made to the [create or update](/rest/api
## Next steps

-- [Learn the common alert schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md)
- [Learn how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Previously updated : 12/27/2022 Last updated : 12/28/2022 # Create a new alert rule
Then you define these elements for the resulting alert actions by using:
1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ > [!NOTE]
+ > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule.":::

### [Activity log alert](#tab/activity-log)
Then you define these elements for the resulting alert actions by using:
1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload.
+ > [!NOTE]
+ > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for activity log alerts.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::

### [Resource Health alert](#tab/resource-health)
Then you define these elements for the resulting alert actions by using:
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot that shows the Review and create tab when creating a new alert rule.":::
-## Create a new alert rule by using the CLI
+## Create a new alert rule using the CLI
-You can create a new alert rule by using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
+You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
You can create a new alert rule by using the [Azure CLI](/cli/azure/get-started-
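
For example, a metric alert rule can be created in a single step with `az monitor metrics alert create`. The following is a hedged sketch; the names, IDs, and threshold are placeholders rather than values from this article.

```bash
# Hedged sketch: create a metric alert rule that fires when average CPU exceeds 80 percent.
az monitor metrics alert create \
  --name "cpu-over-80" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/actionGroups/myActionGroup" \
  --description "Alert when average CPU exceeds 80 percent"
```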
-## Create a new alert rule by using PowerShell
--- To create a metric alert rule by using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.-- To create a log alert rule by using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.-- To create an activity log alert rule by using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.-
-## Create an activity log alert rule from the Activity log pane
-
-You can also create an activity log alert on future events similar to an activity log event that already occurred.
-
-1. In the [portal](https://portal.azure.com/), [go to the Activity log pane](../essentials/activity-log.md#view-the-activity-log).
-1. Filter or find the desired event. Then create an alert by selecting **Add activity log alert**.
+## Create a new alert rule with PowerShell
- :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot that shows creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
+- To create a metric alert rule using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.
+- To create a log alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
+- To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
-1. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name that initiated the event, are both included by default in the new alert rule.
+## Create a new alert rule using an ARM template
- If you want to make the alert rule more general, modify the scope and condition accordingly. See steps 3-9 in the section "Create a new alert rule in the Azure portal."
-
-1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
-
-## Create an activity log alert rule by using an ARM template
-
-To create an activity log alert rule by using an Azure Resource Manager template (ARM template), create a `microsoft.insights/activityLogAlerts` resource. Then fill in all related properties.
+You can use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/syntax.md) to configure alert rules consistently in all of your environments.
+1. Create a new resource, using the following resource types:
+ - For metric alerts: `Microsoft.Insights/metricAlerts`
+ - For log alerts: `Microsoft.Insights/scheduledQueryRules`
+ - For activity log, service health, and resource health alerts: `microsoft.Insights/activityLogAlerts`
+ > [!NOTE]
+ > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
+ > - We recommend that you create the metric alert using the same resource group as your target resource.
+1. Copy one of the templates from these sample ARM templates.
+ - For metric alerts: [Resource Manager template samples for metric alert rules](resource-manager-alerts-metric.md)
+ - For log alerts: [Resource Manager template samples for log alert rules](resource-manager-alerts-log.md)
+ - For activity log alerts: [Resource Manager template samples for activity log alert rules](resource-manager-alerts-activity-log.md)
+ - For resource health alerts: [Resource Manager template samples for resource health alert rules](resource-manager-alerts-resource-health.md)
+1. Edit the template file to contain appropriate information for your alert, and save the file as \<your-alert-template-file\>.json.
+1. Edit the corresponding parameters file to customize the alert, and save as \<your-alert-template-file\>.parameters.json.
+1. Set the `metricName` parameter, using one of the values in [Azure Monitor supported metrics](../essentials/metrics-supported.md).
+1. Deploy the template using [PowerShell](../../azure-resource-manager/templates/deploy-powershell.md#deploy-local-template) or the [CLI](../../azure-resource-manager/templates/deploy-cli.md#deploy-local-template).
+
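
As a hedged example of the deployment step, a local template and parameters file saved as described above can be deployed with the Azure CLI; the resource group and file names below are placeholders.

```bash
# Hedged sketch: deploy the edited template and parameters file to a resource group.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file your-alert-template-file.json \
  --parameters @your-alert-template-file.parameters.json
```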
+### Additional properties for activity log alert ARM templates
> [!NOTE]
->The highest level that activity log alerts can be defined is the subscription level. Define the alert to alert per subscription. You can't define an alert on two subscriptions.
+> - Activity log alerts are defined at the subscription level. You can't define a single alert rule on more than one subscription.
+> - It may take up to five minutes for a new activity log alert rule to become active.
-The following fields are the options in the ARM template for the conditions fields. The **Resource Health**, **Advisor** and **Service Health** fields have extra properties fields.
+ARM templates for activity log alerts contain additional properties for the conditions fields. The **Resource Health**, **Advisor**, and **Service Health** conditions have extra properties fields.
|Field |Description |
|---|---|
The following fields are the options in the ARM template for the conditions fiel
|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
|resourceType |The type of the resource that was affected by the event. An example is `Microsoft.Resources/deployments`. |
-This example sets the condition to the **Administrative** category:
-
-```json
-"condition": {
- "allOf": [
- {
- "field": "category",
- "equals": "Administrative"
- },
- {
- "field": "resourceType",
- "equals": "Microsoft.Resources/deployments"
- }
- ]
- }
-
-```
-
-This example template creates an activity log alert rule by using the **Administrative** condition:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "activityLogAlertName": {
- "type": "string",
- "metadata": {
- "description": "Unique name (within the Resource Group) for the Activity log alert."
- }
- },
- "activityLogAlertEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Indicates whether or not the alert is enabled."
- }
- },
- "actionGroupResourceId": {
- "type": "string",
- "metadata": {
- "description": "Resource Id for the Action group."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Insights/activityLogAlerts",
- "apiVersion": "2017-04-01",
- "name": "[parameters('activityLogAlertName')]",
- "location": "Global",
- "properties": {
- "enabled": "[parameters('activityLogAlertEnabled')]",
- "scopes": [
- "[subscription().id]"
- ],
- "condition": {
- "allOf": [
- {
- "field": "category",
- "equals": "Administrative"
- },
- {
- "field": "operationName",
- "equals": "Microsoft.Resources/deployments/write"
- },
- {
- "field": "resourceType",
- "equals": "Microsoft.Resources/deployments"
- }
- ]
- },
- "actions": {
- "actionGroups":
- [
- {
- "actionGroupId": "[parameters('actionGroupResourceId')]"
- }
- ]
- }
- }
- }
- ]
-}
-```
-
-This sample JSON can be saved as, for example, *sampleActivityLogAlert.json*. You can deploy the sample by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md).
- For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
-> [!NOTE]
-> It might take up to five minutes for the new activity log alert rule to become active.
-
-## Create a new activity log alert rule by using the REST API
+## Create an activity log alert rule from the Activity log pane
-The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell by using the Resource Manager cmdlet or the Azure CLI.
+You can also create an activity log alert on future events similar to an activity log event that already occurred.
+1. In the [portal](https://portal.azure.com/), [go to the Activity log pane](../essentials/activity-log.md#view-the-activity-log).
+1. Filter or find the desired event. Then create an alert by selecting **Add activity log alert**.
-### Deploy the ARM template with PowerShell
+ :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot that shows creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
-To use PowerShell to deploy the sample ARM template shown in the [previous section](#create-an-activity-log-alert-rule-by-using-an-arm-template), use the following command:
+1. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name that initiated the event, are both included by default in the new alert rule.
-```powershell
-New-AzResourceGroupDeployment -ResourceGroupName "myRG" -TemplateFile sampleActivityLogAlert.json -TemplateParameterFile sampleActivityLogAlert.parameters.json
-```
+ If you want to make the alert rule more general, modify the scope and condition accordingly. See steps 3-9 in the section "Create a new alert rule in the Azure portal."
-The *sampleActivityLogAlert.parameters.json* file contains values for the parameters that you need for alert rule creation.
+1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
## Changes to the log alert rule creation experience
azure-monitor Resource Manager Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-activity-log.md
+
+ Title: Resource Manager template samples for activity log alerts
+description: Sample Azure Resource Manager templates to deploy Azure Monitor activity log alerts.
+++ Last updated : 12/28/2022++
+# Resource Manager template samples for activity log alert rules in Azure Monitor
+
+This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure activity log alerts in Azure Monitor.
++
+## Activity log alert rule using the **Administrative** condition:
+
+This example sets the condition to the **Administrative** category:
+
+```json
+"condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "Administrative"
+ },
+ {
+ "field": "resourceType",
+ "equals": "Microsoft.Resources/deployments"
+ }
+ ]
+ }
+```
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "activityLogAlertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Unique name (within the Resource Group) for the Activity log alert."
+ }
+ },
+ "activityLogAlertEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Indicates whether or not the alert is enabled."
+ }
+ },
+ "actionGroupResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Resource Id for the Action group."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/activityLogAlerts",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('activityLogAlertName')]",
+ "location": "Global",
+ "properties": {
+ "enabled": "[parameters('activityLogAlertEnabled')]",
+ "scopes": [
+ "[subscription().id]"
+ ],
+ "condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "Administrative"
+ },
+ {
+ "field": "operationName",
+ "equals": "Microsoft.Resources/deployments/write"
+ },
+ {
+ "field": "resourceType",
+ "equals": "Microsoft.Resources/deployments"
+ }
+ ]
+ },
+ "actions": {
+ "actionGroups":
+ [
+ {
+ "actionGroupId": "[parameters('actionGroupResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+## Template to send activity log alerts on service notifications
+The following template creates an action group with an email target and enables all service health notifications for the target subscription. Save this template as `CreateServiceHealthAlert.json`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "actionGroups_name": {
+ "type": "string",
+ "defaultValue": "SubHealth"
+ },
+ "activityLogAlerts_name": {
+ "type": "string",
+ "defaultValue": "ServiceHealthActivityLogAlert"
+ },
+ "emailAddress": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "alertScope": "[format('/subscriptions/{0}', subscription().subscriptionId)]"
+ },
+ "resources": [
+ {
+ "type": "microsoft.insights/actionGroups",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('actionGroups_name')]",
+ "location": "Global",
+ "properties": {
+ "groupShortName": "[parameters('actionGroups_name')]",
+ "enabled": true,
+ "emailReceivers": [
+ {
+ "name": "[parameters('actionGroups_name')]",
+ "emailAddress": "[parameters('emailAddress')]"
+ }
+ ],
+ "smsReceivers": [],
+ "webhookReceivers": []
+ }
+ },
+ {
+ "type": "microsoft.insights/activityLogAlerts",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('activityLogAlerts_name')]",
+ "location": "Global",
+ "properties": {
+ "scopes": [
+ "[variables('alertScope')]"
+ ],
+ "condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "ServiceHealth"
+ },
+ {
+ "field": "properties.incidentType",
+ "equals": "Incident"
+ }
+ ]
+ },
+ "actions": {
+ "actionGroups": [
+ {
+ "actionGroupId": "[resourceId('microsoft.insights/actionGroups', parameters('actionGroups_name'))]",
+ "webhookProperties": {}
+ }
+ ]
+ },
+ "enabled": true
+ },
+ "dependsOn": [
+ "[resourceId('microsoft.insights/actionGroups', parameters('actionGroups_name'))]"
+ ]
+ }
+ ]
+}
+```
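+
+Assuming you saved the template as `CreateServiceHealthAlert.json` as described above, a minimal deployment sketch with the Azure CLI could look like the following; the resource group name and email address are placeholders.
+
+```azurecli
+# Sketch only: deploy the service health alert template (resource group and email are placeholders).
+az deployment group create \
+  --resource-group myRG \
+  --template-file CreateServiceHealthAlert.json \
+  --parameters emailAddress="alerts@contoso.com"
+```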
+
+## Next steps
+
+- [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
+- [Learn more about alert rules](./alerts-overview.md).
azure-monitor Resource Manager Alerts Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-resource-health.md
+
+ Title: Resource Manager template samples for resource health alerts
+description: Sample Azure Resource Manager templates to deploy Azure Monitor resource health alerts.
+++ Last updated : 05/11/2022+++
+# Resource Manager template samples for resource health alert rules in Azure Monitor
+
+This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure resource health alerts in Azure Monitor.
++
+See [Configure resource health alerts using Resource Manager templates](../../service-health/resource-health-alert-arm-template-guide.md) for information about configuring resource health alerts using ARM templates.
+
+## Basic template for Resource Health alerts
+
+You can use this base template as a starting point for creating Resource Health alerts. This template will work as written, and will sign you up to receive alerts for all newly activated resource health events across all resources in a subscription.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "activityLogAlertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Unique name (within the Resource Group) for the Activity log alert."
+ }
+ },
+ "actionGroupResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Resource Id for the Action group."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/activityLogAlerts",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('activityLogAlertName')]",
+ "location": "Global",
+ "properties": {
+ "enabled": true,
+ "scopes": [
+ "[subscription().id]"
+ ],
+ "condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "ResourceHealth"
+ },
+ {
+ "field": "status",
+ "equals": "Active"
+ }
+ ]
+ },
+ "actions": {
+ "actionGroups":
+ [
+ {
+ "actionGroupId": "[parameters('actionGroupResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
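+
+The template expects the resource ID of an existing action group. The following sketch, which assumes the Azure CLI, creates an action group and passes its ID to the deployment; all names, the file name, and the email address are placeholders.
+
+```azurecli
+# Sketch only: create an action group, then deploy the template (all names are placeholders).
+az monitor action-group create \
+  --resource-group myRG \
+  --name myActionGroup \
+  --short-name myAG \
+  --action email admin admin@contoso.com
+
+actionGroupId=$(az monitor action-group show --resource-group myRG --name myActionGroup --query id --output tsv)
+
+az deployment group create \
+  --resource-group myRG \
+  --template-file resourceHealthAlert.json \
+  --parameters activityLogAlertName="resource-health-alert" actionGroupResourceId="$actionGroupId"
+```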
+++
+## Template to maximize the signal to noise ratio
+
+This sample template is configured to maximize the signal-to-noise ratio. Keep in mind that there can be cases where the `currentHealthStatus`, `previousHealthStatus`, and `cause` property values are null in some events.
+
+See [Configure resource health alerts using Resource Manager templates](../../service-health/resource-health-alert-arm-template-guide.md) for information about configuring resource health alerts using ARM templates.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "activityLogAlertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Unique name (within the Resource Group) for the Activity log alert."
+ }
+ },
+ "actionGroupResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Resource Id for the Action group."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/activityLogAlerts",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('activityLogAlertName')]",
+ "location": "Global",
+ "properties": {
+ "enabled": true,
+ "scopes": [
+ "[subscription().id]"
+ ],
+ "condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "ResourceHealth",
+ "containsAny": null
+ },
+ {
+ "anyOf": [
+ {
+ "field": "properties.currentHealthStatus",
+ "equals": "Available",
+ "containsAny": null
+ },
+ {
+ "field": "properties.currentHealthStatus",
+ "equals": "Unavailable",
+ "containsAny": null
+ },
+ {
+ "field": "properties.currentHealthStatus",
+ "equals": "Degraded",
+ "containsAny": null
+ }
+ ]
+ },
+ {
+ "anyOf": [
+ {
+ "field": "properties.previousHealthStatus",
+ "equals": "Available",
+ "containsAny": null
+ },
+ {
+ "field": "properties.previousHealthStatus",
+ "equals": "Unavailable",
+ "containsAny": null
+ },
+ {
+ "field": "properties.previousHealthStatus",
+ "equals": "Degraded",
+ "containsAny": null
+ }
+ ]
+ },
+ {
+ "anyOf": [
+ {
+ "field": "properties.cause",
+ "equals": "PlatformInitiated",
+ "containsAny": null
+ }
+ ]
+ },
+ {
+ "anyOf": [
+ {
+ "field": "status",
+ "equals": "Active",
+ "containsAny": null
+ },
+ {
+ "field": "status",
+ "equals": "Resolved",
+ "containsAny": null
+ },
+ {
+ "field": "status",
+ "equals": "In Progress",
+ "containsAny": null
+ },
+ {
+ "field": "status",
+ "equals": "Updated",
+ "containsAny": null
+ }
+ ]
+ }
+ ]
+ },
+ "actions": {
+ "actionGroups": [
+ {
+ "actionGroupId": "[parameters('actionGroupResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+Learn more about Resource Health:
+- [Azure Resource Health overview](../../service-health/resource-health-overview.md)
+- [Resource types and health checks available through Azure Resource Health](../../service-health/resource-health-checks-resource-types.md)
+- [Configure Alerts for Service Health](../../service-health/alerts-activity-log-service-notifications-arm.md)
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
By default, all tables in your Log Analytics workspace are Analytics tables, and
| [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations) | Communication Services incoming requests Calls. | | [ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary) | Communication Services recording summary logs. | | [ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | Communication Services Rooms incoming requests operations. |
+| [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | Health Data Services operational logs. |
| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | Application Insights Freeform traces. | | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations) | Azure Media Services encoder connects, disconnects, or discontinues. | | [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key, or license acquisition. |
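
The tables in this list support the Basic table plan. As a minimal sketch, and assuming the `az monitor log-analytics workspace table update` command with the `--plan` parameter is available in your CLI version, you can switch a supported table to the Basic plan; the resource group and workspace names are placeholders.

```azurecli
# Sketch only: switch a supported table to the Basic plan (resource group and workspace are placeholders).
az monitor log-analytics workspace table update \
  --resource-group myRG \
  --workspace-name myWorkspace \
  --name AppTraces \
  --plan Basic
```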
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Instead of directly configuring the schema of the table, you can use the portal
```kusto source | extend TimeGenerated = todatetime(Time)
- | parse RawData.value with
+ | parse RawData with
ClientIP:string ' ' * ' ' *
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 11/07/2022 Last updated : 01/03/2023 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled
Incorrect or incomplete AD DS site topology or configuration can result in volume creation failures, problems with client queries, authentication failures, and failures to modify Azure NetApp Files AD connections.
-The AD Site Name field is required to create an Azure NetApp Files AD connection. The AD DS site defined must exist and be properly configured.
+>[!IMPORTANT]
+>The AD Site Name field is required to create an Azure NetApp Files AD connection. The AD DS site defined must exist and be properly configured.
Azure NetApp Files uses the AD DS Site to discover the domain controllers and subnets assigned to the AD DS Site defined in the AD Site Name. All domain controllers assigned to the AD DS Site must have good network connectivity from the Azure virtual network interfaces used by ANF and be reachable. AD DS domain controller VMs assigned to the AD DS Site that are used by Azure NetApp Files must be excluded from cost management policies that shut down VMs.
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Troubleshoot common Azure deployment errors for resources that are deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Create Troubleshooting Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/create-troubleshooting-template.md
Title: Create a troubleshooting template
description: Describes how to create a template to troubleshoot Azure resource deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 09/14/2022 Last updated : 01/03/2023 # Create a troubleshooting template
azure-resource-manager Deployment Quota Exceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/deployment-quota-exceeded.md
Title: Deployment quota exceeded description: Describes how to resolve the error of having more than 800 deployments in the resource group history. Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Enable Debug Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/enable-debug-logging.md
Title: Enable debug logging
description: Describes how to enable debug logging to troubleshoot Azure resources deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 12/30/2022 Last updated : 01/03/2023
azure-resource-manager Error Invalid Name Segments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-invalid-name-segments.md
Title: Invalid resource name and type segments description: Describes how to resolve an error when the resource name and type don't have the same number of segments. Previously updated : 09/12/2022 Last updated : 01/03/2023 # Resolve errors for resource name and type mismatch
azure-resource-manager Error Invalid Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-invalid-template.md
Title: Invalid template errors description: Describes how to resolve invalid template errors when deploying Bicep files or Azure Resource Manager templates (ARM templates). Previously updated : 12/28/2022 Last updated : 01/03/2023 # Resolve errors for invalid template
azure-resource-manager Error Job Size Exceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-job-size-exceeded.md
Title: Job size exceeded error description: Describes how to troubleshoot errors for job size exceeded or if the template is too large for deployments using a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 09/12/2022 Last updated : 01/03/2023 # Resolve errors for job size exceeded
azure-resource-manager Error Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-not-found.md
Title: Resource not found errors description: Describes how to resolve errors when a resource can't be found. The error might occur when you deploy a Bicep file or Azure Resource Manager template, or when doing management tasks. Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Error Parent Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-parent-resource.md
Title: Parent resource errors description: Describes how to resolve errors when you deploy a resource that's dependent on a parent resource in a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 09/12/2022 Last updated : 01/03/2023 # Resolve errors for parent resources
azure-resource-manager Error Policy Requestdisallowedbypolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-policy-requestdisallowedbypolicy.md
Title: Request disallowed by policy error description: Describes the error for request disallowed by policy when deploying resources with an Azure Resource Manager template (ARM template) or Bicep file. Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Error Register Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-register-resource-provider.md
Title: Resource provider registration errors description: Describes how to resolve Azure resource provider registration errors for resources deployed with a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Error Reserved Resource Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-reserved-resource-name.md
Title: Reserved resource name errors description: Describes how to resolve errors when providing a resource name that includes a reserved word. Previously updated : 09/12/2022 Last updated : 01/03/2023 # Resolve errors for reserved resource names
azure-resource-manager Error Resource Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-resource-quota.md
Title: Resource quota errors description: Describes how to resolve resource quota errors when deploying resources with an Azure Resource Manager template (ARM template) or Bicep file. Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Error Sku Not Available https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-sku-not-available.md
Title: SKU not available errors description: Describes how to troubleshoot the SKU not available error when deploying resources with an Azure Resource Manager template (ARM template) or Bicep file. Previously updated : 09/12/2022 Last updated : 01/03/2023
azure-resource-manager Error Storage Account Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-storage-account-name.md
Title: Resolve errors for storage account names description: Describes how to resolve errors for Azure storage account names that can occur during deployment with a Bicep file or Azure Resource Manager template (ARM template). Previously updated : 09/12/2022 Last updated : 01/03/2023 # Resolve errors for storage account names
azure-resource-manager Find Error Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md
Title: Find error codes
description: Describes how to find error codes to troubleshoot Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 09/14/2022 Last updated : 01/03/2023
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/overview.md
Title: Overview of deployment troubleshooting for Bicep files and ARM templates description: Describes deployment troubleshooting when you use Bicep files or Azure Resource Manager templates (ARM templates) to deploy Azure resources. Previously updated : 09/14/2022 Last updated : 01/03/2023
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
Title: Troubleshoot ARM template JSON deployments description: Learn how to troubleshoot Azure Resource Manager template (ARM template) JSON deployments. Previously updated : 09/14/2022 Last updated : 01/03/2023
azure-resource-manager Quickstart Troubleshoot Bicep Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md
Title: Troubleshoot Bicep file deployments description: Learn how to monitor and troubleshoot Bicep file deployments. Shows activity logs and deployment history. Previously updated : 09/14/2022 Last updated : 01/03/2023
azure-web-pubsub Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-custom-domain.md
$ curl -vvv https://contoso.example.com/api/health
It should return `200` status code without any certificate error.
+## Key Vault in private network
+
+If you have configured a [Private Endpoint](../private-link/private-endpoint-overview.md) for your Key Vault, Azure Web PubSub Service cannot access the Key Vault over the public network. You need to set up a [Shared Private Endpoint](./howto-secure-shared-private-endpoints-key-vault.md) so that Azure Web PubSub Service can access your Key Vault over a private network.
+
+After you create a Shared Private Endpoint, you can create a custom certificate as usual. **You don't have to change the domain in the Key Vault URI**. For example, if your Key Vault base URI is `https://contoso.vault.azure.net`, you still use this URI to configure the custom certificate.
+
+You don't have to explicitly allow Azure Web PubSub Service IPs in Key Vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md).
+ ## Next steps + [How to enable managed identity for Azure Web PubSub Service](howto-use-managed-identity.md)
azure-web-pubsub Howto Secure Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints-key-vault.md
+
+ Title: Access Key Vault in private network through Shared Private Endpoints
+
+description: How to access key vault in private network through Shared Private Endpoints
+++ Last updated : 01/03/2023+++
+# Access Key Vault in private network through Shared Private Endpoints
+
+Azure Web PubSub Service can access your Key Vault in a private network through Shared Private Endpoints, so you don't have to expose your Key Vault on the public network.
+
+ :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" :::
+
+## Shared Private Link Resources Management
+
+Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the Azure Web PubSub Service execution environment and aren't directly visible to you.
+
+> [!NOTE]
+> The examples in this article are based on the following assumptions:
+> * The resource ID of this Azure Web PubSub Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub_.
+> * The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+
+The rest of the examples show how the *contoso-webpubsub* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than the public network.
+
+### Step 1: Create a shared private link resource to the Key Vault
+
+#### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, go to your Azure Web PubSub Service resource.
+1. In the menu pane, select **Networking**. Switch to **Private access** tab.
+1. Click **Add shared private endpoint**.
+
+ :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
+
+1. Fill in a name for the shared private endpoint.
+1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
+
+1. The shared private endpoint resource will be in the **Succeeded** provisioning state. The connection state is **Pending** approval at the target resource side.
+
+ :::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
+
+You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+
+```dotnetcli
+az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/sharedPrivateLinkResources/kv-pe?api-version=2022-08-01-preview --body @create-pe.json
+```
+
+The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+
+```json
+{
+ "name": "contoso-kv",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv",
+ "groupId": "vault",
+ "requestMessage": "please approve"
+ }
+}
+```
+
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+
+```plaintext
+"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2022-08-01-preview"
+```
+
+You can poll this URI periodically to obtain the status of the operation.
+
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperation` header value:
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2022-08-01-preview
+```
+
+Wait until the status changes to "Succeeded" before proceeding to the next steps.
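+
+If you prefer to script the wait, the following is a minimal sketch; the operation URI is a placeholder for the value returned in the `Azure-AsyncOperation` header.
+
+```azurecli
+# Sketch only: poll the async operation until it leaves the "InProgress" state (URI is a placeholder).
+while true; do
+  state=$(az rest --method get --uri "<Azure-AsyncOperation URI>" --query status --output tsv)
+  echo "Operation status: $state"
+  [ "$state" != "InProgress" ] && break
+  sleep 30
+done
+```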
+
+--
+
+### Step 2a: Approve the private endpoint connection for the Key Vault
+
+#### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, select the **Networking** tab of your Key Vault and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
+
+1. Select the private endpoint that Azure Web PubSub Service created. Click **Approve**.
+
+ Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+
+ :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. List private endpoint connections.
+
+ ```dotnetcli
+ az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults'
+ ```
+
+ There should be a pending private endpoint connection. Note down its ID.
+
+ ```json
+ [
+ {
+ "id": "<id>",
+ "location": "",
+ "name": "",
+ "properties": {
+ "privateLinkServiceConnectionState": {
+ "actionRequired": "None",
+ "description": "Please approve",
+ "status": "Pending"
+ }
+ }
+ }
+ ]
+ ```
+
+1. Approve the private endpoint connection.
+
+ ```dotnetcli
+ az network private-endpoint-connection approve --id <private-endpoint-connection-id>
+ ```
+
+--
+
+### Step 2b: Query the status of the shared private link resource
+
+It takes a few minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state by using either the Azure portal or the Azure CLI.
+
+#### [Azure portal](#tab/azure-portal)
+
+ :::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
+
+```dotnetcli
+az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/sharedPrivateLinkResources/kv-pe?api-version=2022-08-01-preview
+```
+
+This returns a JSON response in which the connection state shows up as `status` under the `properties` section.
+
+```json
+{
+ "name": "contoso-kv",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv",
+ "groupId": "vaults",
+ "requestMessage": "please approve",
+ "status": "Approved",
+ "provisioningState": "Succeeded"
+ }
+}
+
+```
+
+If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure Web PubSub Service can communicate over the private endpoint.
+
+--
+
+At this point, the private endpoint between Azure Web PubSub Service and Azure Key Vault is established.
+
+Now you can configure features like custom domain as usual. **You don't have to use a special domain for Key Vault**. DNS resolution is automatically handled by Azure Web PubSub Service.
+
+## Next steps
+
+Learn more:
+
++ [What are private endpoints?](../private-link/private-endpoint-overview.md)
++ [Configure custom domain](howto-custom-domain.md)
azure-web-pubsub Reference Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-odata-filter.md
Title: OData filter syntax in Azure Web PubSub service
-description: OData language reference and full syntax used for creating filter expressions in Azure Web PubSub service queries.
+description: This article provides an OData language reference and the full syntax for creating filter expressions in Azure Web PubSub service queries.
Last updated 11/11/2022
-# OData filter syntax in Azure Web PubSub service
+# OData filter syntax in the Azure Web PubSub service
-Azure Web PubSub's **filter** parameter defines inclusion or exclusion criteria for sending messages to connections. This parameter is used in the [Send to all](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all), [Send to group](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-group), and [Send to user](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-user) operations.
+The Azure Web PubSub `filter` parameter defines inclusion or exclusion criteria for sending messages to connections. This parameter is used in the [Send to all](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all), [Send to group](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-group), and [Send to user](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-user) operations.
This article provides the following resources: -- A description of the OData syntax of the **filter** parameter with examples.-- A description of the complete [Extended Backus-Naur Form](#formal-grammar) grammar.-- A browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) to interactively explore the syntax grammar rules.
+- A description of the OData syntax of the `filter` parameter with examples.
+- A description of the complete [Extended Backus-Naur Form (EBNF)](#formal-grammar) grammar.
## Syntax
-A filter in the OData language is boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) description:
+A filter in the OData language is a Boolean expression. It can be one of several expression types, as shown in the following EBNF description:
``` /* Identifiers */
boolean_expression ::= logical_expression
| '(' boolean_expression ')' ```
-An interactive syntax diagram is available at, [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram).
+You can use an [interactive syntax diagram](https://aka.ms/awps/filter-syntax-diagram) to explore the syntax grammar rules.
-For the complete EBNF, see [formal grammar section](#formal-grammar) .
+The [Formal grammar](#formal-grammar) section of this article provides the complete EBNF.
### Identifiers
-Using the filter syntax, you can control sending messages to connections matching the identifier criteria. Azure Web PubSub supports below identifiers:
+By using the filter syntax, you can control sending messages to connections that match the identifier criteria. Azure Web PubSub supports the following identifiers:
-| Identifier | Description | Note | Examples |
+| Identifier | Description | Note | Example |
| | |--| --
-| `userId` | The userId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `userId eq 'user1'`
-| `connectionId` | The connectionId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `connectionId ne '123'`
-| `groups` | The collection of groups the connection is currently in. | Case insensitive. It can be used in [collection operations](#supported-operations). | `'group1' in groups`
+| `userId` | The user ID of the connection | Case insensitive. It can be used in [string operations](#supported-operations). | `userId eq 'user1'`
+| `connectionId` | The connection ID of the connection | Case insensitive. It can be used in [string operations](#supported-operations). | `connectionId ne '123'`
+| `groups` | The collection of groups that the connection is currently in | Case insensitive. It can be used in [collection operations](#supported-operations). | `'group1' in groups`
-Identifiers refer to the property value of a connection. Azure Web PubSub supports three identifiers matching the property name of the connection model. and supports identifiers `userId` and `connectionId` in string operations, supports identifier `groups` in [collection operations](#supported-operations). For example, to filter out connections with userId `user1`, we specify the filter as `userId eq 'user1'`. Read through the below sections for more samples using the filter.
+Identifiers refer to the property value of a connection. Azure Web PubSub supports three identifiers that match the property name of the connection model. The service supports the `userId` and `connectionId` identifiers in string operations, and it supports the `groups` identifier in [collection operations](#supported-operations).
+
+For example, to filter out connections with a user ID of `user1`, you specify the filter as `userId eq 'user1'`. Read through the following sections for more examples of using the filter.
### Boolean expressions
-The expression for a filter is a boolean expression. Azure Web PubSub sends messages to connections with filter expressions evaluated to `true`.
+The expression for a filter is a Boolean expression. Azure Web PubSub sends messages to connections with filter expressions evaluated to `true`.
-The types of boolean expressions include:
+The types of Boolean expressions include:
-- Logical expressions that combine other boolean expressions using the operators `and`, `or`, and `not`. -- Comparison expressions, which compare fields or range variables to constant values using the operators `eq`, `ne`, `gt`, `lt`, `ge`, and `le`.-- The boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.-- Boolean expressions in parentheses. Using parentheses helps to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
+- Logical expressions that combine other Boolean expressions by using the operators `and`, `or`, and `not`.
+- Comparison expressions, which compare fields or range variables to constant values by using the operators `eq`, `ne`, `gt`, `lt`, `ge`, and `le`.
+- The Boolean literals `true` and `false`. These constants can be useful sometimes when you're programmatically generating filters. Otherwise, they don't tend to be used in practice.
+- Boolean expressions in parentheses. Using parentheses helps to explicitly determine the order of operations in a filter. The [Operator precedence](#operator-precedence) section of this article describes the default precedence of the OData operators.
### Supported operations
The filter syntax supports the following operations:
| Operator | Description | Example | | |
-| **Logical Operators**
+| **Logical operators**
| `and` | Logical and | `length(userId) le 10 and length(userId) gt 3` | `or` | Logical or | `length(userId) gt 10 or length(userId) le 3` | `not` | Logical negation | `not endswith(userId, 'milk')`
-| **Comparison Operators**
+| **Comparison operators**
| `eq` | Equal | `userId eq 'user1'`, </br> `userId eq null` | `ne` | Not equal | `userId ne 'user1'`, </br> `userId ne null` | `gt` | Greater than | `length(userId) gt 10` | `ge` | Greater than or equal | `length(userId) ge 10` | `lt` | Less than | `length(userId) lt 3` | `le` | Less than or equal | `'group1' in groups`, </br> `user in ('user1','user2')`
-| **In Operator**
-| `in` | The right operand MUST be either a comma-separated list of primitive values, enclosed in parentheses, or a single expression that resolves to a collection.| `userId ne 'user1'`
-| **Grouping Operator**
+| **In operator**
+| `in` | Right operand *must* be either a comma-separated list of primitive values, enclosed in parentheses, or a single expression that resolves to a collection | `userId in ('user1', 'user2')`
+| **Grouping operator**
| `()` | Controls the evaluation order of an expression | `userId eq 'user1' or (not (startswith(userId,'user2'))`
-| **String Functions**
-| `string tolower(string p)` | Get the lower case for the string value | `tolower(userId) eq 'user1'` can match connections for user `USER1`
-| `string toupper(string p)` | Get the upper case for the string value | `toupper(userId) eq 'USER1'` can match connections for user `user1`
-| `string trim(string p)` | Trim the string value | `trim(userId) eq 'user1'` can match connections for user ` user1 `
+| **String functions**
+| `string tolower(string p)` | Gets the lower case for the string value | `tolower(userId) eq 'user1'` can match connections for user `USER1`
+| `string toupper(string p)` | Gets the upper case for the string value | `toupper(userId) eq 'USER1'` can match connections for user `user1`
+| `string trim(string p)` | Trims the string value | `trim(userId) eq 'user1'` can match connections for user ` user1 `
| `string substring(string p, int startIndex)`,</br>`string substring(string p, int startIndex, int length)` | Substring of the string | `substring(userId,5,2) eq 'ab'` can match connections for user `user-ab-de`
-| `bool endswith(string p0, string p1)` | Check if `p0` ends with `p1` | `endswith(userId,'de')` can match connections for user `user-ab-de`
-| `bool startswith(string p0, string p1)` | Check if `p0` starts with `p1` | `startswith(userId,'user')` can match connections for user `user-ab-de`
-| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` doesn't contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
-| `int length(string p)` | Get the length of the input string | `length(userId) gt 1` can match connections for user `user-ab-de`
-| **Collection Functions**
-| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in two groups
+| `bool endswith(string p0, string p1)` | Checks if `p0` ends with `p1` | `endswith(userId,'de')` can match connections for user `user-ab-de`
+| `bool startswith(string p0, string p1)` | Checks if `p0` starts with `p1` | `startswith(userId,'user')` can match connections for user `user-ab-de`
+| `int indexof(string p0, string p1)` | Gets the index of `p1` in `p0`, or returns `-1` if `p0` doesn't contain `p1` | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
+| `int length(string p)` | Gets the length of the input string | `length(userId) gt 1` can match connections for user `user-ab-de`
+| **Collection function**
+| `int length(collection p)` | Gets the length of the collection | `length(groups) gt 1` can match connections in two groups
### Operator precedence
-If you write a filter expression with no parentheses around its subexpressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine subexpressions. The following table lists groups of operators in order from highest to lowest precedence:
+If you write a filter expression with no parentheses around its subexpressions, the Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine subexpressions. The following table lists groups of operators in order from highest to lowest precedence:
-| Group | Operator(s) |
+| Group | Operators |
| | | | Logical operators | `not` | | Comparison operators | `eq`, `ne`, `gt`, `lt`, `ge`, `le` | | Logical operators | `and` | | Logical operators | `or` |
-An operator that is higher in the above table will "bind more tightly" to its operands than other operators. For example, `and` is of higher precedence than `or`, and comparison operators are of higher precedence than either of them, so the following two expressions are equivalent:
+An operator that's higher in the preceding table will "bind more tightly" to its operands than other operators do. For example, `and` has higher precedence than `or`, and comparison operators have higher precedence than either of them. So, the following two expressions are equivalent:
```odata-filter-expr length(userId) gt 0 and length(userId) lt 3 or length(userId) gt 7 and length(userId) lt 10 ((length(userId) gt 0) and (length(userId) lt 3)) or ((length(userId) gt 7) and (length(userId) lt 10)) ```
-The `not` operator has the highest precedence of all, even higher than the comparison operators. If you write a filter like this:
+The `not` operator has the highest precedence of all. It's even higher than the comparison operators. If you write a filter like this:
```odata-filter-expr not length(userId) gt 5
You'll get this error message:
Invalid syntax for 'not length(userId)': Type 'null', expect 'bool'. (Parameter 'filter') ```
-This error happens because the operator is associated with just the `length(userId)` expression, which is of type `null` when `userId` is `null`, and not with the entire comparison expression. The fix is to put the operand of `not` in parentheses:
+This error happens because the operator is associated with just the `length(userId)` expression, and not with the entire comparison expression. The `length(userId)` expression is of type `null` when `userId` is `null`. The fix is to put the operand of `not` in parentheses:
```odata-filter-expr not (length(userId) gt 5)
not (length(userId) gt 5)
### Filter size limitations
-There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. To avoid exceeding the limit, design your application so that it doesn't generate filters of unbounded size.
+There are limits to the size and complexity of filter expressions that you can send to the Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have more than 100 clauses, you're at risk of exceeding the limit. To avoid exceeding the limit, design your application so that it doesn't generate filters of unbounded size.
## Examples
-1. Send to multiple groups
-
- ```odata-filter-expr
- filter='group1' in groups or 'group2' in groups or 'group3' in groups
- ```
-2. Send to multiple users in some specific group
- ```odata-filter-expr
- filter=userId in ('user1', 'user2', 'user3') and 'group1' in groups
- ```
-3. Send to some user but not some specific connectionId
- ```odata-filter-expr
- filter=userId eq 'user1' and connectionId ne '123'
- ```
-4. Send to some user not in some specific group
- ```odata-filter-expr
- filter=userId eq 'user1' and (not ('group1' in groups))
- ```
-5. Escape `'` when userId contains `'`
- ```odata-filter-expr
- filter=userId eq 'user''1'
- ```
+Send to multiple groups:
-## Formal grammar
+```odata-filter-expr
+filter='group1' in groups or 'group2' in groups or 'group3' in groups
+```
+
+Send to multiple users in a specific group:
-We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions, then breaking them down into more primitive expressions. The top is the grammar rule for `$filter` that corresponds to specific parameter `filter` of the Azure Web PubSub service `Send*` REST APIs:
+```odata-filter-expr
+filter=userId in ('user1', 'user2', 'user3') and 'group1' in groups
+```
+
+Send to a user but not a specific connection ID:
+
+```odata-filter-expr
+filter=userId eq 'user1' and connectionId ne '123'
+```
+
+Send to a user who's not in a specific group:
+
+```odata-filter-expr
+filter=userId eq 'user1' and (not ('group1' in groups))
+```
+
+Escape `'` when the user ID contains `'`:
+
+```odata-filter-expr
+filter=userId eq 'user''1'
+```
+
+## Formal grammar
+The following [Extended Backus-Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form) grammar can describe the subset of the OData language that the Azure Web PubSub service supports. This grammar lists rules "top down," by starting with the most complex expressions and then breaking them down into more primitive expressions. The top is the grammar rule for `$filter` that corresponds to the specific `filter` parameter of the Azure Web PubSub service's `Send*` REST APIs.
``` /* Top-level rule */
length_function_call ::= "length" '(' string_expression | collection_exp
## Next steps
backup Back Up Azure Stack Hyperconverged Infrastructure Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md
# Back up Azure Stack HCI virtual machines with Azure Backup Server
-This article explains how to back up virtual machines on Azure Stack HCI using Microsoft Azure Backup Server (MABS).
+This article describes how to back up virtual machines on Azure Stack HCI using Microsoft Azure Backup Server (MABS).
## Supported scenarios
These are the prerequisites for backing up virtual machines with MABS:
2. Set up the MABS protection agent on the server or each cluster node.
-3. In the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
+3. On the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
4. On the **Select Group Members** page, select the VMs you want to protect from the host servers on which they're located. We recommend you put all VMs that will have the same protection policy into one protection group. To make efficient use of space, enable colocation. Colocation allows you to locate data from different protection groups on the same disk or tape storage, so that multiple data sources have a single replica and recovery point volume. 5. On the **Select Data Protection Method** page, specify a protection group name. Select **I want short-term protection using Disk** and select **I want online protection** if you want to back up data to Azure using the Azure Backup service.
-6. In **Specify Short-Term Goals** > **Retention range**, specify how long you want to retain disk data. In **Synchronization frequency**, specify how often incremental backups of the data should run. Alternatively, instead of selecting an interval for incremental backups you can enable **Just before a recovery point**. With this setting enabled, MABS will run an express full backup just before each scheduled recovery point.
+6. On **Specify Short-Term Goals** > **Retention range**, specify how long you want to retain disk data. In **Synchronization frequency**, specify how often incremental backups of the data should run. Alternatively, instead of selecting an interval for incremental backups you can enable **Just before a recovery point**. With this setting enabled, MABS will run an express full backup just before each scheduled recovery point.
> [!NOTE] >If you're protecting application workloads, recovery points are created in accordance with Synchronization frequency, provided the application supports incremental backups. If it doesn't, then MABS runs an express full backup, instead of an incremental backup, and creates recovery points in accordance with the express backup schedule.<br></br>The backup process doesn't back up the checkpoints associated with VMs.
cognitive-services V3 0 Break Sentence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md
A successful response is a JSON array with one result for each string in the inp
* `language`: Code of the detected language.
- * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
+ * `score`: A float value indicating the confidence in the result. The score is between zero (0) and one (1.0). A low score (<= 0.4) indicates a low confidence.
The `detectedLanguage` property is only present in the result object when language auto-detection is requested.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | | USA | Short-Codes | General Availability | General Availability | - | - |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Customers with UK Azure billing addresses
The tables below summarize current availability:
| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Customers with Ireland Azure billing addresses
The tables below summarize current availability:
| UK | Local | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Customers with Denmark Azure billing addresses
The tables below summarize current availability:
| UK | Toll-Free | - | - | Public Preview | Public Preview\* | | UK | Local | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Customers with Canada Azure billing addresses
The tables below summarize current availability:
| UK | Local | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Customers with Italy Azure billing addresses
The tables below summarize current availability:
| Italy | Toll-Free** | - | - | Public Preview | Public Preview\* | | Italy | Local** | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
\** Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
The tables below summarize current availability:
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* | | USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
-\* Available through Azure Bot Framework and Dynamics only
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
## Next steps
confidential-ledger Write Transaction Receipts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/write-transaction-receipts.md
We authenticate using the [DefaultAzureCredential class](/python/api/azure-ident
credential = DefaultAzureCredential() ```
-Then, we get and save the Confidential Ledger service certificate using the Certificate client from the [Confidential Ledger Identity URL](https://identity.confidential-ledger.core.azure.com/ledgerIdentity). The service certificate is a network identity public key certificate used as root of trust for [TLS](https://microsoft.github.io/CCF/main/overview/glossary.html#term-TLS) server authentication. In other words, it's used as the Certificate Authority (CA) for establishing a TLS connection with any of the nodes in the CCF network.
+Then, we get and save the Confidential Ledger service certificate using the Certificate client from the Confidential Ledger Identity URL. The service certificate is a network identity public key certificate used as root of trust for [TLS](https://microsoft.github.io/CCF/main/overview/glossary.html#term-TLS) server authentication. In other words, it's used as the Certificate Authority (CA) for establishing a TLS connection with any of the nodes in the CCF network.
```python # Create a Certificate client and use it to
container-apps Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md
The system log data is accessible by querying the `ContainerAppSystemlogs_CL` ta
## Console Logs
-Console logs originate from the `stderr` and `stdout` messages from the containers in your container app and Dapr sidecars. You can view console logs by querying the `ContainerAppConsolelogs_CL` table.
+Console logs originate from the `stderr` and `stdout` messages from the containers in your container app and Dapr sidecars. You can view console logs by querying the `ContainerAppConsoleLogs_CL` table.
> [!TIP] > Instrumenting your code with well-defined log messages can help you to understand how your code is performing and to debug issues. To learn more about best practices refer to [Design for operations](/azure/architecture/guide/design-principles/design-for-operations).
Log Analytics is a tool in the Azure portal that you can use to view and analyze
Start Log Analytics from **Logs** in the sidebar menu on your container app page. You can also start Log Analytics from **Monitor>Logs**.
-You can query the logs using the tables listed in the **CustomLogs** category **Tables** tab. The tables in this category are the `ContainerAppSystemlogs_CL` and `ContainerAppConsolelogs_CL` tables.
+You can query the logs using the tables listed in the **CustomLogs** category **Tables** tab. The tables in this category are the `ContainerAppSystemlogs_CL` and `ContainerAppConsoleLogs_CL` tables.
:::image type="content" source="media/observability/log-analytics-query-page.png" alt-text="Screenshot of the Log Analytics custom log tables.":::
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
The first step to use Synapse Link is to enable it for your Azure Cosmos DB data
1. [Create a new Azure account](create-sql-api-dotnet.md#create-account), or select an existing Azure Cosmos DB account.
-1. Navigate to your Azure Cosmos DB account and open the **Features** pane.
+1. Navigate to your Azure Cosmos DB account and open the **Azure Synapse Link** under Intergrations in the left pane.
-1. Select **Synapse Link** from the features list.
+1. Select **Enable**. This process can take 1 to 5 minutes to complete.
- :::image type="content" source="./media/configure-synapse-link/find-synapse-link-feature.png" alt-text="Find Synapse Link feature":::
-
-1. Next it prompts you to enable Synapse Link on your account. Select **Enable**. This process can take 1 to 5 minutes to complete.
-
- :::image type="content" source="./media/configure-synapse-link/enable-synapse-link-feature.png" alt-text="Enable Synapse Link feature":::
+ :::image type="content" source="./media/configure-synapse-link/enable-synapse-link.png" alt-text="Screenshot showing how to enable Synapse Link feature.":::
1. Your account is now enabled to use Synapse Link. Next see how to create analytical store enabled containers to automatically start replicating your operational data from the transactional store to the analytical store.
Please note the following details when enabling Azure Synapse Link on your exist
### Azure portal
+#### New container
1. Sign in to the [Azure portal](https://portal.azure.com/) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). 1. Navigate to your Azure Cosmos DB account and open the **Data Explorer** tab.
Please note the following details when enabling Azure Synapse Link on your exist
1. After the container is created, verify that analytical store is enabled by selecting **Settings**, right below Documents in Data Explorer, and checking that the **Analytical Store Time to Live** option is turned on.
+#### Existing container
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account and open the **Azure Synapse Link** tab.
+
+1. Under the **Enable Azure Synapse Link for your containers** section, select the container.
+
+ :::image type="content" source="./media/configure-synapse-link/enable-synapse-link-existing-container.png" alt-text="Screenshot showing how to turn on analytical store for an Azure Cosmos DB existing container.":::
+
+1. After the container is enabled, verify that analytical store is enabled by selecting **Settings**, right below Documents in Data Explorer, and checking that the **Analytical Store Time to Live** option is turned on.
+ > [!NOTE] > You can also enable Synapse Link for your account using the **Power BI** and the **Synapse Link** pane, in the **Integrations** section of the left navigation menu.
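If you want to script the account-level step instead of using the portal, Synapse Link can also be turned on programmatically by enabling analytical storage for the account. The following is a minimal sketch, assuming the `azure-identity` and `azure-mgmt-cosmosdb` packages and placeholder subscription, resource group, and account names; it isn't part of the article's own samples.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountUpdateParameters

# Placeholder identifiers - replace with your own values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<cosmos-account-name>"

client = CosmosDBManagementClient(DefaultAzureCredential(), subscription_id)

# Enabling analytical storage at the account level turns on Azure Synapse Link.
poller = client.database_accounts.begin_update(
    resource_group,
    account_name,
    DatabaseAccountUpdateParameters(enable_analytical_storage=True),
)
poller.result()  # waits until the account update completes
```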
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
To create or update action groups, select **Manage action group** while you're c
Next, select **Add action group** and create the action group.
-Budget integration with action groups works for action groups that have enabled or disabled common alert schema. For more information on how to enable common alert schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)
+Budget integration with action groups works for action groups that have enabled or disabled common alert schema. For more information on how to enable common alert schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#enable-the-common-alert-schema)
## View budgets in the Azure mobile app
data-factory Choose The Right Integration Runtime Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/choose-the-right-integration-runtime-configuration.md
+
+ Title: Choose the right integration-runtime configuration for your scenario
+description: Some recommended architectures for each integration runtime.
++++++ Last updated : 12/14/2022++
+# Choose the right integration-runtime configuration for your scenario
+
+The integration runtime is a critical part of the infrastructure for the data integration solution provided by Azure Data Factory. When you design the solution, consider from the start how it fits into your existing network structure and connects to your data sources, along with performance, security, and cost.
+
+## Comparison of different types of integration runtimes
+
+In Azure Data Factory, there are three kinds of integration runtimes: the Azure integration runtime, the self-hosted integration runtime, and the Azure-SSIS integration runtime. For the Azure integration runtime, you can also enable a managed virtual network, which makes its architecture different from the global Azure integration runtime.
+
+This table lists the differences between the integration runtimes so that you can choose the appropriate one according to your needs. For the Azure-SSIS integration runtime, you can learn more in the article [Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md).
+
+| Feature | Azure integration runtime | Azure integration runtime with managed virtual network | Self-hosted integration runtime |
+| - | - | - | - |
+| Managed compute | Y | Y | N |
+| Auto-scale | Y | Y* | N |
+| Dataflow | Y | Y | N |
+| On-premises data access | N | Y** | Y |
+| Private Link/Private Endpoint | N | Y*** | Y |
+| Custom component/driver | N | N | Y |
+
+ \* When time-to-live (TTL) is enabled, the compute size of the integration runtime is reserved according to the configuration and can't be auto-scaled.
+
+ ** On-premises environments must be connected to Azure via Express Route or VPN. Custom components and drivers are not supported.
+
+ *** The private endpoints are managed by the Azure Data Factory service.
+
+It's important to choose an appropriate type of integration runtime. Not only must it be suitable for your existing architecture and data integration requirements, but you also need to consider how to meet growing business needs and any future increase in workload. There's no one-size-fits-all approach, but the following considerations can help you navigate the decision:
+
+1. What are the integration runtime and data store locations?<br>
+ The integration runtime location defines the location of its back-end compute, and where the data movement, activity dispatching and data transformation are performed. To obtain better performance and transmission efficiency, the integration runtime should be closer to the data source or sink.
+
+ - The Azure integration runtime automatically detects the most suitable location based on some rules (also known as auto-resolve). See details here: [Azure IR location](concepts-integration-runtime.md#azure-ir-location).
+ - The Azure integration runtime with a managed virtual network has the same region as your data factory. It can't be auto-resolved like the Azure integration runtime.
+ - The self-hosted integration runtime is located in the region of your local machines or Azure virtual machines.
+
+2. Is the data store publicly accessible?<br>
+ If the data store is publicly accessible, the differences between the integration runtime types are less significant. If the store is behind a firewall or in a private network, such as an on-premises network or a virtual network, the better choices are the Azure integration runtime with a managed virtual network or the self-hosted integration runtime.
+
+ - Some additional setup, such as a Private Link service and a load balancer, is needed when using the Azure integration runtime with a managed virtual network to access a data store behind a firewall or in a private network. You can refer to the tutorial [Access on-premises SQL Server from Data Factory Managed VNet using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md) as an example. If the data store is in an on-premises environment, the on-premises network must be connected to Azure via Express Route or an S2S VPN.
+ - The self-hosted integration runtime is more flexible and doesn't require additional settings, Express Route, or VPN, but you need to provision and maintain the machine yourself.
+ - You can also add the public IP addresses of the Azure integration runtime to the allowlist of your firewall and allow it to access the data store, but it's not a desirable solution in highly secure production environments.
+
+3. What level of security do you require during data transmission?<br>
+ If you need to process highly confidential data, you'll want to defend against threats such as man-in-the-middle attacks during data transmission. In that case, you can use a private endpoint and Private Link to help ensure data security.
+
+ - You can create managed private endpoints to your data stores when using the Azure integration runtime with a managed virtual network. The private endpoints are maintained by the Azure Data Factory service within the managed virtual network.
+ - You can also create private endpoints in your virtual network and the self-hosted integration runtime can leverage them to access data stores.
+ - The Azure integration runtime doesn't support Private Endpoint and Private Link.
+
+4. What level of maintenance are you able to provide?<br>
+ Maintaining infrastructure, servers, and equipment is one of the important tasks of the IT department of an enterprise. It usually takes a lot of time and effort.
+
+ - You don't need to worry about maintenance such as updates, patches, and versions for the Azure integration runtime or the Azure integration runtime with a managed virtual network. The Azure Data Factory service takes care of all the maintenance effort.
+ - Because the self-hosted integration runtime is installed on customer machines, maintenance must be handled by the end user. You can, however, enable auto-update to automatically get the latest version of the self-hosted integration runtime whenever there's an update. To learn how to enable auto-update and manage version control of the self-hosted integration runtime, refer to the article [Self-hosted integration runtime auto-update and expire notification](self-hosted-integration-runtime-auto-update.md). We also provide a diagnostic tool for the self-hosted integration runtime to health-check some common issues. To learn more about the diagnostic tool, refer to the article [Self-hosted integration runtime diagnostic tool](self-hosted-integration-runtime-diagnostic-tool.md). In addition, we recommend using Azure Monitor and Azure Log Analytics to collect self-hosted integration runtime data and enable single-pane-of-glass monitoring. For instructions, see the article [Configure the self-hosted integration runtime for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md).
+
+5. What concurrency requirements do you have?<br>
+ When processing large-scale data, such as a large-scale data migration, you want to maximize processing efficiency and speed. Concurrency is often a major requirement for data integration.
+
+ - The Azure integration runtime has the highest concurrency support among all integration runtime types. A data integration unit (DIU) is the unit of capability in Azure Data Factory; you can select the desired number of DIUs for, for example, a Copy activity, and multiple activities can run at the same time within that scope. Different region groups have different upper limits. Learn about the details of these limits in the article [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
+ - The Azure integration runtime with a managed virtual network has a similar mechanism to the Azure integration runtime, but due to some architectural constraints, the concurrency it can support is lower than that of the Azure integration runtime.
+ - The concurrent activities that the self-hosted integration runtime can run depend on the machine size and cluster size. You can choose a larger machine or use more self-hosted integration nodes in the cluster if you need greater concurrency.
+
+6. Do you require any specific features?<br>
+ There are some functional differences between the types of integration runtimes.
+
+ - Dataflow is supported by the Azure integration runtime and the Azure integration runtime with a managed virtual network. However, you can't run Dataflow using the self-hosted integration runtime.
+ - If you need to install custom components, such as ODBC drivers, a JVM, or a SQL Server certificate, the self-hosted integration runtime is your only option. Custom components are not supported by the Azure integration runtime or the Azure integration runtime with a managed virtual network.
+
+## Architecture for integration runtime
+
+Based on the characteristics of each integration runtime, different architectures are generally required to meet the business needs of data integration. The following are some typical architectures that can be used as a reference.
+
+### Azure integration runtime
+
+The Azure integration runtime is a fully managed, auto-scaled compute that you can use to move data from Azure or non-Azure data sources.
++
+1. The traffic from the Azure integration runtime to data stores goes through the public network.
+1. We provide a range of static public IP addresses for the Azure integration runtime and these IP addresses can be added to the allowlist of the target data store firewall. To learn more about how to get public IP addresses of the Azure Integration runtime, refer to the article [Azure Integration Runtime IP addresses](azure-integration-runtime-ip-addresses.md).
+1. The Azure integration runtime can be auto-resolved according to the region of the data source and data sink. Or you can choose a specific region. We recommend you choose the region closest to your data source or sink, which can provide better execution performance. Learn more about performance considerations in the article [Troubleshoot copy activity on Azure IR](copy-activity-performance-troubleshooting.md#troubleshoot-copy-activity-on-azure-ir).
+
+### Azure integration runtime with managed virtual network
+
+When using the Azure integration runtime with a managed virtual network, you should use managed private endpoints to connect your data sources to ensure data security during transmission. With some additional settings such as Private Link Service and Load Balancer, managed private endpoints can also be used to access on-premises data sources.
++
+1. A managed private endpoint can't be reused across different environments. You need to create a set of managed private endpoints for each environment. For all data sources supported by managed private endpoints, refer to the article [Supported data sources and services](managed-virtual-network-private-endpoint.md#supported-data-sources-and-services).
+1. You can also use managed private endpoints for connections to external compute resources that you want to orchestrate such as Azure Databricks and Azure Functions. To see the full list of supported external compute resources, refer to the article [Supported data sources and services](managed-virtual-network-private-endpoint.md#supported-data-sources-and-services).
+1. Managed virtual network is managed by the Azure Data Factory service. VNET peering is not supported between a managed virtual network and a customer virtual network.
+1. Customers can't directly change configurations, such as NSG rules, on a managed virtual network.
+1. If any property of a managed private endpoint is different between environments, you can override it by parameterizing that property and providing the respective value during deployment. See details in the article [Best practices for CI/CD](continuous-integration-delivery.md#best-practices-for-cicd).
+
+### Self-hosted integration runtime
+
+To prevent data from different environments from interfering with each other and ensure the security of the production environment, we need to create a corresponding self-hosted integration runtime for each environment. This ensures sufficient isolation between different environments.
++
+Since the self-hosted integration runtime runs on a customer-managed machine, you can reduce cost, maintenance, and upgrade efforts by sharing a self-hosted integration runtime across different projects in the same environment. For details on self-hosted integration runtime sharing, refer to the article [Create a shared self-hosted integration runtime in Azure Data Factory](create-shared-self-hosted-integration-runtime-powershell.md). At the same time, to make the data more secure during transmission, you can use a private link to connect to the data sources and key vault, and for the communication between the self-hosted integration runtime and the Azure Data Factory service.
++
+1. Express Route is not mandatory. Without Express Route, the data will not reach the sink through private networks such as a virtual network or a private link, but through the public network.
+1. If the on-premises network is connected to the Azure virtual network via Express Route or VPN, then the self-hosted integration runtime can be installed on virtual machines in a Hub VNET.
+1. The hub-spoke virtual network architecture can be used not only for different projects but also for different environments (Prod, QA and Dev).
+1. The self-hosted integration runtime can be shared with multiple data factories. The primary data factory references it as a shared self-hosted integration runtime and others refer to it as a linked self-hosted integration runtime. A physical self-hosted integration runtime can have multiple nodes in a cluster. Communication only happens between the primary self-hosted integration runtime and primary node, with work being distributed to secondary nodes from the primary node.
+1. Credentials of on-premises data stores can be stored either in the local machine or an Azure Key Vault. Azure Key Vault is highly recommended.
+1. Communication between the self-hosted integration runtime and the data factory can go through a private link. But currently, interactive authoring via Azure Relay and automatic updates to the latest version from the download center don't support private link. The traffic goes through the firewall of the on-premises environment. For more details, refer to the article [Azure Private Link for Azure Data Factory](data-factory-private-link.md).
+1. The private link is only required for the primary data factory. All traffic goes through primary data factory, then to other data factories.
+1. The self-hosted integration runtime is expected to have the same name across all stages of CI/CD. You can consider using a dedicated third factory just to contain the shared self-hosted integration runtimes, and use linked self-hosted integration runtimes in the various production stages. For more details, refer to the article [Continuous integration and delivery](continuous-integration-delivery.md).
+1. You can control how the traffic goes to the download center and Azure Relay using configurations of your on-premises network and Express Route, either through an on-premises proxy or hub virtual network. Make sure the traffic is allowed by proxy or NSG rules.
+1. If you want to secure communication between self-hosted integration runtime nodes, you can enable remote access from the intranet with a TLS/SSL certificate. For more details, refer to the article [Enable remote access from intranet with TLS/SSL certificate (Advanced)](tutorial-enable-remote-access-intranet-tls-ssl-certificate.md).
data-factory Solution Template Copy New Files Last Modified Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-new-files-last-modified-date.md
The template defines six parameters:
## How to use this solution template
-1. Go to template **Copy new files only by LastModifiedDate**. Create a **New** connection to your source storage store. The source storage store is where you want to copy files from.
+1. Go to template **Copy new files only by LastModifiedDate**. Create a **New** connection to your destination store. The destination store is where you want to copy files to.
:::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-1.png" alt-text="Create a new connection to the source":::
-2. Create a **New** connection to your destination store. The destination store is where you want to copy files to.
+2. Create a **New** connection to your source storage store. The source storage store is where you want to copy files from.
:::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-3.png" alt-text="Create a new connection to the destination":::
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
Title: Microsoft Defender for Cloud's asset inventory
+ Title: Using the asset inventory to view your security posture with Microsoft Defender for Cloud
description: Learn about Microsoft Defender for Cloud's asset management experience providing full visibility over all your Defender for Cloud monitored resources. Previously updated : 11/14/2022 Last updated : 01/03/2023 # Use asset inventory to manage your resources' security posture
-The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud.
-
-Defender for Cloud periodically analyzes the security state of resources connected to your subscriptions to identify potential security vulnerabilities. It then provides you with recommendations on how to remediate those vulnerabilities.
-
-When any resource has outstanding recommendations, they'll appear in the inventory.
+The asset inventory page of Microsoft Defender for Cloud shows the [security posture](concept-cloud-security-posture-management.md) of the resources you've connected to Defender for Cloud. Defender for Cloud periodically analyzes the security state of resources connected to your subscriptions to identify potential security issues and provides you with active recommendations. Active recommendations are recommendations that can be resolved to improve your security posture.
Use this view and its filters to address such questions as: -- Which of my subscriptions with enhanced security features enabled have outstanding recommendations?
+- Which of my subscriptions with [Defender plans](defender-for-cloud-introduction.md#cwpidentify-unique-workload-security-requirements) enabled have outstanding recommendations?
- Which of my machines with the tag 'Production' are missing the Log Analytics agent? - How many of my machines tagged with a specific tag have outstanding recommendations? - Which machines in a specific resource group have a known vulnerability (using a CVE number)?
-The asset management possibilities for this tool are substantial and continue to grow.
-
-> [!TIP]
-> The security recommendations on the asset inventory page are the same as those on the **Recommendations** page, but here they're shown according to the affected resource. For information about how to resolve recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
+The security recommendations on the asset inventory page are also shown in the **Recommendations** page, but here they're shown according to the affected resource. Learn more about [implementing security recommendations](review-security-recommendations.md).
## Availability
The asset management possibilities for this tool are substantial and continue to
|Release state:|General availability (GA)| |Pricing:|Free<br> Some features of the inventory page, such as the [software inventory](#access-a-software-inventory) require paid solutions to be in-place| |Required roles and permissions:|All users|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) <br> <br> Software inventory is not currently supported in national clouds.|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) <br> <br> Software inventory isn't currently supported in national clouds.|
## What are the key features of asset inventory?
The inventory page provides the following tools:
Before you define any filters, a prominent strip of values at the top of the inventory view shows: - **Total resources**: The total number of resources connected to Defender for Cloud.-- **Unhealthy resources**: Resources with active security recommendations. [Learn more about security recommendations](review-security-recommendations.md).
+- **Unhealthy resources**: Resources with active security recommendations that you can implement. [Learn more about implementing security recommendations](review-security-recommendations.md).
- **Unmonitored resources**: Resources with agent monitoring issues - they have the Log Analytics agent deployed, but the agent isn't sending data or has other health issues.-- **Unregistered subscriptions**: Any subscription in the selected scope that haven't yet been connected to Microsoft Defender for Cloud.
+- **Unregistered subscriptions**: Any subscription in the selected scope that hasn't yet been connected to Microsoft Defender for Cloud.
### 2 - Filters
-The multiple filters at the top of the page provide a way to quickly refine the list of resources according to the question you're trying to answer. For example, if you wanted to answer the question *Which of my machines with the tag 'Production' are missing the Log Analytics agent?* you could combine the **Agent monitoring** filter with the **Tags** filter.
+The multiple filters at the top of the page provide a way to quickly refine the list of resources according to the question you're trying to answer. For example, if you want to know which of your machines with the tag 'Production' are missing the Log Analytics agent, you can filter the list for **Agent monitoring**:"Not installed" and **Tags**:"Production".
As soon as you've applied filters, the summary values are updated to relate to the query results.
As soon as you've applied filters, the summary values are updated to relate to t
## How does asset inventory work?
-Asset inventory utilizes [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that provides the ability to query Defender for Cloud's security posture data across multiple subscriptions.
+Asset inventory utilizes [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that lets you query Defender for Cloud's security posture data across multiple subscriptions.
ARG is designed to provide efficient resource exploration with the ability to query at scale.
-Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset inventory can quickly produce deep insights by cross-referencing Defender for Cloud data with other resource properties.
+You can use [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) in the asset inventory to quickly produce deep insights by cross-referencing Defender for Cloud data with other resource properties.
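As an illustration, the same kind of cross-referencing query can also be run outside the portal with the Azure Resource Graph SDK for Python. The sketch below is assumption-laden (the `securityresources` query, the package choice, and the placeholder subscription ID aren't from this article); it lists resources that currently have unhealthy Defender for Cloud assessments.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Assessments with an "Unhealthy" status correspond to the active
# recommendations that the inventory page surfaces for each resource.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
    securityresources
    | where type == "microsoft.security/assessments"
    | where properties.status.code == "Unhealthy"
    | project recommendation = tostring(properties.displayName), assessmentId = id
    | take 20
    """,
)

result = client.resources(request)
for row in result.data:
    print(row)
```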
## How to use asset inventory 1. From Defender for Cloud's sidebar, select **Inventory**.
-1. Use the **Filter by name** box to display a specific resource, or use the filters as described below.
-
-1. Select the relevant options in the filters to create the specific query you want to perform.
+1. Use the **Filter by name** box to display a specific resource, or use the filters to focus on specific resources.
By default, the resources are sorted by the number of active security recommendations.
Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
1. <a id="onoffpartial"></a>To use the **Defender for Cloud** filter, select one or more options (Off, On, or Partial):
- - **Off** - Resources that aren't protected by a Microsoft Defender plan. You can right-click on any of these and upgrade them:
+ - **Off** - Resources not protected by a Microsoft Defender plan. You can right-click on the resources and upgrade them:
:::image type="content" source="./media/asset-inventory/upgrade-resource-inventory.png" alt-text="Upgrade a resource to be protected by the relevant Microsoft Defender plan via right-click." lightbox="./media/asset-inventory/upgrade-resource-inventory.png":::
- - **On** - Resources that are protected by a Microsoft Defender plan
- - **Partial** - This applies to **subscriptions** that have some but not all of the Microsoft Defender plans disabled. For example, the following subscription has seven Microsoft Defender plans disabled.
+ - **On** - Resources protected by a Microsoft Defender plan
+ - **Partial** - **Subscriptions** with some but not all of the Microsoft Defender plans disabled. For example, the following subscription has seven Microsoft Defender plans disabled.
:::image type="content" source="./media/asset-inventory/pricing-tier-partial.png" alt-text="Subscription partially protected by Microsoft Defender plans.":::
If you've already enabled the integration with Microsoft Defender for Endpoint a
:::image type="content" source="media/asset-inventory/software-inventory-filters.gif" alt-text="If you've enabled the threat and vulnerability solution, Defender for Cloud's asset inventory offers a filter to select resources by their installed software."::: > [!NOTE]
-> The "Blank" option shows machines without Microsoft Defender for Endpoint (or without Microsoft Defender for Servers).
+> The "Blank" option shows machines without Microsoft Defender for Endpoint or without Microsoft Defender for Servers.
-As well as the filters in the asset inventory page, you can explore the software inventory data from Azure Resource Graph Explorer.
+Besides the filters in the asset inventory page, you can explore the software inventory data from Azure Resource Graph Explorer.
Examples of using Azure Resource Graph Explorer to access and explore software inventory data:
Examples of using Azure Resource Graph Explorer to access and explore software i
## FAQ - Inventory
-### Why aren't all of my subscriptions, machines, storage accounts, etc. shown?
+### Why aren't all of my resources shown, such as subscriptions, machines, storage accounts?
-The inventory view lists your Defender for Cloud connected resources from a Cloud Security Posture Management (CSPM) perspective. The filters don't return every resource in your environment; only the ones with outstanding (or 'active') recommendations.
+The inventory view lists your Defender for Cloud connected resources from a Cloud Security Posture Management (CSPM) perspective. The filters show only the resources with active recommendations.
-For example, the following screenshot shows a user with access to 8 subscriptions but only 7 currently have recommendations. So when they filter by **Resource type = Subscriptions**, only those 7 subscriptions with active recommendations appear in the inventory:
+For example, if you have access to eight subscriptions but only seven currently have recommendations, filtering by **Resource type = Subscriptions** shows only the seven subscriptions with active recommendations:
### Why do some of my resources show blank values in the Defender for Cloud or monitoring agent columns?
-Not all Defender for Cloud monitored resources have agents. For example, Azure Storage accounts or PaaS resources such as disks, Logic Apps, Data Lake Analysis, and Event Hub don't need agents to be monitored by Defender for Cloud.
+Not all Defender for Cloud monitored resources require agents. For example, Defender for Cloud doesn't require agents to monitor Azure Storage accounts or PaaS resources, such as disks, Logic Apps, Data Lake Analysis, and Event Hubs.
When pricing or agent monitoring isn't relevant for a resource, nothing will be shown in those columns of inventory.
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
This article describes how to configure continuous export to Log Analytics works
|-|:-| |Release state:|General availability (GA)| |Pricing:|Free|
-|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the Azure Policy 'DeployIfNotExist' policies described below, you'll also need permissions for assigning policies</li><li>To export data to Event Hubs, you'll need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](../azure-monitor/insights/solutions.md)</li></ul></li></ul>|
+|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the [Azure Policy 'DeployIfNotExist' policies](#configure-continuous-export-at-scale-using-the-supplied-policies), you'll also need permissions for assigning policies</li><li>To export data to Event Hubs, you'll need Write permission on the Event Hubs Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](../azure-monitor/insights/solutions.md)</li></ul></li></ul>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| ## What data types can be exported?
Continuous export can export the following data types whenever they change:
## Set up a continuous export
-You can configure continuous export from the Microsoft Defender for Cloud pages in Azure portal, via the REST API, or at scale using the supplied Azure Policy templates. Select the appropriate tab below for details of each.
+You can configure continuous export from the Microsoft Defender for Cloud pages in Azure portal, via the REST API, or at scale using the supplied Azure Policy templates.
### [**Use the Azure portal**](#tab/azure-portal) ### Configure continuous export from the Defender for Cloud pages in Azure portal
-The steps below are necessary whether you're setting up a continuous export to Log Analytics or Azure Event Hubs.
+If you're setting up a continuous export to Log Analytics or Azure Event Hubs:
1. From Defender for Cloud's menu, open **Environment settings**.
The steps below are necessary whether you're setting up a continuous export to L
:::image type="content" source="./media/continuous-export/include-security-findings-toggle.png" alt-text="Include security findings toggle in continuous export configuration." :::
-1. From the "Export target" area, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example on a Central Event Hub instance or a central Log Analytics workspace).
+1. From the "Export target" area, choose where you'd like the data saved. Data can be saved in a target of a different subscription (for example, on a Central Event Hubs instance or a central Log Analytics workspace).
- You can also send the data to an [Event hub or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
+ You can also send the data to an [Event hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
1. Select **Save**.
The steps below are necessary whether you're setting up a continuous export to L
Continuous export can be configured and managed via the Microsoft Defender for Cloud [automations API](/rest/api/defenderforcloud/automations). Use this API to create or update rules for exporting to any of the following possible destinations: -- Azure Event Hub
+- Azure Event Hubs
- Log Analytics workspace - Azure Logic Apps
-You can also send the data to an [Event hub or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
+You can also send the data to an [Event Hubs or Log Analytics workspace in a different tenant](#export-data-to-an-azure-event-hub-or-log-analytics-workspace-in-another-tenant).
Here are some examples of options that you can only use in the API:
Here are some examples of options that you can only use in the API:
Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
-To deploy your continuous export configurations across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies described below to create and configure continuous export procedures.
+To deploy your continuous export configurations across your organization, use the supplied Azure Policy 'DeployIfNotExist' policies to create and configure continuous export procedures.
**To implement these policies**
-1. From the table below, select the policy you want to apply:
+1. Select the policy you want to apply from this table:
|Goal |Policy |Policy ID | ||||
Continuous export can be helpful in to prepare for BCDR scenarios where the targ
Learn more in [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md).
-### What is the minimum SAS policy permissions required when exporting data to Azure Event Hub?
+### What are the minimum SAS policy permissions required when exporting data to Azure Event Hubs?
**Send** is the minimum SAS policy permission required. For step-by-step instructions, see **Step 1. Create an Event Hubs namespace and event hub with send permissions** in [this article](./export-to-splunk-or-qradar.md#step-1-create-an-event-hubs-namespace-and-event-hub-with-send-permissions).
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Last updated 10/04/2022
Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises: - [**Defender for Cloud secure score**](secure-score-security-controls.md) **continually assesses** your security posture so you can track new security opportunities and precisely report on the progress of your security efforts. - [**Defender for Cloud recommendations**](security-policy-concept.md) **secures** your workloads with step-by-step actions that protect your workloads from known security risks.
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defen
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 10/24/2022 Last updated : 01/03/2023
The triggers for an image scan are:
- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository. -- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans because you're billed once per image.
- **On import** - Azure Container Registry has import tools to bring images to your registry from an existing registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
For a list of the types of images and container registries supported by Microsof
The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
-1. Select a specific registry to see the repositories within it that have vulnerable repositories.
+1. Select a specific registry to see the repositories in it that have vulnerable images.
![Select a registry.](media/monitor-container-security/acr-finding-select-registry.png) The registry details page opens with the list of affected repositories.
-1. Select a specific repository to see the repositories within it that have vulnerable images.
+1. Select a specific repository to see the images in it that have vulnerabilities.
![Select a repository.](media/monitor-container-security/acr-finding-select-repository.png)
Some images may reuse tags from an image that was already scanned. For example,
### Does Defender for Containers scan images in Microsoft Container Registry? Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only.
-Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry are not supported.
+Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry aren't supported.
Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](/azure/container-registry/container-registry-import-images?tabs=azure-cli). ## Next steps
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
Last updated 11/06/2022+ # Review data residency and workspace design
defender-for-iot Sensor Inventory Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/api/sensor-inventory-apis.md
This section lists the supported fields for the [protocols](#protocol) object in
| Name | Type | Nullable / Not nullable | List of values | |--|--|--|--| | **id** | Numeric. Defines the protocol's internal ID. | Not nullable | - |
-|<a name="protocol-name"></a>**name** |String. Defines the device name. |Not nullable | For more information, see below. <br><br>**Note**: To extend Defender for IoT support to proprietary protocols, create a Horizon plugin. For more information, see [Extend support to proprietary protocols](../overview.md#extend-support-to-proprietary-protocols).|
+|<a name="protocol-name"></a>**name** |String. Defines the protocol name. |Not nullable | For more information, see below. <br><br>**Note**: To extend Defender for IoT support to proprietary protocols, create a Horizon plugin. For more information, see [Extend support to proprietary protocols](../overview.md#extend-support-to-proprietary-ot-protocols).|
|**ipAddresses** | JSON array of strings of protocol IP addresses. |Not nullable | - | The following values are supported as [protocol names](#protocol-name) out-of-the-box:
defender-for-iot Dell Poweredge R350 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md
+
+ Title: Dell PowerEdge R350 for OT monitoring - Microsoft Defender for IoT
+description: Learn about the Dell PowerEdge R350 appliance's configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 11/30/2022+++
+# Dell PowerEdge R350
+
+This article describes the Dell PowerEdge R350 appliance, supported for OT sensors in an enterprise deployment.
+The Dell PowerEdge R350 is also available for the on-premises management console.
+
+|Appliance characteristic | Description|
+|||
+|**Hardware profile** | E1800|
+|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000<br>Up to 8x RJ45 monitoring ports or 6x SFP (OPT) |
+|**Physical Specifications** | Mounting: 1U<br>Dimensions (H x W x D) 1.70 in x 17.09 in x 22.18 in<br>Dimensions (H x W x D) 4.28 cm x 43.4 cm x 56.3 cm|
+|**Status** | Supported, available as a pre-configured appliance|
+
+The following image shows a view of the Dell PowerEdge R350 front panel:
++
+The following image shows a view of the Dell PowerEdge R350 back panel:
++
+## Specifications
+
+|Component| Technical specifications|
+|:-|:-|
+|Chassis| 1U rack server|
+|Dimensions| (H x W x D) 1.70 in x 17.09 in x 22.18 in, 4.28 cm x 43.4 cm x 56.3 cm|
+|Weight| Max 28.96 lb/13.14 Kg|
+|Processor| Intel Xeon E-2334 3.4 GHz <br>8M Cache<br> 4C/8T, Turbo (65W), 3200 MT/s, XE Only|
+|Memory|32 GB = 2x 16 GB 3200MT/s DDR4 ECC UDIMM|
+|Storage| 4x 1 TB Hard Drive SATA 6 Gbps 7.2K 512n 3.5in Hot-Plug with PERC H755 Controller Card - RAID 10|
+|Network controller|On-board: Broadcom 5720 Dual Port 1 Gb On-Board LOM <br>On-board LOM: iDRAC9, Enterprise 15G<br>External: Broadcom 5719 Quad Port 1 GbE BASE-T Adapter, PCIe Low Profile|
+|Management|iDRAC9 Enterprise|
+|Device access| Two rear USB 3.0|
+|One front| USB 3.0|
+|Power| Dual, Hot-Plug, Redundant Power Supply (1+1), 600W|
+|Rack support| ReadyRails Sliding Rails With Cable Management Arm|
+
+## Dell PowerEdge R350 - Bill of Materials
+
+|Quantity|PN|Description|
+|-|-|-|
+|1| 210-BBTW | OEM R350XE Server |
+|1| 990-10090 | EX-Works |
+|1| 412-AAPW | Heatsink for 80W or less CPU |
+|1| 370-AAIP | Performance Optimized |
+|1| 370-AGNY | 3200MT/s UDIMM |
+|2| 370-AGQU | 16 GB UDIMM, 3200MT/s, ECC |
+|1| 384-BBBH | Power Saving BIOS Settings |
+|1| 800-BBDM | UEFI BIOS Boot Mode with GPT Partition |
+|2| 450-AKMP | Dual, Hot-Plug, Redundant Power Supply (1+1), 600W |
+|1| 450-AADY | C13 to C14, PDU Style, 10 AMP, 6.5 Feet (2m), Power Cord |
+|1| 330-BBWS | Riser Config 0, 1 x8, 1 x16 slots |
+|1| 384-BCYX | OEM R350 Motherboard with Broadcom 5720 Dual Port 1 Gb On-Board LOM |
+|1| 385-BBQV | iDRAC9, Enterprise 15G |
+|1| 542-BBBP | On-Board LOM |
+|1| 470-AFBU | BOSS Blank |
+|1| 379-BCRF | iDRAC, Legacy Password |
+|1| 379-BCQV | iDRAC Group Manager, Enabled |
+|1| 611-BBBF | No Operating System |
+|1| 605-BBFN | No Media Required |
+|1| 770-BDEL | ReadyRails Sliding Rails With Cable Management Arm |
+|1| 709-BBIJ | Parts Only Warranty 15 Months |
+|1| 865-BBPG | ProSupport and Next Business Day Onsite Service Initial, 15 Month(s) |
+|1| 338-CCOZ | Intel Xeon E-2334 3.4 GHz, 8M Cache, 4C/8T, Turbo (65W), 3200 MT/s, XE Only |
+|1| 325-BEIF | Brand/Bezel, Dell Branded, PowerEdge R350XE |
+|1| 389-ECFF | PowerEdge R350 CE and CCC Marking |
+|1| 321-BGVQ | 3.5" Chassis with up to 4 Hot Plug Hard Drives |
+|1| 750-ADOY | Standard Fan |
+|1| 429-ABHN | DVD +/-RW, SATA, Internal for Hot Plug Chassis |
+|1| 405-ABBT | PERC H755 Controller Card |
+|1| 461-AADZ | No Trusted Platform Module |
+|1| 683-11870 | No Installation Service Selected (Contact Sales Rep for more details) |
+|1| 865-BBPF | ProSupport and Next Business Day Onsite Service Extension, 24 Month(s) |
+|4| 400-BLLH | 1 TB Hard Drive SATA 6 Gbps 7.2K 512n 3.5in Hot-Plug |
+|1| 540-BBDF | Broadcom 5719 Quad Port 1 GbE BASE-T Adapter, PCIe Low Profile |
+|1| 780-BCDQ | RAID 10 |
+
+## Optional Expansion Modules
+
+Optional modules for additional monitoring ports can be installed:
+
+|Location |Type |Specifications |
+|-|-|-|
+| PCIe Expansion <br>Slot 1 or 2 | Quad Port Ethernet | 540-BBDV<br>Intel QP i350 4 x 1Gbe Copper, PCIe Low Profile |
+| PCIe Expansion <br>Slot 1 or 2 | Quad Port Ethernet | 540-BBDF<br>Broadcom 5719 Quad Port 1GbE BASE-T Adapter, PCIe Low Profile |
+| PCIe Expansion <br>Slot 1 or 2 | Dual Port Ethernet | 540-BCSE<br>Intel X710-T2L DP 2 x 10Gbe Copper, PCIe Low Profile |
+| PCIe Expansion <br>Slot 1 or 2 | Dual Port SFP+ | 540-BBML<br>Intel X710 DP 2 x 10Gbe SFP+, PCIe Low Profile |
+| PCIe Expansion <br>Slot 1 or 2 | Dual Port SFP+ | 540-BBVI<br>Broadcom 57412 Dual Port 10GbE SFP+ Adapter, PCIe Low Profile |
+| PCIe Expansion <br>Slot 1 or 2 | SFP+ Transceiver | 407-BCBN or 407-BBOU - SFP+ 10G SR |
+| PCIe Expansion <br>Slot 1 or 2 | SFP+ Transceiver | 407-BBOP - SFP+ 10G LR |
+| PCIe Expansion <br>Slot 1 or 2 | SFP+ Transceiver | 407-BBOS - SFP+ 1G COPPER |
+| PCIe Expansion <br>Slot 1 or 2 | INTEL X710 SFP+ Transceiver | 407-BBVJ - SFP+ 1G/10G SR (INTEL ONLY) |
+
+## Dell PowerEdge R350 installation
+
+This section describes how to install Defender for IoT software on the Dell PowerEdge R350 appliance.
+
+Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Prerequisites
+
+To install the Dell PowerEdge R350 appliance, you'll need:
+
+- An Enterprise license for Dell Remote Access Controller (iDRAC)
+
+- A BIOS configuration XML
+
+### Configure the Dell BIOS
+
+ An integrated iDRAC manages the Dell appliance with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
+
+To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
+
+When the connection is established, the BIOS is configurable.
+
+**To configure the iDRAC IP address**:
+
+1. Power up the sensor.
+
+1. If the OS is already installed, select the F2 key to enter the BIOS configuration.
+
+1. Select **iDRAC Settings**.
+
+1. Select **Network**.
+
+ > [!NOTE]
+ > During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you can change these definitions.
+
+1. Change the static IPv4 address to **10.100.100.250**.
+
+1. Change the static subnet mask to **255.255.255.0**.
+
+ :::image type="content" source="../media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot that shows the static subnet mask in iDRAC settings.":::
+
+1. Select **Back** > **Finish**.
+
+**To configure the Dell BIOS**:
+
+This procedure describes how to update the Dell PowerEdge R350 configuration for your OT deployment.
+
+Configure the appliance BIOS only if you didn't purchase your appliance from Arrow, or if you have an appliance but don't have access to the XML configuration file.
+
+1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
+
+ - If the appliance isn't a Defender for IoT appliance, open a browser and go to the IP address configured beforehand. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
+
+ - If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
+
+1. After you access the BIOS, go to **Device Settings**.
+
+1. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H755 Adapter\> Configuration Utility**.
+
+1. Select **Configuration Management**.
+
+1. Select **Create Virtual Disk**.
+
+1. In the **Select RAID Level** field, select **RAID10**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
+
+1. Select **Check All**, and then select **Apply Changes**.
+
+1. Select **Ok**.
+
+1. Scroll down and select **Create Virtual Disk**.
+
+1. Select the **Confirm** check box and select **Yes**.
+
+1. Select **OK**.
+
+1. Return to the main screen and select **System BIOS**.
+
+1. Select **Boot Settings**.
+
+1. For the **Boot Mode** option, select **BIOS**.
+
+1. Select **Back**, and then select **Finish** to exit the BIOS settings.
+
+### Install Defender for IoT software on the Dell PowerEdge R350
+
+This procedure describes how to install Defender for IoT software on the Dell PowerEdge R350.
+
+The installation process takes about 20 minutes. After the installation, the system restarts several times.
+
+**To install the software**:
+
+1. Verify that the version media is mounted to the appliance in one of the following ways:
+
+ - Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+ - Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
+
+1. In the **Map CD/DVD** section, select **Choose File**.
+
+1. Choose the ISO image file for this version from the dialog box that opens.
+
+1. Select the **Map Device** button.
+
+ :::image type="content" source="../media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot that shows a mapped device.":::
+
+1. The media is mounted. Select **Close**.
+
+1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Console Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which will start the Ctrl+Alt+Delete sequence.
+
+1. Continue by installing OT sensor or on-premises management software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
Before you begin the installation, make sure you have the following items:
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md). -- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal)
+- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal).
Make sure the hypervisor is running.
This procedure describes how to create a virtual machine for your on-premises ma
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-ot-monitoring-software).
+1. Continue with the [generic procedure for installing on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md).
## Next steps
Then, use any of the following procedures to continue:
- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors) - [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Install Microsoft Defender for IoT on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md)
defender-for-iot Virtual Management Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-vmware.md
The on-premises management console supports both VMware and Hyper-V deployment o
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md). -- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal)
+- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal).
Make sure the hypervisor is running.
This procedure describes how to create a virtual machine for your on-premises ma
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-ot-monitoring-software).
-
+1. Continue with the [generic procedure for installing on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md).
## Next steps
Then, use any of the following procedures to continue:
- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors) - [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Install Microsoft Defender for IoT on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md)
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
The on-premises management console supports both VMware and Hyper-V deployment o
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md). -- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal).
+- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal).
Make sure the hypervisor is running.
This procedure describes how to create a virtual machine by using Hyper-V.
1. Select **Specify Generation** > **Generation 1**.
-1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), in standard RAM denomination (eg. 8192, 16384, 32768). Do not enable **Dyanmic Memory**.
+1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), in standard RAM denominations (for example, 8192, 16384, or 32768). Do not enable **Dynamic Memory**.
1. Configure the network adapter according to your server network topology. Under the **Hardware Acceleration** blade, disable **Virtual Machine Queue** for the monitoring (SPAN) network interface.
This procedure describes how to create a virtual machine by using Hyper-V.
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-monitoring-software).
+1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md).
Then, use any of the following procedures to continue:
- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors) - [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Install OT monitoring software on OT sensors](../ot-deploy/install-software-ot-sensor.md)
defender-for-iot Virtual Sensor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-vmware.md
Before you begin the installation, make sure you have the following items:
- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md). -- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal).
+- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal).
- Traffic mirroring configured on your vSwitch. For more information, see [Configure traffic mirroring with an ESXi vSwitch](../traffic-mirroring/configure-mirror-esxi.md).
This procedure describes how to create a virtual machine by using ESXi.
The VM will start from the ISO image, and the language selection screen will appear.
-1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-monitoring-software).
+1. Continue with the [generic procedure for installing sensor software](../ot-deploy/install-software-ot-sensor.md).
## Next steps
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
OT network sensors connect to Azure to provide data about detected devices, aler
The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide: -- **Improved security**, without additional security configurations. Connect to Azure using specific and secure firewall rules](how-to-set-up-your-network#sensor-access-to-azure-portal.md), without the need for any wildcards.
+- **Improved security**, without additional security configurations. [Connect to Azure using specific and secure endpoints](how-to-set-up-your-network.md#sensor-access-to-azure-portal), without the need for any wildcards.
- **Encryption**, Transport Layer Security (TLS1.2/AES-256) provides encrypted communication between the sensor and Azure resources.
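For example, a minimal sketch for spot-checking outbound TLS 1.2 connectivity from a sensor's CLI, assuming `openssl` is available on the appliance; the endpoint shown is a placeholder for one of the specific Azure endpoints required by your connection method:

```bash
# Placeholder - replace with a specific Azure endpoint required by your connection method
ENDPOINT="<your-azure-endpoint>"

# Negotiate a TLS 1.2 session on port 443 and print a brief summary
openssl s_client -connect "${ENDPOINT}:443" -tls1_2 -brief </dev/null
```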
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Title: System architecture for OT monitoring - Microsoft Defender for IoT description: Learn about the Microsoft Defender for IoT system architecture and data flow. Previously updated : 03/24/2022 Last updated : 12/25/2022 # System architecture for OT system monitoring The Microsoft Defender for IoT system is built to provide broad coverage and visibility from diverse data sources.
-The following image shows how data can stream into Defender for IoT from network sensors and partner sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
+The following image shows how data can stream into Defender for IoT from network sensors and third-party sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
:::image type="content" source="media/architecture/system-architecture.png" alt-text="Diagram of the Defender for IoT OT system architecture." border="false"::: Defender for IoT connects to both cloud and on-premises components, and is built for scalability in large and geographically distributed environments.
-Defender for IoT systems include the following components:
+Defender for IoT includes the following OT security monitoring components:
- **The Azure portal**, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel.-- **Network sensors**, deployed on either a virtual machine or a physical appliance. You can configure your OT sensors as cloud-connected sensors, or fully on-premises sensors.-- **An on-premises management console** for cloud-connected or local, air-gapped site management.-- **An embedded security agent** (optional).
+- **OT network sensors**, to detect OT devices across your network. OT network sensors are deployed on either a virtual machine or a physical appliance, and configured as cloud-connected sensors, or fully on-premises, locally managed sensors.
+- **An on-premises management console** for centralized OT site management in local, air-gapped environments.
+
+## What is a Defender for IoT committed device?
+ ## OT network sensors OT network sensors discover and continuously monitor network traffic across your OT devices. -- Network sensors are purpose-built for OT networks. They connect to a SPAN port or network TAP and can provide visibility into risks within minutes of connecting to the network.
+- Network sensors are purpose-built for OT networks and connect to a SPAN port or network TAP. OT network sensors can provide visibility into risks within minutes of connecting to the network.
- Network sensors use OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect threats, such as fileless malware, based on anomalous or unauthorized activity.
-Data collection, processing, analysis, and alerting takes place directly on the sensor. Running processes directly on the sensor can be ideal for locations with low bandwidth or high-latency connectivity because only the metadata is transferred on for management, either to the Azure portal or an on-premises management console.
+Data collection, processing, analysis, and alerting takes place directly on the sensor, which can be ideal for locations with low bandwidth or high-latency connectivity. Only the metadata is transferred on for management, either to the Azure portal or an on-premises management console.
-### Cloud-connected vs. local sensors
+For more information, see [Onboard OT sensors to Defender for IoT](onboard-sensors.md).
+
+### Cloud-connected vs. local OT sensors
Cloud-connected sensors are sensors that are connected to Defender for IoT in Azure, and differ from locally managed sensors as follows:
-When you have a cloud connected sensor:
+When you have a cloud connected OT network sensor:
- All data that the sensor detects is displayed in the sensor console, but alert information is also delivered to Azure, where it can be analyzed and shared with other Azure services.
When you have a cloud connected sensor:
In contrast, when working with locally managed sensors: -- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console. For more information, see [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md).
+- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console.
- You must manually upload any threat intelligence packages to locally managed sensors. - Sensor names can be updated in the sensor console.
-### What is a Defender for IoT committed device?
+For more information, see [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) and [Manage OT sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md).
+### Analytics engines on OT network sensors
-## Analytics engines
-
-Defender for IoT sensors apply analytics engines on ingested data, triggering alerts based on both real-time and pre-recorded traffic.
+OT network sensors analyze ingested data using built-in analytics engines, and trigger alerts based on both real-time and pre-recorded traffic.
Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
-For example, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industry control system (ICS) networks as deterministic sequences of states and transitions - using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for industrial control system (ICS) networks. Since many detection algorithms were built for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the systems learning curve for new detections.
-
-OT network sensors include the following analytics engines:
+For example, the **policy violation detection** engine models industry control system (ICS) networks and alerts users of any deviation from baseline behavior. Deviations might include unauthorized use of specific function codes, access to specific objects, or changes to device configuration.
-- **Protocol violation detection engine**: Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and initiation of an obsolete function code alerts.
+Since many detection algorithms were built for IT, rather than OT networks, the extra baseline for ICS networks helps to shorten the system's learning curve for new detections.
-- **Industrial malware detection engine**: Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
+OT network sensors include the following analytics engines:
-- **Anomaly detection engine**: Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Anomaly detection engine alerts include Excessive SMB sign-in attempts, and PLC Scan Detected alerts.
+|Name |Description |
+|||
+|**Protocol violation detection engine** | Identifies the use of packet structures and field values that violate ICS protocol specifications. <br><br>For example, Modbus exceptions or the initiation of an obsolete function code alerts. |
+|**Industrial malware detection engine** | Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton. |
+|**Anomaly detection engine** | Detects unusual machine-to-machine (M2M) communications and behaviors. <br><br>This engine models ICS networks and therefore requires a shorter learning period than analytics developed for IT. Anomalies are detected faster, with minimal false positives. <br><br>For example, Excessive SMB sign-in attempts, and PLC Scan Detected alerts. |
+|**Operational incident detection** | Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. <br><br> For example, the device might be disconnected (unresponsive), or the Siemens S7 stop PLC command was sent alerts. |
-- **Operational incident detection**: Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. For example, the device might be disconnected (unresponsive), and Siemens S7 stop PLC command was sent alerts. ## Management options Defender for IoT provides hybrid network support using the following management options: -- **The Azure portal**. Use the Azure portal as a single pane of glass to view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json), and more.
+- **The Azure portal**. Use the Azure portal as a single pane of glass to view all data ingested from your devices via cloud-connected network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](iot-solution.md), [security recommendations](recommendations.md), and more.
- Also use the Azure portal to obtain new appliances and software updates, onboard and maintain your sensors in Defender for IoT, and update threat intelligence packages.
+ Also use the Azure portal to obtain new appliances and software updates, onboard and maintain your sensors in Defender for IoT, and update threat intelligence packages. For example:
:::image type="content" source="media/architecture/portal.png" alt-text="Screenshot of the Defender for I O T default view on the Azure portal."lightbox="media/architecture/portal.png"::: -- **The sensor console**. You can also view detections for devices connected to a specific sensor from the sensor's console. Use the sensor console to view a network map, an extensive range of reports, forward information to partner systems, and more.
+- **The OT sensor console**. View detections for devices connected to a specific OT sensor from the sensor's console. Use the sensor console to view a network map for devices detected by that sensor, a timeline of all events that occur on the sensor, forward sensor information to partner systems, and more. For example:
:::image type="content" source="media/release-notes/new-interface.png" alt-text="Screenshot that shows the updated interface." lightbox="media/release-notes/new-interface.png"::: -- **The on-premises management console**. In air-gapped environments, you can get a central view of data from all of your sensors from an on-premises management console. The on-premises management console also provides extra maintenance tools and reporting features.
+- **The on-premises management console**. In air-gapped environments, you can get a central view of data from all of your sensors from an on-premises management console. The on-premises management console also lets you organize your network into separate sites and zones to support a [Zero Trust](/security/zero-trust/) mindset, and provides extra maintenance tools and reporting features.
## Next steps
-For OT environments, understand the supported methods for connecting network sensors to Defender for IoT.
+> [!div class="nextstepaction"]
+> [Understand OT sensor connection methods](architecture-connections.md)
-For more information, see:
+> [!div class="nextstepaction"]
+> [Connect OT sensors to Microsoft Defender for IoT](connect-sensors.md)
-- [Frequently asked questions](resources-frequently-asked-questions.md)-- [Sensor connection methods](architecture-connections.md)-- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+> [!div class="nextstepaction"]
+> [Frequently asked questions](resources-frequently-asked-questions.md)
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
For more information, see [Activate and set up your sensor](how-to-activate-and-
## How do I check the sanity of my deployment
-After installing the software for your sensor or on-premises management console, you'll want to perform the [Post-installation validation](how-to-install-software.md#post-installation-validation).
+After installing the software for your sensor or on-premises management console, you'll want to perform the [Post-installation validation](ot-deploy/post-install-validation-ot-software.md).
You can also use our [UI and CLI tools](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) to check system health and review your overall system statistics.
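For example, a minimal sketch of CLI-level sanity checks, assuming the same gateway, DNS, and firewall checks described in the post-installation validation steps:

```bash
# Verify the default gateway in the kernel routing table
route -n

# Verify the DNS server configured for the appliance
cat /etc/resolv.conf

# Verify that port 443 is open for outbound communication
wget https://www.apple.com
```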
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Title: Get started with Microsoft Defender for IoT
-description: In this quickstart, set up a trial for Microsoft Defender for IoT and understand next steps required to configure your network sensors.
+ Title: Get started with OT network security monitoring - Microsoft Defender for IoT
+description: Use this quickstart to set up a trial OT plan with Microsoft Defender for IoT and understand the next steps required to configure your network sensors.
Previously updated : 03/24/2022 Last updated : 12/25/2022
-# Quickstart: Get started with Defender for IoT
+# Quickstart: Get started with OT network security monitoring
-This quickstart takes you through the initial steps of setting up Defender for IoT, including:
+This quickstart describes how to set up a trial plan for OT security monitoring with Microsoft Defender for IoT.
-- Identify and plan OT monitoring system architecture-- Add Defender for IoT to an Azure subscription-
-You can use this procedure to set up a Defender for IoT trial. The trial provides 30-day support for 1000 devices and a virtual sensor, which you can use to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities and more.
+A trial plan for OT monitoring provides 30-day support for 1000 devices. Use this trial with a [virtual sensor](tutorial-onboarding.md) or on-premises sensors to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more.
## Prerequisites
Before you start, make sure that you have:
- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
-If you're using a Defender for IoT sensor version earlier than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
-
-### Supported service regions
-
-Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
-
-If you're using a legacy experience of Defender for IoT and are connecting through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
- ## Identify and plan your OT solution architecture
-We recommend that you identify system requirements and plan your OT network monitoring architecture before you start, even if you plan to start with a trial subscription.
--- To deploy Defender for IoT, you'll need network switches that support traffic monitoring via a SPAN port and hardware appliances for NTA sensors.-
- For on-premises machines, including network sensors, on-premises management consoles and for air-gapped environments you'll need administrative user permissions for activities. These include activation, managing SSL/TLS certificates, managing passwords, and so on.
--- Research your own network architecture and monitor bandwidth. Check requirements for creating certificates and other network details, and clarify the sensor appliances you'll need for your own network load.
+We recommend that you identify system requirements and plan your OT network monitoring architecture before you start, even if you're starting with a trial subscription.
- Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **100**, such as **100**, **200**, **300**. The numbers of monitored devices are called *committed devices*. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
+- Make sure that you have network switches that support [traffic monitoring](best-practices/traffic-mirroring-methods.md) via a SPAN port and TAPs (Test Access Points).
-Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified, pre-configured appliances, or download software to install yourself.
+- Research your own network architecture and decide which and how much data you'll want to monitor. Check any requirements for creating certificates and other details, and [understand where on your network](best-practices/understand-network-architecture.md) you'll want to place your OT network sensors.
-For more information, see:
+- If you want to use on-premises sensors, make sure that you have the [hardware appliances](ot-appliance-sizing.md) for those sensors and any administrative user permissions.
-- [Best practices for planning your OT network monitoring](best-practices/plan-network-monitoring.md)-- [Sensor connection methods](architecture-connections.md)-- [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md)-- [Predeployment checklist](pre-deployment-checklist.md)-- [Identify required appliances](how-to-identify-required-appliances.md)
+For more information, see the [OT monitoring predeployment checklist](pre-deployment-checklist.md).
-## Add a Defender for IoT plan for OT networks
+## Add a trial Defender for IoT plan for OT networks
-This procedure describes how to add a Defender for IoT plan for OT networks to an Azure subscription.
+This procedure describes how to add a trial Defender for IoT plan for OT networks to an Azure subscription.
-**To onboard a Defender for IoT plan for OT networks**:
+**To add your plan**:
-1. In the Azure portal, go to **Defender for IoT** > **Plans and pricing**.
+1. In the Azure portal, go to **Defender for IoT** and select **Plans and pricing** > **Add plan**.
-1. Select **Add plan**.
+1. In the **Plan settings** pane, define the following settings:
-1. In the **Plan settings** pane, define the plan:
-
- - **Subscription**. Select the subscription where you would like to add a plan.
-
- You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ - **Subscription**: Select the Azure subscription where you want to add a plan. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the selected subscription.
> [!TIP]
- > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
-
- - **Price plan**. Select a monthly or annual commitment, or a [trial](billing.md#free-trial).
-
- Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
+ > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page.
- For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
-
- - **Committed sites**. Relevant for annual commitments only. Enter the number of committed sites.
-
- - **Number of devices**. If you selected a monthly or annual commitment, enter the number of [committed devices](architecture.md#what-is-a-defender-for-iot-committed-device) you'll want to monitor. If you select a trial, there is a default of 1000 devices.
+ - **Price plan**: For the sake of this quickstart, select **Trial - 30 days - 1000 assets limit**.
For example:
- :::image type="content" source="media/how-to-manage-subscriptions/onboard-ot-plans-pricing.png" alt-text="Screenshot of the plan settings pane to add or edit a plan for OT networks." lightbox="media/how-to-manage-subscriptions/onboard-ot-plans-pricing.png":::
+ :::image type="content" source="media/getting-started/ot-trial.png" alt-text="Screenshot of adding a plan for OT networks to your subscription.":::
-1. Select **Next**.
+1. Select **Next** to review your selections on the **Review and purchase** tab.
-1. Review your plan, select the **I accept the terms** option, and then select **Purchase**.
+1. On the **Review and purchase** tab, select the **I accept the terms and conditions** option > **Purchase**.
-Your new plan is listed under the relevant subscription in the **Plans** grid. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
+Your new plan is listed under the relevant subscription on the **Plans and pricing** > **Plans** page. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
## Next steps
-Continue with [Tutorial: Get started with OT network security](tutorial-onboarding.md) or [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
+> [!div class="nextstepaction"]
+> [Onboard and activate a virtual OT sensor](tutorial-onboarding.md)
+
+> [!div class="nextstepaction"]
+> [Use a pre-configured physical appliance](ot-pre-configured-appliances.md)
+
+> [!div class="nextstepaction"]
+> [Understand Defender for IoT subscription billing](billing.md)
-For more information, see:
+> [!div class="nextstepaction"]
+> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
-- [Welcome to Microsoft Defender for IoT for organizations](overview.md)-- [Microsoft Defender for IoT architecture](architecture.md)-- [Defender for IoT subscription billing](billing.md)
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Ensure that sensors send information to the on-premises management console. Make
Two options are available for connecting Microsoft Defender for IoT sensors to the on-premises management console: -- Connect from the sensor console.-- Connect by using tunneling.
+- [Connect from the sensor console](#connect-sensors-to-the-on-premises-management-console-from-the-sensor-console)
+- [Connect sensors by using tunneling](#connect-sensors-by-using-tunneling)
-After connecting, you must set up a site with these sensors.
+After connecting, you must [set up a site](#set-up-a-site) with these sensors.
### Connect sensors to the on-premises management console from the sensor console
-To connect sensors to the on-premises management console from the sensor console:
+**To connect sensors to the on-premises management console from the sensor console**:
-1. On the on-premises management console, select **System Settings**.
+1. In the on-premises management console, select **System Settings**.
1. Copy the string in the **Copy Connection String** box. :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Screenshot that shows copying the connection string for the sensor.":::
-1. On the sensor, go to **System Settings** and select **Connection to Management Console** :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-to-management-console.png" border="false":::
+1. On the sensor, go to **System Settings** > **Connection to Management Console**.
1. Paste the copied connection string from the on-premises management console into the **Connection string** box.
To connect sensors to the on-premises management console from the sensor console
### Connect sensors by using tunneling
-Enable a secured tunneling connection between organizational sensors and the on-premises management console. This setup circumvents interaction with the organizational firewall. As a result, it reduces the attack surface.
+Enhance system security by preventing direct user access to the sensor. Instead of direct access, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.
Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor.
+For example, the following image shows a sample architecture where users access the sensor consoles via the on-premises management console.
-To set up tunneling at the on-premises management console:
-1. Sign in to the on-premises management console and run the following command:
+**To set up tunneling at the on-premises management console**:
+
+1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials and run the following command:
```bash
- cyberx-management-tunnel-enable
+ sudo cyberx-management-tunnel-enable
```
+ For more information on users, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+ 1. Allow a few minutes for the connection to start.
+
+ When tunneling access is configured, the following URL syntax is used to access the sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>`
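   For example, assuming a hypothetical on-premises management console at `192.168.1.5`, a sensor at `192.168.1.20`, and an illustrative page path, the tunneled URL would look like:

   ```
   https://192.168.1.5/192.168.1.20/login
   ```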
You can also customize the tunneling port to a number other than 9000, such as 10000.
-To use a new port:
-
-1. Sign in to the on-premises management console and run the following command:
+**To use a new port**:
- ```bash
- sudo cyberx-management-tunnel-enable --port 10000
-
- ```
+Sign in to the on-premises management console and run the following command:
-1. Disable the connection, when required.
+```bash
+sudo cyberx-management-tunnel-enable --port 10000
+
+```
-To disable:
+**To disable the connection**:
Sign in to the on-premises management console and run the following command:
- ```bash
- cyberx-management-tunnel-disable
+```bash
+cyberx-management-tunnel-disable
- ```
+```
No configuration is needed on the sensor.
-To view log files:
-
-Review log information in the log files.
-
-To access log files:
+**To access the tunneling log files**:
-1. Sign in to the on-premises management console and go to */var/log/apache2.log*.
-1. Sign in to the sensor and go to */var/cyberx/logs/tunnel.log*.
+1. **From the on-premises management console**: Sign in and go to */var/log/apache2.log*.
+1. **From the sensor**: Sign in and go to */var/cyberx/logs/tunnel.log*.
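For example, a minimal sketch for following each tunneling log in real time, assuming the default log locations listed above:

```bash
# On the on-premises management console: follow the reverse proxy log
sudo tail -f /var/log/apache2.log

# On the sensor: follow the tunneling log
sudo tail -f /var/cyberx/logs/tunnel.log
```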
## Set up a site
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
- Title: Install OT network monitoring software - Microsoft Defender for IoT
-description: Learn how to install agentless monitoring software for an OT sensor and an on-premises management console for Microsoft Defender for IoT. Use this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
Previously updated : 11/09/2022---
-# Install OT agentless monitoring software
-
-This article describes how to install agentless monitoring software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
--
-## Download software files from the Azure portal
-
-Download OT sensor and on-premises management console software from the Azure portal.
-
-On the Defender for IoT > **Getting started** page, select the **Sensor**, **On-premises management console**, or **Updates** tab and locate the software you need.
-
-If you're updating from a previous version, check the options carefully to ensure that you have the correct update path for your situation.
-
-Mount the ISO file onto your hardware appliance or VM using one of the following options:
--- **Physical media** ΓÇô burn the ISO file to your external storage, and then boot from the media.-
- - DVDs: First burn the software to the DVD as an image
- - USB drive: First make sure that youΓÇÖve created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
-
- Your physical media must have a minimum of 4-GB storage.
--- **Virtual mount** ΓÇô use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.-
-## Pre-installation configuration
-
-Each appliance type comes with its own set of instructions that are required before installing Defender for IoT software.
-
-Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software. For more information, see the [OT monitoring appliance catalog](appliance-catalog/appliance-catalog-overview.md).
-
-For more information, see:
--- [Which appliances do I need?](ot-appliance-sizing.md)-- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), including the catalog of available appliances-- [OT monitoring with virtual appliances](ot-virtual-appliances.md)--
-## Install OT monitoring software
-
-This section provides generic procedures for installing OT monitoring software on sensors or an on-premises management console.
-
-Select one of the following tabs, depending on which type of software you're installing.
-
-# [OT sensor](#tab/sensor)
-
-This procedure describes how to install OT sensor software on a physical or virtual appliance after you've booted the ISO file on your appliance.
-
-> [!Note]
-> Towards the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
-
-**To install the sensor's software**:
-
-1. When the installation boots, you're first prompted to select the hardware profile you want to install.
-
- :::image type="content" source="media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's hardware profile options." lightbox="media/tutorial-install-components/sensor-architecture.png":::
-
- For more information, see [Which appliances do I need?](ot-appliance-sizing.md).
-
- System files are installed, the sensor reboots, and then sensor files are installed. This process can take a few minutes.
-
- When the installation steps are complete, the Ubuntu **Package configuration** screen is displayed, with the `Configuring iot-sensor` wizard, showing a prompt to select your monitor interfaces.
-
- In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-
-1. In the `Select monitor interfaces` screen, select the interfaces you want to monitor.
-
- > [!IMPORTANT]
- > Make sure that you select only interfaces that are connected.
- > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them via the CLI.
-
- By default, eno1 is reserved for the management interface and we recommend that you leave this option unselected.
-
- For example:
-
- :::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
-
-1. In the `Select erspan monitor interfaces` screen, select any ERSPAN monitoring ports that you have. The wizard lists available interfaces, even if you don't have any ERSPAN monitoring ports in your system. If you have no ERSPAN monitoring ports, leave all options unselected.
-
- For example:
-
- :::image type="content" source="media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
-
-1. In the `Select management interface` screen, we recommend keeping the default `eno1` value selected as the management interface.
-
- For example:
-
- :::image type="content" source="media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
-
-1. In the `Enter sensor IP address` screen, enter the IP address for the sensor appliance you're installing.
-
- :::image type="content" source="media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
-
-1. In the `Enter path to the mounted backups folder` screen, enter the path to the sensor's mounted backups. We recommend using the default path of `/opt/sensor/persist/backups`. For example:
-
- :::image type="content" source="media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
-
-1. In the `Enter Subnet Mask` screen, enter the IP address for the sensor's subnet mask. For example:
-
- :::image type="content" source="media/tutorial-install-components/sensor-subnet-ip.png" alt-text="Screenshot of the Enter Subnet Mask screen.":::
-
-1. In the `Enter Gateway` screen, enter the sensor's default gateway IP address. For example:
-
- :::image type="content" source="media/tutorial-install-components/sensor-gateway-ip.png" alt-text="Screenshot of the Enter Gateway screen.":::
-
-1. In the `Enter DNS server` screen, enter the sensor's DNS server IP address. For example:
-
- :::image type="content" source="media/tutorial-install-components/sensor-dns-ip.png" alt-text="Screenshot of the Enter DNS server screen.":::
-
-1. In the `Enter hostname` screen, enter the sensor hostname. For example:
-
- :::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the Enter hostname screen.":::
-
-1. In the `Run this sensor as a proxy server (Preview)` screen, select `<Yes>` only if you want to configure a proxy, and then enter the proxy credentials as prompted.
-
- The default configuration is without a proxy.
-
- For more information, see [Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (legacy)](how-to-connect-sensor-by-proxy.md).
--
-1. <a name=credentials></a>The installation process starts running and then shows the credentials screen. For example:
-
- :::image type="content" source="media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames, and passwords.":::
-
- Save the usernames and passwords listed, as the passwords are unique and this is the only time that the credentials are listed. Copy the credentials to a safe place so that you can use them when signing into the sensor for the first time.
-
- For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-
- Select `<Ok>` when you're ready to continue.
-
- The installation continues running again, and then reboots when the installation is complete. Upon reboot, you're prompted to enter credentials to sign in. For example:
-
- :::image type="content" source="media/tutorial-install-components/sensor-sign-in.png" alt-text="Screenshot of a sensor sign-in screen after installation.":::
-
-1. Enter the credentials for one of the users that you'd copied down in the [previous step](#credentials).
-
- - If the `iot-sensor login:` prompt disappears, press **ENTER** to have it shown again.
- - When you enter your password, the password characters don't display on the screen. Make sure you enter them carefully.
-
- When you've successfully signed in, the following confirmation screen appears:
-
- :::image type="content" source="media/tutorial-install-components/install-complete.png" alt-text="Screenshot of the sign-in confirmation.":::
-
-Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
--
-# [On-premises management console](#tab/on-prem)
--
-This procedure describes how to install on-premises management console software on a physical or virtual appliance.
-
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-
-During the installation process, you can add a secondary NIC. If you choose not to install the secondary Network Interface Card (NIC) during installation, you can [add a secondary NIC](#add-a-secondary-nic-optional) at a later time.
-
-**To install the software**:
-
-1. Select your preferred language for the installation process.
-
- :::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Select your preferred language for the installation process.":::
-
-1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**.
-
- :::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Select your version.":::
-
-1. In the Installation Wizard, define the network properties:
-
- :::image type="content" source="media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
-
- | Parameter | Configuration |
- |--|--|
- | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> Or <br />**possible value** |
- | **configure management network IP address** | Enter an IP address |
- | **configure subnet mask** | Enter an IP address|
- | **configure DNS** | Enter an IP address |
- | **configure default gateway IP address** | Enter an IP address|
-
-1. **(Optional)** If you would like to install a secondary NIC, define the following appliance profile, and network properties:
-
- :::image type="content" source="media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
-
- | Parameter | Configuration |
- |--|--|
- | **configure sensor monitoring interface** (Optional) | **eth1** or **possible value** |
- | **configure an IP address for the sensor monitoring interface** | Enter an IP address |
- | **configure a subnet mask for the sensor monitoring interface** | Enter an IP address |
-
-1. Accept the settings and continue by typing `Y`.
-
-1. After about 10 minutes, the two sets of credentials appear. For example:
-
- :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they won't be presented again.":::
-
- Save the usernames and passwords, you'll need these credentials to access the platform the first time you use it.
-
- For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-
-1. Select **Enter** to continue.
-
-For information on how to find the physical port on your appliance, see [Find your port](#find-your-port).
-
-### Add a secondary NIC (optional)
-
-You can enhance security to your on-premises management console by adding a secondary NIC dedicated for attached sensors within an IP address range. When you use a secondary NIC, the first is dedicated for end-users, and the secondary supports the configuration of a gateway for routed networks.
--
-Both NICs will support the user interface (UI). If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
-
-This procedure describes how to add a secondary NIC if you've already installed your on-premises management console.
-
-**To add a secondary NIC**:
-
-1. Use the network reconfigure command:
-
- ```bash
- sudo cyberx-management-network-reconfigure
- ```
-
-1. Enter the following responses to the following questions:
-
- :::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Screenshot of the required answers to configure your appliance. ":::
-
- | Parameters | Response to enter |
- |--|--|
- | **Management Network IP address** | `N` |
- | **Subnet mask** | `N` |
- | **DNS** | `N` |
- | **Default gateway IP Address** | `N` |
- | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y`, and select a possible value |
- | **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
- | **A subnet mask for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
- | **Hostname** | Enter the hostname |
-
-1. Review all choices and enter `Y` to accept the changes. The system reboots.
-
-### Find your port
-
-If you're having trouble locating the physical port on your device, you can use the following command to find your port:
-
-```bash
-sudo ethtool -p <port value> <time-in-seconds>
-```
-
-This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
---
-## Post-installation validation
-
-After you've finished installing OT monitoring software on your appliance, test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
-
-System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the *support* and *cyberx* users.
-
-After installing OT monitoring software, make sure to run the following tests:
--- **Sanity test**: Verify that the system is running.--- **Version**: Verify that the version is correct.--- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.-
-#### Gateway checks
-
-Use the `route` command to show the gateway's IP address. For example:
-
-``` CLI
-<root@xsense:/# route -n
-Kernel IP routing table
-Destination Gateway Genmask Flags Metric Ref Use Iface
-0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
-172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
->
-```
-
-Use the `arp -a` command to verify that there is a binding between the MAC address and the IP address of the default gateway. For example:
-
-``` CLI
-<root@xsense:/# arp -a
-cusalvtecca101-gi0-02-2851.network.microsoft.com (172.18.0.1) at 02:42:b0:3a:e8:b5 [ether] on eth0
-mariadb_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.5) at 02:42:ac:12:00:05 [ether] on eth0
-redis_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.3) at 02:42:ac:12:00:03 [ether] on eth0
->
-```
-
-#### DNS checks
-
-Use the `cat /etc/resolv.conf` command to find the IP address that's configured for DNS traffic. For example:
-``` CLI
-<root@xsense:/# cat /etc/resolv.conf
-search reddog.microsoft.com
-nameserver 127.0.0.11
-options ndots:0
->
-```
-
-Use the `host` command to resolve an FQDN. For example:
-
-``` CLI
-<root@xsense:/# host www.apple.com
-www.apple.com is an alias for www.apple.com.edgekey.net.
-www.apple.com.edgekey.net is an alias for www.apple.com.edgekey.net.globalredir.akadns.net.
-www.apple.com.edgekey.net.globalredir.akadns.net is an alias for e6858.dscx.akamaiedge.net.
-e6858.dscx.akamaiedge.net has address 72.246.148.202
-e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:1b4::1aca
-e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:182::1aca
->
-```
-
-#### Firewall checks
-
-Use the `wget` command to verify that port 443 is open for communication. For example:
-
-``` CLI
-<root@xsense:/# wget https://www.apple.com
2022-11-09 11:21:15-- https://www.apple.com/
-Resolving www.apple.com (www.apple.com)... 72.246.148.202, 2a02:26f0:5700:1b4::1aca, 2a02:26f0:5700:182::1aca
-Connecting to www.apple.com (www.apple.com)|72.246.148.202|:443... connected.
-HTTP request sent, awaiting response... 200 OK
-Length: 99966 (98K) [text/html]
-Saving to: 'https://docsupdatetracker.net/index.html.1'
-
-https://docsupdatetracker.net/index.html.1 100%[===================>] 97.62K --.-KB/s in 0.02s
-
-2022-11-09 11:21:15 (5.88 MB/s) - 'https://docsupdatetracker.net/index.html.1' saved [99966/99966]
-
->
-```
-
-For more information, see [Check system health](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article.
-
-## Configure tunneling access for sensors through the on-premises management console
-
-Enhance system security by preventing direct user access to the sensor.
-
-Instead of direct access, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.
-
-When tunneling access is configured, users use the following URL syntax to access their sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>`
-
-For example, the following image shows a sample architecture where users access the sensor consoles via the on-premises management console.
--
-The interface between the IT firewall, on-premises management console, and the OT firewall is done using a reverse proxy with URL rewrites. The interface between the OT firewall and the sensors is done using reverse SSH tunnels.
-
-**To enable tunneling access for sensors**:
-
-1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-
-1. Enter `sudo cyberx-management-tunnel-enable`.
-
-1. Select **Enter**.
-
-1. Enter `--port 10000`.
-
-## Next steps
-
-For more information, see:
--- [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Title: Manage individual sensors
-description: Learn how to manage individual sensors, including managing activation files, certificates, performing backups, and updating a standalone sensor.
+ Title: Manage OT sensors from the sensor console - Microsoft Defender for IoT
+description: Learn how to manage individual Microsoft Defender for IoT OT network sensors directly from the sensor's console.
Last updated 11/28/2022
The **Overview** page shows the following widgets:
| Name | Description | |--|--|
-| **General Settings** | Displays a list of the sensor's basic configuration settings |
+| **General Settings** | Displays a list of the sensor's basic configuration settings and [connectivity status](#validate-connectivity-status). |
| **Traffic Monitoring** | Displays a graph detailing traffic in the sensor. The graph shows traffic as units of Mbps per hour on the day of viewing. | | **Top 5 OT Protocols** | Displays a bar graph that details the top five most used OT protocols. The bar graph also provides the number of devices that are using each of those protocols. | | **Traffic By Port** | Displays a pie chart showing the types of ports in your network, with the amount of traffic detected in each type of port. |
The **Overview** page shows the following widgets:
Select the link in each widget to drill down for more information in your sensor.
+### Validate connectivity status
+
+Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page.
+
+If there are any connection issues, a disconnection message is shown in the **General Settings** area on the **Overview** page, and a **Service connection error** warning appears at the top of the page in the :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages** area. For example:
++
+1. Find more information about the issue by hovering over the :::image type="icon" source="media/how-to-manage-individual-sensors/information-icon.png" border="false"::: information icon. For example:
+
+ :::image type="content" source="media/how-to-manage-individual-sensors/connectivity-message.png" alt-text="Screenshot of a connectivity error message." lightbox="media/how-to-manage-individual-sensors/connectivity-message.png":::
+
+1. Take action by selecting the **Learn more** option under :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages**. For example:
+
+ :::image type="content" source="media/how-to-manage-individual-sensors/system-messages.png" alt-text="Screenshot of the system messages pane." lightbox="media/how-to-manage-individual-sensors/system-messages.png":::
+
## Manage sensor activation files

Your sensor was onboarded with Microsoft Defender for IoT from the Azure portal. Each sensor was onboarded as either a locally connected sensor or a cloud-connected sensor.
A unique activation file is uploaded to each sensor that you deploy. For more in
Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears in the System Messages window in the top-right corner of the console. The warning remains until after you've updated the activation file.
+You can continue to work with Defender for IoT features even if the activation file has expired.
You can continue to work with Defender for IoT features even if the activation file has expired.

### About activation files for cloud-connected sensors
This section describes how to ensure connection between the sensor and the on-pr
3. In the **Sensor Setup – Connection String** section, copy the automatically generated connection string.
- :::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Copy the connection string from this screen.":::
+ :::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Screenshot of the Connection string screen.":::
4. Sign in to the sensor console.
To send notifications:
For more information about forwarding rules, see [Forward alert information](how-to-forward-alert-information-to-partners.md).
-
## Upload and play PCAP files

When troubleshooting, you may want to examine data recorded by a specific PCAP file. To do so, you can upload a PCAP file to your sensor console and replay the data recorded.
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage sensors from the on-premises management console
-description: Learn how to manage sensors from the management console, including updating sensor versions, pushing system settings to sensors, managing certificates, and enabling and disabling engines on sensors.
+ Title: Manage OT sensors from the on-premises management console
+description: Learn how to manage OT sensors from the on-premises management console, including updating sensor versions, pushing system settings to sensors, managing certificates, and enabling and disabling engines on sensors.
Last updated 06/02/2022
-# Manage sensors from the management console
+# Manage sensors from the on-premises management console
This article describes how to manage OT sensors from an on-premises management console, such as pushing system settings to individual sensors, or enabling or disabling specific engines on your sensors.
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Integrate Microsoft Defender for IoT with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more |
||||||
-| **IBM QRadar** | Send Defender for IoT alerts to IBM QRadar | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Microsoft Defender for IoT alerts to a 3rd party SIEM](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot-blog/stream-microsoft-defender-for-iot-alerts-to-a-3rd-party-siem/ba-p/3581242) |
+| **IBM QRadar** | Send Defender for IoT alerts to IBM QRadar | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
|**IBM QRadar** | Forward Defender for IoT alerts to IBM QRadar. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md) |

## LogRhythm
Integrate Microsoft Defender for IoT with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more |
||||||
-| **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Microsoft Defender for IoT alerts to a 3rd party SIEM](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot-blog/stream-microsoft-defender-for-iot-alerts-to-a-3rd-party-siem/ba-p/3581242) |
+| **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
|**Splunk** | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |

## Next steps
-For more information, see:
-
-**Device inventory**:
-- [Use the Device inventory in the Azure portal](how-to-manage-device-inventory-for-organizations.md)
-- [Use the Device inventory in the OT sensor](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-- [Use the Device inventory in the on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
-
-**Alerts**:
-- [View alerts in the Azure portal](how-to-manage-cloud-alerts.md)
-- [View alerts in the OT sensor](how-to-view-alerts.md)
-- [View alerts in the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+> [!div class="nextstepaction"]
+> [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md)
defender-for-iot Send Cloud Data To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/send-cloud-data-to-partners.md
+
+ Title: Stream Microsoft Defender for IoT cloud alerts to a partner SIEM - Microsoft Defender for IoT
+description: Learn how to send Microsoft Defender for IoT data on the cloud to a partner SIEM via Microsoft Sentinel and Azure Event Hubs, using Splunk as an example.
Last updated : 12/26/2022+++
+# Stream Defender for IoT cloud alerts to a partner SIEM
+
+As more businesses convert OT systems to digital IT infrastructures, security operations center (SOC) teams and chief information security officers (CISOs) are increasingly responsible for handling threats from OT networks.
+
+We recommend using Microsoft Defender for IoT's out-of-the-box [data connector](../iot-solution.md) and [solution](../iot-advanced-threat-monitoring.md) to integrate with Microsoft Sentinel and bridge the gap between IT and OT security challenges.
+
+However, if you have another security information and event management (SIEM) system, you can also forward Defender for IoT cloud alerts on to that partner SIEM via [Microsoft Sentinel](/azure/sentinel/) and [Azure Event Hubs](/azure/event-hubs/).
+
+While this article uses Splunk as an example, you can use the process described below with any SIEM that supports Event Hub ingestion, such as IBM QRadar.
+
+> [!IMPORTANT]
+> Using Event Hubs and a Log Analytics export rule may incur additional charges. For more information, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/) and [Log Data Export pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Prerequisites
+
+Before you start, you'll need the **Microsoft Defender for IoT** data connector installed in your Microsoft Sentinel instance. For more information, see [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../iot-solution.md).
+
+Also check any prerequisites for each of the procedures linked in the steps below.
+
+## Register an application in Azure Active Directory
+
+The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) authenticates by using an Azure Active Directory (Azure AD) service principal. To create one, register an Azure AD application with specific permissions.
+
+**To register an Azure AD application and define permissions**:
+
+1. In [Azure AD](/azure/active-directory/), register a new application. On the **Certificates & secrets** page, add a new client secret for the service principal.
+
+ For more information, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app)
+
+1. In your app's **API permissions** page, grant API permissions to read data from your app.
+
+ 1. Select to add a permission and then select **Microsoft Graph** > **Application permissions** > **SecurityEvents.ReadWrite.All** > **Add permissions**.
+
+ 1. Make sure that admin consent is required for your permission.
+
+ For more information, see [Configure a client application to access a web API](/azure/active-directory/develop/quickstart-configure-app-access-web-apis#add-permissions-to-access-your-web-api)
+
+1. From your app's **Overview** page, note the following values for your app:
+
+ - **Display name**
+ - **Application (client) ID**
+ - **Directory (tenant) ID**
++
+1. From the **Certificates & secrets** page, note the values of your client secret **Value** and **Secret ID**.
+
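If you prefer to script the registration, the following Azure CLI sketch covers the same app, service principal, and client secret steps. It's a minimal example only: the display name `splunk-eventhub-reader` is a placeholder, and granting the Microsoft Graph **SecurityEvents.ReadWrite.All** application permission with admin consent still needs to be done in the portal as described above.

```bash
# Register a new Azure AD application (placeholder display name)
az ad app create --display-name "splunk-eventhub-reader"

# Create a service principal for the application, using the appId returned by the previous command
az ad sp create --id "<application-client-id>"

# Add a client secret; copy the returned value now, because it isn't shown again
az ad app credential reset --id "<application-client-id>" --append

# Look up the Directory (tenant) ID needed later for the Splunk add-on configuration
az account show --query tenantId --output tsv
```
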
+## Create an Azure event hub
+
+Create an Azure event hub to use as a bridge between Microsoft Sentinel and your partner SIEM. Start this step by creating an Azure event hub namespace, and then adding an Azure event hub.
+
+**To create your event hub namespace and event hub**:
+
+1. In Azure Event Hubs, create a new event hub namespace. In your new namespace, create a new Azure event hub.
+
+ In your event hub, make sure to define the **Partition Count** and **Message Retention** settings.
+
+ For more information, see [Create an event hub using the Azure portal](/azure/event-hubs/event-hubs-create).
+
+1. In your event hub namespace, select the **Access control (IAM)** page and add a new role assignment.
+
+    Select to use the **Azure Event Hubs Data Receiver** role, and add the Azure AD service principal app that you'd created [earlier](#register-an-application-in-azure-active-directory) as a member.
+
+ For more information, see: [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+1. In your event hub namespace's **Overview** page, make a note of the namespace's **Host name** value.
+
+1. In your event hub namespace's **Event Hubs** page, make a note of your event hub's name.
+
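If you'd rather create these resources from the command line, the following Azure CLI sketch outlines the same steps. The resource group, namespace, and event hub names (`rg-defender-siem`, `evh-ns-defender`, and `defender-incidents`) are placeholders, and the partition count is only an example; size it, and the message retention setting, for your own environment.

```bash
# Create an Event Hubs namespace (placeholder names and location)
az eventhubs namespace create --resource-group rg-defender-siem --name evh-ns-defender --location westeurope

# Create an event hub in the namespace, with an example partition count
az eventhubs eventhub create --resource-group rg-defender-siem --namespace-name evh-ns-defender --name defender-incidents --partition-count 4

# Grant the Azure AD application registered earlier the Azure Event Hubs Data Receiver role on the namespace
az role assignment create --assignee "<application-client-id>" \
    --role "Azure Event Hubs Data Receiver" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/rg-defender-siem/providers/Microsoft.EventHub/namespaces/evh-ns-defender"
```
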
+## Forward Microsoft Sentinel incidents to your event hub
+
+To forward Microsoft Sentinel incidents or alerts to your event hub, create a data export rule from Azure Log Analytics.
+
+In your rule, make sure to define the following settings:
+
+- Configure the **Source** as **SecurityIncident**
+- Configure the **Destination** as your event hub, using the event hub namespace and event hub name you'd recorded earlier.
+
+For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal#create-or-update-a-data-export-rule).
+
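You can also create the export rule with the Azure CLI instead of the portal. The sketch below reuses the placeholder names from the previous examples and exports to the event hub namespace; by default, Azure Monitor then creates an event hub named after the exported table (for example, `am-SecurityIncident`), so if you take this route, point your Splunk input at that event hub instead.

```bash
# Export the SecurityIncident table from the Microsoft Sentinel workspace to the event hub namespace
az monitor log-analytics workspace data-export create \
    --resource-group rg-defender-siem \
    --workspace-name "<sentinel-workspace-name>" \
    --name export-security-incidents \
    --tables SecurityIncident \
    --destination "/subscriptions/<subscription-id>/resourceGroups/rg-defender-siem/providers/Microsoft.EventHub/namespaces/evh-ns-defender"
```
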
+## Configure Splunk to consume Microsoft Sentinel incidents
+
+Once you have your event hub and export rule configured, configure Splunk to consume Microsoft Sentinel incidents from the event hub.
+
+1. Install the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) app.
+
+1. In the Splunk Add-on for Microsoft Cloud Services app, add an Azure App account.
+
+ 1. Enter a meaningful name for the account.
+ 1. Enter the client ID, client secret, and tenant ID details that you'd recorded earlier.
+ 1. Define the account class type as **Azure Public Cloud**.
+
+1. Go to the Splunk Add-on for Microsoft Cloud Services inputs, and create a new input for your Azure event hub.
+
+ 1. Enter a meaningful name for your input.
+    1. Select the Azure App Account that you'd just created in the Splunk Add-on for Microsoft Cloud Services app.
+ 1. Enter your event hub namespace FQDN and event hub name.
+
+ Leave other settings as their defaults.
+
+Once data starts getting ingested into Splunk from your event hub, query the data by using the following value in your search field: `sourcetype="mscs:azure:eventhub"`
+
+## Next steps
+
+This article describes how to forward alerts generated by cloud-connected sensors only. If you're working on-premises, such as in air-gapped environments, you may be able to create a forwarding alert rule to forward alert data directly from an OT sensor or on-premises management console.
+
+For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md).
defender-for-iot Manage Users On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-on-premises-management-console.md
By default, each on-premises management console is installed with the privileged
When setting up an on-premises management console for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
-For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+For more information, see [Install OT monitoring software on an on-premises management console](ot-deploy/install-software-on-premises-management-console.md) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
## Add new on-premises management console users
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
By default, each OT network sensor is installed with the privileged *cyberx*, *s
When setting up a sensor for the first time, sign in with one of these privileged users, create an initial user with an **Admin** role, and then create extra users for security analysts and read-only users.
-For more information, see [Install OT monitoring software](how-to-install-software.md#install-ot-monitoring-software) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+For more information, see [Install OT monitoring software on OT sensors](how-to-install-software.md) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
## Add new OT sensor users
defender-for-iot Install Software On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-on-premises-management-console.md
+
+ Title: Install Microsoft Defender for IoT on-premises management console software - Microsoft Defender for IoT
+description: Learn how to install Microsoft Defender for IoT on-premises management console software. Use this article if you're reinstalling software on a pre-configured appliance, or if you've chosen to install software on your own appliances.
Last updated : 12/13/2022+++
+# Install Microsoft Defender for IoT on-premises management console software
+
+Use the procedures in this article when installing Microsoft Defender for IoT software on an on-premises management console. You might be reinstalling software on a [pre-configured appliance](../ot-pre-configured-appliances.md), or you may be installing software on your own appliance.
+
+## Prerequisites
+
+Before installing Microsoft Defender for IoT, make sure that you have:
+
+- [Traffic mirroring configured in your network](../best-practices/traffic-mirroring-methods.md)
+- An [OT plan in Defender for IoT](../how-to-manage-subscriptions.md) on your Azure subscription
+- An OT sensor [onboarded to Defender for IoT](../onboard-sensors.md) in the Azure portal
+- [OT monitoring software installed on an OT network sensor](install-software-ot-sensor.md)
+
+Each appliance type also comes with its own set of instructions that are required before installing Defender for IoT software. Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software.
+
+For more information, see:
+
+- The [OT monitoring appliance catalog](../appliance-catalog/index.yml)
+- [Which appliances do I need?](../ot-appliance-sizing.md)
+- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
+
+## Download software files from the Azure portal
+
+Download on-premises management console software from Defender for IoT in the Azure portal.
+
+On the Defender for IoT > **Getting started** page, select the **On-premises management console** or **Updates** tab and locate the software you need.
+
+If you're updating from a previous version, check the options carefully to ensure that you have the correct update path for your situation.
+
+## Install on-premises management console software
+
+This procedure describes how to install OT management software on an on-premises management console, for a physical or virtual appliance.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install the software**:
+
+1. Mount the ISO file onto your hardware appliance or VM using one of the following options:
+
+    - **Physical media** – burn the ISO file to your external storage, and then boot from the media.
+
+        - DVDs: First burn the software to the DVD as an image
+        - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
+
+      Your physical media must have a minimum of 4-GB storage.
+
+    - **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
+
+1. Select your preferred language for the installation process.
+
+ :::image type="content" source="../media/tutorial-install-components/on-prem-language-select.png" alt-text="Screenshot of selecting your preferred language for the installation process.":::
+
+1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**.
+
+ :::image type="content" source="../media/tutorial-install-components/on-prem-install-screen.png" alt-text="Screenshot of selecting your management release version.":::
+
+1. In the Installation Wizard, define the network properties:
+
+ :::image type="content" source="../media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
+
+ | Parameter | Configuration |
+ |--|--|
+ | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> Or <br />**possible value** |
+ | **configure management network IP address** | Enter an IP address |
+ | **configure subnet mask** | Enter an IP address|
+ | **configure DNS** | Enter an IP address |
+ | **configure default gateway IP address** | Enter an IP address|
+
+1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile, and network properties:
+
+ | Parameter | Configuration |
+ |--|--|
+ | **configure sensor monitoring interface** (Optional) | **eth1** or **possible value** |
+ | **configure an IP address for the sensor monitoring interface** | Enter an IP address |
+ | **configure a subnet mask for the sensor monitoring interface** | Enter an IP address |
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
+
+ If you choose not to install the secondary NIC now, you can [do so at a later time](#add-a-secondary-nic-after-installation-optional).
+
+1. Accept the settings and continue by typing `Y`.
+
+1. After about 10 minutes, the two sets of credentials appear. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/credentials-screen.png" alt-text="Screenshot of the credentials that appear that must be copied as they won't be presented again.":::
+
+    Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
+
+ For more information, see [Default privileged on-premises users](../roles-on-premises.md#default-privileged-on-premises-users).
+
+1. Select **Enter** to continue.
+
+### Add a secondary NIC after installation (optional)
+
+You can enhance the security of your on-premises management console by adding a secondary NIC that's dedicated to attached sensors within an IP address range. When you use a secondary NIC, the first NIC is dedicated to end users, and the secondary NIC supports the configuration of a gateway for routed networks.
++
+Both NICs will support the user interface (UI). If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
+
+This procedure describes how to add a secondary NIC if you've already installed your on-premises management console.
+
+**To add a secondary NIC**:
+
+1. Use the network reconfigure command:
+
+ ```bash
+ sudo cyberx-management-network-reconfigure
+ ```
+
+1. Enter the following responses to the questions shown:
+
+ :::image type="content" source="../media/tutorial-install-components/network-reconfig-command.png" alt-text="Screenshot of the required answers to configure your appliance. ":::
+
+ | Parameters | Response to enter |
+ |--|--|
+ | **Management Network IP address** | `N` |
+ | **Subnet mask** | `N` |
+ | **DNS** | `N` |
+ | **Default gateway IP Address** | `N` |
+ | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y`, and select a possible value |
+ | **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
+ | **A subnet mask for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
+ | **Hostname** | Enter the hostname |
+
+1. Review all choices and enter `Y` to accept the changes. The system reboots.
+
+### Find a port on your appliance
+
+If you're having trouble locating the physical port on your appliance, you can use the following command to find your port:
+
+```bash
+sudo ethtool -p <port value> <time-in-seconds>
+```
+
+This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` will cause port eno1 to flash for 2 minutes, allowing you to find the port on the back of your appliance.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Validate after installing software](post-install-validation-ot-software.md)
+
+> [!div class="nextstepaction"]
+> [Troubleshooting](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
+
+ Title: Install OT network monitoring software on OT sensors - Microsoft Defender for IoT
+description: Learn how to install agentless monitoring software for an OT sensor for Microsoft Defender for IoT. Use this article when reinstalling software on a pre-configured appliance, or if you've chosen to install software on your own appliances.
Last updated : 12/13/2022+++
+# Install OT monitoring software on OT sensors
+
+Use the procedures in this article when installing Microsoft Defender for IoT software on OT network sensors. You might be reinstalling software on a [pre-configured appliance](../ot-pre-configured-appliances.md), or you may be installing software on your own appliance.
+
+## Prerequisites
+
+Before installing Microsoft Defender for IoT, make sure that you have:
+
+- [Traffic mirroring configured in your network](../best-practices/traffic-mirroring-methods.md)
+- An [OT plan in Defender for IoT](../how-to-manage-subscriptions.md) on your Azure subscription
+- An OT sensor [onboarded to Defender for IoT](../onboard-sensors.md) in the Azure portal
+
+Each appliance type also comes with its own set of instructions that are required before installing Defender for IoT software. Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software.
+
+For more information, see:
+
+- The [OT monitoring appliance catalog](../appliance-catalog/index.yml)
+- [Which appliances do I need?](../ot-appliance-sizing.md)
+- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
+
+## Download software files from the Azure portal
+
+Download the OT sensor software from Defender for IoT in the Azure portal.
+
+On the Defender for IoT > **Getting started** page, select the **Sensor** or **Updates** tab and locate the software you need.
+
+If you're updating from a previous version, check the options carefully to ensure that you have the correct update path for your situation.
+
+## Install Defender for IoT software on OT sensors
+
+This procedure describes how to install OT monitoring software on a sensor.
+
+> [!Note]
+> Towards the end of this process, you'll be presented with the usernames and passwords for your device. Make sure to copy these down, as these passwords won't be presented again.
+
+1. Mount the ISO file onto your hardware appliance or VM using one of the following options:
+
+    - **Physical media** – burn the ISO file to your external storage, and then boot from the media.
+
+        - DVDs: First burn the software to the DVD as an image
+        - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
+
+      Your physical media must have a minimum of 4-GB storage.
+
+    - **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
+
+1. When the installation boots, you're first prompted to select the hardware profile you want to install.
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's hardware profile options." lightbox="../media/tutorial-install-components/sensor-architecture.png":::
+
+ For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+ System files are installed, the sensor reboots, and then sensor files are installed. This process can take a few minutes.
+
+ When the installation steps are complete, the Ubuntu **Package configuration** screen is displayed, with the `Configuring iot-sensor` wizard, showing a prompt to select your monitor interfaces.
+
+ In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
+
+1. In the `Select monitor interfaces` screen, select the interfaces you want to monitor.
+
+ > [!IMPORTANT]
+ > Make sure that you select only interfaces that are connected.
+ > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them via the CLI.
+
+ By default, `eno1` is reserved for the management interface and we recommend that you leave this option unselected.
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
+
+1. In the `Select erspan monitor interfaces` screen, select any ERSPAN monitoring ports that you have. The wizard lists available interfaces, even if you don't have any ERSPAN monitoring ports in your system. If you have no ERSPAN monitoring ports, leave all options unselected.
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
+
+1. In the `Select management interface` screen, we recommend keeping the default `eno1` value selected as the management interface.
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
+
+1. In the `Enter sensor IP address` screen, enter the IP address for the sensor appliance you're installing.
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
+
+1. In the `Enter path to the mounted backups folder` screen, enter the path to the sensor's mounted backups. We recommend using the default path of `/opt/sensor/persist/backups`. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
+
+1. In the `Enter Subnet Mask` screen, enter the IP address for the sensor's subnet mask. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-subnet-ip.png" alt-text="Screenshot of the Enter Subnet Mask screen.":::
+
+1. In the `Enter Gateway` screen, enter the sensor's default gateway IP address. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-gateway-ip.png" alt-text="Screenshot of the Enter Gateway screen.":::
+
+1. In the `Enter DNS server` screen, enter the sensor's DNS server IP address. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-dns-ip.png" alt-text="Screenshot of the Enter DNS server screen.":::
+
+1. In the `Enter hostname` screen, enter the sensor hostname. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the Enter hostname screen.":::
+
+1. In the `Run this sensor as a proxy server (Preview)` screen, select `<Yes>` only if you want to configure a proxy, and then enter the proxy credentials as prompted.
+
+ The default configuration is without a proxy.
+
+ For more information, see [Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (version 10.x)](../how-to-connect-sensor-by-proxy.md).
++
+1. <a name=credentials></a>The installation process starts running and then shows the credentials screen. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames, and passwords.":::
+
+ Save the usernames and passwords listed, as the passwords are unique and this is the only time that the credentials are shown. Copy the credentials to a safe place so that you can use them when signing into the sensor for the first time.
+
+ For more information, see [Default privileged on-premises users](../roles-on-premises.md#default-privileged-on-premises-users).
+
+ Select `<Ok>` when you're ready to continue.
+
+ The installation continues running again, and then reboots when the installation is complete. Upon reboot, you're prompted to enter credentials to sign in. For example:
+
+ :::image type="content" source="../media/tutorial-install-components/sensor-sign-in.png" alt-text="Screenshot of a sensor sign-in screen after installation.":::
+
+1. Enter the credentials for one of the users that you'd copied down in the [previous step](#credentials).
+
+ - If the `iot-sensor login:` prompt disappears, press **ENTER** to have it shown again.
+ - When you enter your password, the password characters don't display on the screen. Make sure you enter them carefully.
+
+ When you've successfully signed in, the following confirmation screen appears:
+
+ :::image type="content" source="../media/tutorial-install-components/install-complete.png" alt-text="Screenshot of the sign-in confirmation.":::
+
+Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](../how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Validate after installing software](post-install-validation-ot-software.md)
+
+> [!div class="nextstepaction"]
+> [Install software on an on-premises management console](install-software-on-premises-management-console.md)
+
+> [!div class="nextstepaction"]
+> [Troubleshooting](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Post Install Validation Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/post-install-validation-ot-software.md
+
+ Title: Post-installation validation of OT network monitoring software - Microsoft Defender for IoT
+description: Learn how to test your system post installation of OT network monitoring software for Microsoft Defender for IoT. Use this article after you've reinstalled software on a pre-configured appliance, or if you've chosen to install software on your own appliances.
Last updated : 12/13/2022+++
+# Post-installation validation of OT network monitoring software
+
+After you've installed OT software on your [OT sensors](install-software-ot-sensor.md) or [on-premises management console](install-software-on-premises-management-console.md), test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
+
+System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the *support* and *cyberx* users.
+
+## General tests
+
+After installing OT monitoring software, make sure to run the following tests:
+
+- **Sanity test**: Verify that the system is running.
+
+- **Version**: Verify that the version is correct.
+
+- **ifconfig**: Verify that all the input interfaces configured during the installation process are running (see the example commands after this list).
+
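For example, to check the interfaces from the appliance's CLI, you can use standard Linux commands such as the ones below. This is a generic sketch; interface names such as `eno1` depend on your appliance.

```bash
# List all network interfaces and confirm that each configured monitoring interface is present and UP
ifconfig -a

# Alternatively, show a brief per-interface link-state summary
ip -br link
```
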
+## Gateway checks
+
+Use the `route` command to show the gateway's IP address. For example:
+
+``` CLI
+<root@xsense:/# route -n
+Kernel IP routing table
+Destination Gateway Genmask Flags Metric Ref Use Iface
+0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
+172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
+>
+```
+
+Use the `arp -a` command to verify that there is a binding between the MAC address and the IP address of the default gateway. For example:
+
+``` CLI
+<root@xsense:/# arp -a
+cusalvtecca101-gi0-02-2851.network.microsoft.com (172.18.0.1) at 02:42:b0:3a:e8:b5 [ether] on eth0
+mariadb_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.5) at 02:42:ac:12:00:05 [ether] on eth0
+redis_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.3) at 02:42:ac:12:00:03 [ether] on eth0
+>
+```
+
+## DNS checks
+
+Use the `cat /etc/resolv.conf` command to find the IP address that's configured for DNS traffic. For example:
+``` CLI
+<root@xsense:/# cat /etc/resolv.conf
+search reddog.microsoft.com
+nameserver 127.0.0.11
+options ndots:0
+>
+```
+
+Use the `host` command to resolve an FQDN. For example:
+
+``` CLI
+<root@xsense:/# host www.apple.com
+www.apple.com is an alias for www.apple.com.edgekey.net.
+www.apple.com.edgekey.net is an alias for www.apple.com.edgekey.net.globalredir.akadns.net.
+www.apple.com.edgekey.net.globalredir.akadns.net is an alias for e6858.dscx.akamaiedge.net.
+e6858.dscx.akamaiedge.net has address 72.246.148.202
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:1b4::1aca
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:182::1aca
+>
+```
+
+## Firewall checks
+
+Use the `wget` command to verify that port 443 is open for communication. For example:
+
+``` CLI
+<root@xsense:/# wget https://www.apple.com
+--2022-11-09 11:21:15-- https://www.apple.com/
+Resolving www.apple.com (www.apple.com)... 72.246.148.202, 2a02:26f0:5700:1b4::1aca, 2a02:26f0:5700:182::1aca
+Connecting to www.apple.com (www.apple.com)|72.246.148.202|:443... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 99966 (98K) [text/html]
+Saving to: 'https://docsupdatetracker.net/index.html.1'
+
+https://docsupdatetracker.net/index.html.1 100%[===================>] 97.62K --.-KB/s in 0.02s
+
+2022-11-09 11:21:15 (5.88 MB/s) - 'https://docsupdatetracker.net/index.html.1' saved [99966/99966]
+
+>
+```
+
+For more information, see [Check system health](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshoot an OT sensor or on-premises management console](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+>
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsof
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
|||||
|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Title: Overview - Microsoft Defender for IoT for organizations description: Learn about Microsoft Defender for IoT's features for end-user organizations and comprehensive IoT security for OT and Enterprise IoT networks. Previously updated : 06/02/2022- Last updated : 12/25/2022 # Welcome to Microsoft Defender for IoT for organizations
-The Internet of Things (IoT) supports billions of connected devices that use operational technology (OT) networks. IoT/OT devices and networks are often designed without prioritizing security, and therefore can't be protected by traditional systems. With each new wave of innovation, the risk to IoT devices and OT networks increases the possible attack surfaces.
+The Internet of Things (IoT) supports billions of connected devices that use both operational technology (OT) and IoT networks. IoT/OT devices and networks are often built using specialized protocols, and may prioritize operational challenges over security.
-Microsoft Defender for IoT is a unified security solution for identifying IoT and OT devices, vulnerabilities, and threats. With Defender for IoT, you can manage them through a central interface. This set of documentation describes how end-user organizations can secure their entire IoT/OT environment, including protecting existing devices or building security into new IoT innovations.
+When IoT/OT devices can't be protected by traditional security monitoring systems, each new wave of innovation increases the risk and possible attack surfaces across those IoT devices and OT networks.
+Microsoft Defender for IoT is a unified security solution built specifically to identify IoT and OT devices, vulnerabilities, and threats. Use Defender for IoT to secure your entire IoT/OT environment, including existing devices that may not have built-in security agents.
-**For end-user organizations**, Microsoft Defender for IoT provides an agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments, or completely on-premises.
+Defender for IoT provides agentless, network layer monitoring, and integrates with both industrial equipment and security operation center (SOC) tools.
-**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/overview.md).
## Agentless device monitoring
-Many legacy IoT and OT devices don't support agents, and can therefore remain unpatched, misconfigured, and invisible to IT teams. These devices become soft targets for threat actors who want to pivot deeper into corporate networks.
+If your IoT and OT devices don't have embedded security agents, they may remain unpatched, misconfigured, and invisible to IT and security teams. Unmonitored devices can be soft targets for threat actors looking to pivot deeper into corporate networks.
-Traditional network security monitoring tools may lack understanding of networks containing specialized protocols, devices, and relevant machine-to-machine (M2M) behaviors. Agentless monitoring in Defender for IoT provides visibility and security into those networks.
+Defender for IoT uses agentless monitoring to provide visibility and security across your network, and identifies specialized protocols, devices, or machine-to-machine (M2M) behaviors.
- **Discover IoT/OT devices** in your network, their details, and how they communicate. Gather data from network sensors, Microsoft Defender for Endpoint, and third-party sources.
Traditional network security monitoring tools may lack understanding of networks
- Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
- - Detect advanced threats that you may have missed by static IOCs, such as zero-day malware, fileless malware, and living-off-the-land tactics.
+ - Detect advanced threats that you may have missed by static indicators of compromise (IOCs), such as zero-day malware, fileless malware, and living-off-the-land tactics.
+
+- **Respond to threats** by integrating with Microsoft services such as Microsoft Sentinel, other partner systems, and APIs. Integrate with security information and event management (SIEM) services, security operations and response (SOAR) services, extended detection and response (XDR) services, and more.
+
+Defender for IoT's centralized user experience in the Azure portal lets the security and OT monitoring teams visualize and secure all their IT, IoT, and OT devices regardless of where the devices are located.
+
+## Support for cloud, on-premises, and hybrid OT networks
+
+Install OT network sensors on-premises, at strategic locations in your network to detect devices across your entire OT environment. Then, use any of the following configurations to view your devices and security value:
+
+- **Cloud services**:
+
+ While OT network sensors have their own UI console that displays details and security data about detected devices, connect your sensors to Azure to extend your journey to the cloud.
+
+ From the Azure portal, view data from all connected sensors in a central location, and integrate with other Microsoft services, like Microsoft Sentinel.
-- **Respond to threats** by integrating with Microsoft services, such as Microsoft Sentinel, non-Microsoft systems, and APIs. Use advanced integrations for security information and event management (SIEM), security operations and response (SOAR), extended detection and response (XDR) services, and more.
+- **Air-gapped and on-premises services**:
-A centralized user experience lets the security team visualize and secure all their IT, IoT, and OT devices regardless of where the devices are located.
+ If you have an air-gapped environment and want to keep all your OT network data fully on-premises, connect your OT network sensors to an on-premises management console for central visibility and control.
-## Support for cloud, on-premises, and hybrid networks
+ Continue to view detailed device data and security value in each sensor console.
-Defender for IoT can support various network configurations:
+- **Hybrid services**:
-- **Cloud**: Extend your journey to the cloud by delivering your data to Azure. There you can visualize data from a central location. That data can be shared with other Microsoft services for end-to-end security monitoring and response.
+ You may have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises.
-- **On-premises**: For example, in air-gapped environments, you might want to keep all of your data fully on-premises. Use the data provided by each sensor and the central visualizations provided by an on-premises management console to ensure security on your network.
+ In this case, set up your system in a flexible and scalable configuration to fit your needs. Connect some of your OT sensors to the cloud and view data on the Azure portal, and keep other sensors managed on-premises only.
-- **Hybrid**: You may have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises. In this case, set up your system in a flexible and scalable configuration that fits your needs.
+For more information, see [System architecture for OT system monitoring](architecture.md).
-Regardless of configuration, data detected by a specific sensor is also always available in the sensor console.
+## Extend support to proprietary OT protocols
-## Extend support to proprietary protocols
+IoT and industrial control system (ICS) devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. If you have devices that run on protocols that aren't supported by Defender for IoT out-of-the-box, use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins to decode network traffic for your protocols.
-IoT and ICS devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic, regardless of protocol type.
+Create custom alerts for your plugin to pinpoint specific network activity and effectively update your security, IT, and operational teams. For example, have alerts triggered when:
-For example, in an environment running MODBUS, you can generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address. Alerts are triggered when Horizon alert rule conditions are met.
+- The sensor detects a write command to a memory register on a specific IP address and Ethernet destination.
+- Any access is performed to a specific IP address.
-Use custom, condition-based alert triggering and messaging to help pinpoint specific network activity and effectively update your security, IT, and operational teams.
-Contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com) for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
+For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
-## Protect enterprise networks
+## Protect enterprise IoT networks
-Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
+Use one or both of the following methods to extend Defender for IoT's agentless security features beyond OT environments to enterprise IoT devices.
-Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, devices.
+- Add an Enterprise IoT plan in Microsoft Defender for Endpoint for added alerts, vulnerabilities, and recommendations for IoT devices in Defender for Endpoint. An Enterprise IoT plan also provides a shared device inventory across the Azure portal and Microsoft 365 Defender.
-When you expand Microsoft Defender for IoT into the enterprise network, you can apply Microsoft 365 Defender's features for asset discovery and use Microsoft Defender for Endpoint for a single, integrated package that can secure all of your IoT/OT infrastructure.
+- Onboard an Enterprise IoT network sensor in Defender for IoT (Public Preview) to extend Defender for IoT device visibility to devices that aren't covered by Defender for Endpoint.
-Use Microsoft Defender for IoT's sensors as extra data sources. They provide visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any enterprise IoT devices discovered on the network by either service.
+Enterprise IoT devices can include devices such as printers, smart TVs, conferencing systems, and purpose-built, proprietary devices.
-For more information, see the [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint).
+For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
++
+## Defender for IoT for device builders
+
+Defender for IoT also provides a lightweight security micro-agent that you can use to build security straight into your new IoT innovations.
+
+For more information, see the [Microsoft Defender for IoT for device builders documentation](../device-builders/overview.md).
+
+## Supported service regions
+
+Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
+
+If you're using Defender for IoT OT monitoring software earlier than [22.1](release-notes.md#versions-222x) and are connecting through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
## Next steps
-For more information, see:
+> [!div class="nextstepaction"]
+> [View ICS/OT Security videos](https://www.youtube.com/playlist?list=PLmAptfqzxVEXz5txCCKYUdpQETAMpeOhu)
+
+> [!div class="nextstepaction"]
+> [Get started with OT security monitoring](getting-started.md)
-- [ICS/OT Security video series](https://www.youtube.com/playlist?list=PLmAptfqzxVEXz5txCCKYUdpQETAMpeOhu)
-- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
-- [Microsoft Defender for IoT architecture](architecture.md)
-- [Quickstart: Get started with Defender for IoT](getting-started.md)
+> [!div class="nextstepaction"]
+> [Get started with Enterprise IoT security monitoring](eiot-defender-for-endpoint.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 11/22/2022 Last updated : 1/02/2023 # OT monitoring software versions
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until |
| - | - | -- | - |
+| **22.3** | | | |
+| 22.3.4 | 01/2023 | Major | 12/2023 |
| **22.2** | | | |
| 22.2.8 | 11/2022 | Patch | 10/2023 |
| 22.2.7| 10/2022 | Patch | 09/2023 |
Version numbers are listed only in this article and in the [What's new in Micros
To understand whether a feature is supported in your sensor version, check the relevant version section below and its listed features.
+## Versions 22.3.x
+
+### 22.3.4
+
+**Release date**: 01/2023
+
+**Supported until**: 12/2023
+
+- [Azure connectivity status shown on OT sensors](how-to-manage-individual-sensors.md#validate-connectivity-status)
+ ## Versions 22.2.x
defender-for-iot Resources Training Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-training-sessions.md
Access the training at the following location:
## Next steps
-[Quickstart: Get started with Defender for IoT](getting-started.md#quickstart-get-started-with-defender-for-iot)
+[Quickstart: Get started with Defender for IoT](getting-started.md)
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
This article provides:
## Default privileged on-premises users
-By default, each sensor and on-premises management console is [installed](how-to-install-software.md#install-ot-monitoring-software) with the *cyberx* and *support* privileged users. OT sensors are also installed with the *cyberx_host* privileged user.
+By default, each [sensor](ot-deploy/install-software-ot-sensor.md) and [on-premises management console](ot-deploy/install-software-on-premises-management-console.md) is installed with the *cyberx* and *support* privileged users. OT sensors are also installed with the *cyberx_host* privileged user.
Privileged users have access to advanced tools for troubleshooting and setup, such as the CLI. When first setting up your sensor or on-premises management console, first sign in with one of the privileged users. Then create an initial user with an **Admin** role, and then use that admin user to create other users with other roles.
defender-for-iot Configure Mirror Erspan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-erspan.md
The installation wizard starts to run, and you can select the interfaces you wan
Complete the wizard to apply your changes.
-For more information, see [Install OT monitoring software](../how-to-install-software.md#install-ot-monitoring-software).
+For more information, see [Install OT monitoring software on OT sensors](../how-to-install-software.md).
+ ## Sample configuration on a Cisco switch The following code shows a sample `ifconfig` output for ERSPAN configured on a Cisco switch:
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Title: Tutorial - Get started with Microsoft Defender for IoT for OT security
-description: This tutorial describes how to use Microsoft Defender for IoT to set up a network for OT system security.
+ Title: Onboard and activate a virtual OT sensor - Microsoft Defender for IoT.
+description: This tutorial describes how to set up a virtual OT network sensor to monitor your OT network traffic.
Last updated 07/11/2022
Before you start, make sure that you have the following:
- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md). -- At least one device to monitor, with the device connected to a SPAN port on a switch.
+- Make sure that you have a network switch that supports traffic monitoring via a SPAN port. You'll also need at least one device to monitor, connected to the switch's SPAN port.
- VMware, ESXi 5.5 or later, installed and operational on your sensor.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 12/27/2022 Last updated : 01/03/2023 # What's new in Microsoft Defender for IoT?
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
-## December 2022
+## January 2023
|Service area |Updates | |||
-| **OT networks** | [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
-|**Enterprise IoT networks** | [Enterprise IoT sensor alerts and recommendations (Public Preview)](#enterprise-iot-sensor-alerts-and-recommendations-public-preview) |
+|**OT networks** | **Version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors) |
-### Enterprise IoT sensor alerts and recommendations (Public Preview)
+### Azure connectivity status shown on OT sensors
-The Azure portal now provides the following additional security data for traffic detected by Enterprise IoT network sensors:
+Details about Azure connectivity status are now shown on the **Overview** page in OT network sensors, and errors are shown if the sensor's connection to Azure is lost.
-|Data type |Description |
-|||
-|**Alerts** | The Enterprise IoT network sensor now triggers the following alerts: <br>- **Connection Attempt to Known Malicious IP** <br>- **Malicious Domain Name Request** |
-|**Recommendations** | The Enterprise IoT network sensor now triggers the following recommendation for detected devices, as relevant: <br>**Disable insecure administration protocol** |
+For example:
-For more information, see:
-- [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts)-- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)-- [Enhance security posture with security recommendations](recommendations.md)-- [Discover Enterprise IoT devices with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md)
+For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md) and [Onboard OT sensors to Defender for IoT](onboard-sensors.md).
+
+## December 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | - **Cloud feature**: [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
### New purchase experience for OT plans
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Known issues and limitations that are associated with online migrations from SQL
## Backup requirements -- **Backups with checksum**-
- Azure Database Migration Service uses the backup and restore method to migrate your on-premises databases to SQL Managed Instance. Azure Database Migration Service only supports backups created using checksum.
-
- [Enable or Disable Backup Checksums During Backup or Restore (SQL Server)](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
-
- > [!NOTE]
- > If you take the database backups with compression, the checksum is a default behavior unless explicitly disabled.
-
- With offline migrations, if you choose **I will let Azure Database Migration Service…**, then Azure Database Migration Service will take the database backup with the checksum option enabled.
- - **Backup media** Make sure to take every backup on a separate backup media (backup files). Azure Database Migration Service doesn't support backups that are appended to a single backup file. Take full backup and log backups to separate backup files.
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network related issues and lags that are causing this error. Wait for the process to complete. -- **Message**: `Migration for Database <DatabaseName> failed with error 'Full backup <URL of backup in Azure Storage container> is missing checksum. Provide full backup with checksum.'.`--- **Cause**: The database backups haven't been taken with checksum enabled.--- **Recommendation**: See [Enable or disable backup checksums during backup or restore (SQL Server)](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server) for taking backups with checksum enabled. -- - **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3234 Logical file <Name> isn't part of database <Database GUID>. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally.'.` - **Cause**: You've specified a logical file name that isn't in the database backup.
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Pre-requisites that are common across all supported migration scenarios using Az
> - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created. > - Make sure the Azure storage account blob container is used exclusively to store backup files only. Any other type of files (txt, png, jpg, etc.) will interfere with the restore process leading to a failure. > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
- > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
> - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported. > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. * Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
The following list describes each step in the workflow:
> If your migration target is Azure SQL Database, you don't need backups for this migration. Database migration to Azure SQL Database is considered a logical migration that involves the database's pre-creation and data movement (performed by Database Migration Service). > [!IMPORTANT]
+> The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
+>
> In online migration mode, Database Migration Service continuously uploads the backup source files to your Azure storage account and restores them to the target until you complete the final step of cutting over to the target. > > In offline migration mode, Database Migration Service uploads the backup source files to Azure storage and restores them to the target without requiring a cutover.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Before you begin the tutorial:
> [!IMPORTANT] >
+ > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
> - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service.
- > - Database Migration Service doesn't initiate any backups. Instead, the service uses existing backups for the migration. You might already have these backups as part of your disaster recovery plan.
- > - Make sure you [create backups by using the WITH CHECKSUM option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017).
> - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported. > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
- Azure storage account file share or blob container > [!IMPORTANT]
+ > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
> - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you might already have as part of your disaster recovery plan, for the migration.
- > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
> - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported. > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. * Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
Resource group, Azure storage account, Blob container from the corresponding dro
The final step of the tutorial is to complete the migration cutover to ensure the migrated database in Azure SQL Managed Instance is ready for use. This is the only part in the process that requires downtime for applications that connect to the database and hence the timing of the cutover needs to be carefully planned with business or application stakeholders.
-To complete the cutover,
+To complete the cutover:
-1. Stop all incoming transactions to the source database and prepare to make any application configuration changes to point to the target database in Azure SQL Managed Instance.
-2. take any tail log backups for the source database in the backup location specified
-3. ensure all database backups have the status *Restored* in the monitoring details page
-4. select *Complete cutover* in the monitoring details page
+1. Stop all incoming transactions to the source database.
+2. Make application configuration changes to point to the target database in Azure SQL Managed Instance.
+3. Take a final log backup of the source database in the specified backup location.
+4. Put the source database in read-only mode so that users can read data from the database but not modify it.
+5. Ensure all database backups have the status *Restored* in the monitoring details page.
+6. Select *Complete cutover* in the monitoring details page.
During the cutover process, the migration status changes from *in progress* to *completing*. When the cutover process is completed, the migration status changes to *succeeded* to indicate that the database migration is successful and that the migrated database is ready for use.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
You will learn how to:
> [!IMPORTANT] > For online migrations from SQL Server to SQL Managed Instance using Azure Database Migration Service, you must provide the full database backup and subsequent log backups in the SMB network share that the service can use to migrate your databases. Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
-> Be sure that you take [backups using the WITH CHECKSUM option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017). Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported. Finally, you can use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
+> Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported.
+> Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
> [!NOTE] > Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Before you begin the tutorial:
> [!IMPORTANT] >
+ > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
> - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service.
- > - Database Migration Service doesn't initiate any backups. Instead, the service uses existing backups for the migration. You might already have these backups as part of your disaster recovery plan.
> - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported. > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
- Azure storage account file share or blob container > [!IMPORTANT]
+ > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
> - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created. > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
The final step of the tutorial is to complete the migration cutover. The complet
To complete the cutover: 1. Stop all incoming transactions to the source database.
-1. Make application configuration changes to point to the target database in SQL Server on Azure Virtual Machine.
-1. Take any tail log backups for the source database in the backup location specified.
-1. Ensure all database backups have the status *Restored* in the monitoring details page.
-1. Select *Complete cutover* in the monitoring details page.
+2. Make application configuration changes to point to the target database in SQL Server on Azure Virtual Machines.
+3. Take a final log backup of the source database in the specified backup location.
+4. Put the source database in read-only mode so that users can read data from the database but not modify it.
+5. Ensure all database backups have the status *Restored* in the monitoring details page.
+6. Select *Complete cutover* in the monitoring details page.
During the cutover process, the migration status changes from *in progress* to *completing*. The migration status changes to *succeeded* when the cutover process is completed, which indicates that the database migration is successful and that the migrated database is ready for use.
external-attack-surface-management Deploying The Defender Easm Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md
After you create a resource group, you can create EASM resources within the grou
- **Region**: Select an Azure location. The following regions are supported: - southcentralus
- - eastus, australiaeast
+ - eastus
+ - australiaeast
- westus3 - swedencentral - eastasia
After you create a resource group, you can create EASM resources within the grou
## Next steps - [Using and managing discovery](using-and-managing-discovery.md)-- [Understanding dashboards](understanding-dashboards.md)
+- [Understanding dashboards](understanding-dashboards.md)
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Whether you're delivering content and files or building global apps and APIs, Azure Front Door can help you deliver higher availability, lower latency, greater scale, and more secure experiences to your users wherever they are.
-Azure Front Door is MicrosoftΓÇÖs modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applicationsΓÇÖ static and dynamic web content across the globe. Azure Front Door delivers your content using the MicrosoftΓÇÖs global edge network with hundreds of [global and local POPs](edge-locations-by-region.md) distributed around the world close to both your enterprise and consumer end users.
+Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. Azure Front Door delivers your content using Microsoft's global edge network with hundreds of [global and local points of presence (PoPs)](edge-locations-by-region.md) distributed around the world close to both your enterprise and consumer end users.
:::image type="content" source="./media/overview/front-door-overview.png" alt-text="Diagram of Azure Front Door routing user traffic to endpoints." lightbox="./media/overview/front-door-overview-expanded.png":::
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand the machine configuration feature of Azure Policy description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Previously updated : 11/16/2022 Last updated : 01/03/2023
parameters.
### Assignments to Azure Management Groups
-Azure Policy definitions in the category 'Guest Configuration' can be assigned
-to Management Groups only when the effect is 'AuditIfNotExists'. Policy
-definitions with effect 'DeployIfNotExists' aren't supported as assignments to
-Management Groups.
+Azure Policy definitions in the category `Guest Configuration` can be assigned
+to management groups when the effect is `AuditIfNotExists` or `DeployIfNotExists`.
### Client log files
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance description: Learn about the management groups, how their permissions work, and how to use them. Previously updated : 05/25/2022 Last updated : 01/03/2023
There are limitations that exist when using custom roles on management groups.
restriction is in place as there's a latency issue with updating the data plane resource providers. This latency issue is being worked on and these actions will be disabled from the role definition to reduce any risks.-- The Azure Resource Manager doesn't validate the management group's existence in the role
+- Azure Resource Manager doesn't validate the management group's existence in the role
definition's assignable scope. If there's a typo or an incorrect management group ID listed, the role definition is still created.-- Role assignment of a role with _dataActions_ isn't supported. Create the role assignment at the
- subscription scope instead.
> [!IMPORTANT] > Adding a management group to `AssignableScopes` is currently in preview. This preview version is
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
The following limitations apply only to the Azure Policy Add-on for AKS:
- [AKS Pod security policy](../../../aks/use-pod-security-policies.md) and the Azure Policy Add-on for AKS can't both be enabled. For more information, see [AKS pod security limitation](../../../aks/use-azure-policy.md).-- Namespaces automatically excluded by Azure Policy Add-on for evaluation: _kube-system_,
- _gatekeeper-system_, and _aks-periscope_.
+- Namespaces automatically excluded by Azure Policy Add-on for evaluation: _kube-system_ and
+ _gatekeeper-system_.
## Recommendations
In a Kubernetes cluster, if a namespace has the cluster-appropriate label, the a
with violations aren't denied. Compliance assessment results are still available. - Azure Arc-enabled Kubernetes cluster: `admission.policy.azure.com/ignore`-- Azure Kubernetes Service cluster: `control-plane` > [!NOTE] > While a cluster admin may have permission to create and update constraint templates and
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Plug and Play Events<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This PNP Activity policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical | ## System Audit Policies - DS Access
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic supp
You need to drop and recreate your clusters if you'd like to move existing HDInsight 4.0 clusters to Ubuntu 18.04. Plan to create or recreate your clusters after Ubuntu 18.04 support becomes available.
-After creating the new cluster, you can SSH to your cluster and run `sudo lsb_release -a` to verify that it runs on Ubuntu 18.04. We recommend that you test your applications in your test subscriptions first before moving to production. [Learn more about the HDInsight Ubuntu 18.04 update](./hdinsight-ubuntu-1804-qa.md).
+After creating the new cluster, you can SSH to your cluster and run `sudo lsb_release -a` to verify that it runs on Ubuntu 18.04. We recommend that you test your applications in your test subscriptions first before moving to production.
#### Scaling optimizations on HBase accelerated writes clusters HDInsight made some improvements and optimizations on scaling for HBase accelerated write enabled clusters. [Learn more about HBase accelerated write](./hbase/apache-hbase-accelerated-writes.md).
hdinsight Hdinsight Ubuntu 1804 Qa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-ubuntu-1804-qa.md
- Title: Azure HDInsight Ubuntu 18.04 update
-description: Learn about Azure HDInsight Ubuntu 18.04 OS changes.
---- Previously updated : 07/18/2022--
-# HDInsight Ubuntu 18.04 OS update
-
-This article provides more details for HDInsight Ubuntu 18.04 OS update and potential changes that are needed.
-
-## Update overview
-
-HDInsight has started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md). Ubuntu 18.04 won't be supported for HDInsight 3.6. If youΓÇÖd like to use Ubuntu 18.04, youΓÇÖll need to migrate your clusters to HDInsight 4.0. Spark 3.0 with HDInsight 4.0 is available only on Ubuntu 16.04. Spark 3.1 with HDInsight 4.0 will be shipping soon and will be available on Ubuntu 18.04.
-
-Drop and recreate your clusters if youΓÇÖd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster.
-
-## Script actions changes
-
-HDInsight script actions are used to install extra components and change configuration settings. A script action is a Bash script that runs on the nodes in an HDInsight cluster.
-
-There might be some potential changes you need to make for your script actions.
-
-**Change each instance of `xenial` to `bionic` when grabbing your packages wherever needed:**
-
-For example:
-- Update `http://packages.treasuredata.com/3/ubuntu/xenial/ xenial contrib` to `http://packages.treasuredata.com/3/ubuntu/bionic/ bionic contrib`.-- Update `http://azure.archive.ubuntu.com/ubuntu/ xenial main restricted` to `http://azure.archive.ubuntu.com/ubuntu/ bionic main restricted`.-
-**Some package versions are not present for bionic:**
-
-For example, [Node.js version 4.x](https://deb.nodesource.com/node_4.x/dists/) is not present in the bionic repo. [Node.js version 12.x](https://deb.nodesource.com/node_12.x/dists/bionic/) is present.
-
-Scripts that install old versions that are not present for bionic need to be updated to later versions.
-
-**/etc/rc.local does not exist by default in 18.04:**
-
-Some scripts use `/etc/rc.local` for service startups but it doesn't exist by default in Ubuntu 18.04. It should be converted to a proper systemd unit.
-
-**Base OS packages have been updated:**
-
-If your scripts rely on an older version package in Ubuntu 16.04, it may not work. SSH to your cluster node and run `dpkg --list` on your cluster node to show the details of all installed packages.
-
-**In general Ubuntu 18.04 has stricter rules than 16.04.**
-
-## Custom Applications
-Some [third party applications](./hdinsight-apps-install-applications.md) can be installed to the HDInsight cluster. Those applications may not work well with Ubuntu 18.04. To reduce the risk of breaking changes, HDInsight won't roll out the new image for subscriptions that had installed custom applications since 25 February 2021. If you want to try the new image with your test subscriptions, open a support ticket to enable your subscription.
-
-## Edge nodes
-With the new image, the OS for cluster edge nodes will also be updated to Ubuntu 18.04. Your existing clients need to be tested with the Ubuntu 18.04. To reduce the risk of breaking changes, HDInsight won't roll out the new image for subscriptions that had used edge nodes since 25 February 2021. If you want to try the new image with your test subscriptions, open a support ticket to enable your subscription.
-
-## References
-----
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules.md
Webhooks let you connect your IoT Central app to other applications and services
In this example, you connect to *RequestBin* to get notified when a rule fires:
-1. Open [RequestBin](https://requestbin.net/).
+1. Navigate to [RequestBin](https://requestbin.com/).
-1. Create a new RequestBin and copy the **Bin URL**.
+1. Select **Create a RequestBin**.
+
+1. Sign in with one of the available methods.
+
+1. Copy the URL of your RequestBin endpoint.
1. Add an action to your rule:
iot-central Howto Export To Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-webhook.md
For Webhook destinations, IoT Central exports data in near real time. The data i
## Create a Webhook destination
-You can export data to a publicly available HTTP Webhook endpoint. You can create a test Webhook endpoint using [RequestBin](https://requestbin.net/). RequestBin throttles request when the request limit is reached:
+You can export data to a publicly available HTTP Webhook endpoint. You can create a test Webhook endpoint using [RequestBin](https://requestbin.com/). RequestBin throttles requests when the request limit is reached:
-1. Open [RequestBin](https://requestbin.net/).
-1. Create a new RequestBin and copy the **Bin URL**. You use this URL when you test your data export.
+1. Navigate to [RequestBin](https://requestbin.com/).
+
+1. Select **Create a RequestBin**.
+
+1. Sign in with one of the available methods.
+
+1. Copy the URL of your RequestBin. You use this URL when you test your data export, as shown in the sketch after these steps.
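Once you've copied the endpoint URL, you can optionally confirm that it accepts requests before wiring it into the export. The following is a minimal sketch only; the URL and payload are placeholders, and it assumes the Python `requests` package is installed.

```python
# Illustrative sketch: post a small test payload to your RequestBin endpoint
# to confirm it is reachable before configuring the data export destination.
import requests

WEBHOOK_URL = "https://<your-bin-id>.m.pipedream.net"  # placeholder - use the URL you copied

sample_payload = {"deviceId": "test-device", "telemetry": {"temperature": 21.5}}
response = requests.post(WEBHOOK_URL, json=sample_payload, timeout=10)
print(response.status_code)  # the request should also appear in the RequestBin view
```

If the test request shows up in RequestBin, the same URL can be used as the Webhook destination.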
To create the Webhook destination in IoT Central on the **Data export** page:
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
The response to this request looks like the following example:
"id": "9742a8d9-c3ca-4d8d-8bc7-357bdc7f39d9", "displayName": "Webhook destination", "type": "webhook@v1",
- "url": "http://requestbin.net/r/f7x2i1ug",
+ "url": "https://eofnjsh68jdytan.m.pipedream.net",
"headerCustomizations": {}, "status": "error", }
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
To create your test endpoint for the data export destination:
1. Navigate to [RequestBin](https://requestbin.com/). 1. Select **Create a RequestBin**.
+1. Sign in with one of the available methods.
1. Copy the URL of your RequestBin endpoint. 1. In Postman, open the **IoT Central REST tutorial** collection and navigate to the collection variables. 1. Paste the URL of your RequestBin endpoint into the **Current value** column for **webHookURL** in the collection variables.
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
To connect the MXCHIP DevKit to Azure, you'll modify a configuration file for Wi
|Constant name|Value| |-|--| |`IOT_HUB_HOSTNAME` |{*Your Iot hub hostName value*}|
- |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
+ |`IOT_HUB_DEVICE_ID` |{*Your Device ID value*}|
|`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}| 1. Save and close the file.
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-file-upload.md
Previously updated : 10/25/2021 Last updated : 12/30/2022 # Upload files with IoT Hub
-There are many scenarios where you can't easily map your device data into the relatively small device-to-cloud messages that IoT Hub accepts. For example, sending large media files like video; or, sending large telemetry batches either uploaded by intermittently connected devices or that have been aggregated and compressed to save bandwidth.
+There are many scenarios where you can't easily map your device data into the relatively small device-to-cloud messages that IoT Hub accepts. For example, sending large media files like video; or, sending large telemetry batches, either uploaded by intermittently connected devices or aggregated and compressed to save bandwidth.
When you need to upload large files from a device, you can still use the security and reliability of IoT Hub. Instead of brokering messages through itself, however, IoT Hub acts as a dispatcher to an associated Azure storage account. IoT Hub can also provide notification to backend services when a device completes a file upload.
-If you need help with deciding when to use reported properties, device-to-cloud messages, or file uploads, see [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md).
+If you need help with deciding when to use reported properties, device-to-cloud messages, or file uploads, see [Device-to-cloud communications guidance](iot-hub-devguide-d2c-guidance.md).
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)] ## File upload overview
-An IoT hub facilitates file uploads from connected devices by providing them with shared access signature (SAS) URIs on a per-upload basis for a blob container and Azure storage account that have been preconfigured with the hub. There are three parts to using file uploads with IoT Hub: preconfiguring an Azure storage account and blob container on your IoT hub, uploading files from devices, and, optionally, notifying backend services of completed file uploads.
+An IoT hub facilitates file uploads from connected devices by providing them with shared access signature (SAS) URIs on a per-upload basis for a blob container and Azure storage account that have been pre-configured with the hub. There are three parts to using file uploads with IoT Hub: pre-configuring an Azure storage account and blob container on your IoT hub, uploading files from devices, and, optionally, notifying backend services of completed file uploads.
Before you can use the file upload feature, you must associate an [Azure storage account](../storage/common/storage-account-overview.md) and [blob container](../storage/blobs/storage-blobs-introduction.md) with your IoT hub. You can also configure settings that control how IoT Hub authenticates with Azure storage, the time-to-live (TTL) of the SAS URIs that the IoT hub hands out to devices, and file upload notifications to your backend services. To learn more, see [Associate an Azure storage account with IoT Hub](#associate-an-azure-storage-account-with-iot-hub). Devices follow a three-step process to upload a file to the associated blob container:
-1. The device initiates the file upload with the IoT hub. It passes the name of a blob in the request and gets a SAS URI and a correlation ID in return. The SAS URI contains a SAS token for Azure storage that grants the device read-write permission on the requested blob in the blob container. For details, see [Device: Initialize a file upload](#device-initialize-a-file-upload).
+1. The device initiates the file upload with the IoT hub. It passes the name of a blob in the request and gets a SAS URI and a correlation ID in return. The SAS URI contains a SAS token for Azure storage that grants the device read-write permission on the requested blob in the blob container. For more information, see [Device: Initialize a file upload](#device-initialize-a-file-upload).
-1. The device uses the SAS URI to securely call Azure blob storage APIs to upload the file to the blob container. For details, see [Device: Upload file using Azure storage APIs](#device-upload-file-using-azure-storage-apis).
+1. The device uses the SAS URI to securely call Azure blob storage APIs to upload the file to the blob container. For more information, see [Device: Upload file using Azure storage APIs](#device-upload-file-using-azure-storage-apis).
-1. When the file upload is complete, the device notifies the IoT hub of the completion status using the correlation ID it received from IoT Hub when it initiated the upload. For details, see [Device: Notify IoT Hub of a completed file upload](#device-notify-iot-hub-of-a-completed-file-upload).
+1. When the file upload is complete, the device notifies the IoT hub of the completion status using the correlation ID it received from IoT Hub when it initiated the upload. For more information, see [Device: Notify IoT Hub of a completed file upload](#device-notify-iot-hub-of-a-completed-file-upload).
-Backend services can subscribe to file upload notifications on the IoT hub's service-facing file upload notification endpoint. If you've enabled these notifications on your IoT hub, it delivers them on this endpoint whenever a device notifies the hub that it has completed a file upload. Services can use these notifications to trigger further processing of the blob data. For details, see [Service: File upload notifications](#service-file-upload-notifications).
+Backend services can subscribe to file upload notifications on the IoT hub's service-facing file upload notification endpoint. If you've enabled these notifications on your IoT hub, it delivers them on this endpoint whenever a device notifies the hub that it has completed a file upload. Services can use these notifications to trigger further processing of the blob data. For more information, see [Service: File upload notifications](#service-file-upload-notifications).
-File upload is fully supported by the Azure IoT device and service SDKs. For details, see [File upload using an SDK](#file-upload-using-an-sdk).
+File upload is fully supported by the Azure IoT device and service SDKs. For more information, see [File upload using an SDK](#file-upload-using-an-sdk).
### File upload quotas and limits
-IoT Hub imposes throttling limits on the number of file uploads that it can initiate in a given period. The threshold is based on the SKU and number of units of your IoT hub. Additionally, each device is limited to 10 concurrent active file uploads at a time. For more information, see [Throttling and quotas](iot-hub-devguide-quotas-throttling.md).
+IoT Hub imposes throttling limits on the number of file uploads that it can initiate in a given period. The threshold is based on the SKU and number of units of your IoT hub. Additionally, each device is limited to 10 concurrent active file uploads at a time. For more information, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
## Associate an Azure storage account with IoT Hub
-You must associate an Azure storage account and blob container with your IoT hub to use file upload features. All file uploads from devices registered with your IoT hub will go to this container. To configure a storage account and blob container on your IoT hub, see [Configure file uploads with Azure portal](iot-hub-configure-file-upload.md), [Configure file uploads with Azure CLI](iot-hub-configure-file-upload-cli.md), or [Configure file uploads with PowerShell](iot-hub-configure-file-upload-powershell.md). You can also use the IoT Hub management APIs to configure file uploads programmatically.
+You must associate an Azure storage account and blob container with your IoT hub to use file upload features. All file uploads from devices registered with your IoT hub will go to this container. To configure a storage account and blob container on your IoT hub, see [Configure IoT Hub file uploads using the Azure portal](iot-hub-configure-file-upload.md), [Configure IoT Hub file uploads using Azure CLI](iot-hub-configure-file-upload-cli.md), or [Configure IoT Hub file uploads using PowerShell](iot-hub-configure-file-upload-powershell.md). You can also use the IoT Hub management APIs to configure file uploads programmatically.
-If you use the portal, you can create a storage account and container during configuration. Otherwise, to create a storage account, see [Create a storage account](../storage/common/storage-account-create.md) in the Azure storage documentation. Once you have a storage account, you can see how to create a blob container in the [Azure blob storage quickstarts](../storage/blobs/storage-quickstart-blobs-portal.md). By default, Azure IoT Hub uses key-based authentication to connect and authorize with Azure Storage. You can also configure user-assigned or system-assigned managed identities to authenticate Azure IoT Hub with Azure Storage. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn how to configure managed identities, see [Configure file upload with managed identities](iot-hub-managed-identity.md#configure-file-upload-with-managed-identities).
+If you use the portal, you can create a storage account and container during configuration. Otherwise, to create a storage account, see [Create a storage account](../storage/common/storage-account-create.md) in the Azure storage documentation. Once you have a storage account, you can see how to create a blob container in the [Azure Blob Storage quickstarts](../storage/blobs/storage-quickstart-blobs-portal.md). By default, Azure IoT Hub uses key-based authentication to connect and authorize with Azure Storage. You can also configure user-assigned or system-assigned managed identities to authenticate Azure IoT Hub with Azure Storage. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn how to configure managed identities, see the [Configure file upload with managed identities](iot-hub-managed-identity.md#configure-file-upload-with-managed-identities) section of [IoT Hub support for managed identities](iot-hub-managed-identity.md).
-File upload is subject to [Azure storage's firewall settings](../storage/common/storage-network-security.md). Based on your authentication configuration, you will need to ensure your devices can communicate with Azure storage.
+File upload is subject to [Azure Storage's firewall settings](../storage/common/storage-network-security.md). Based on your authentication configuration, you'll need to ensure your devices can communicate with Azure storage.
There are several other settings that control the behavior of file uploads and file upload notifications. The following sections list all of the settings available. Depending on whether you use the Azure portal, Azure CLI, PowerShell, or the management APIs to configure file uploads, some of these settings may not be available. Make sure to set the **enableFileUploadNotifications** setting if you want notifications sent to your backend services when a file upload completes. ### Iot Hub storage and authentication settings
-The following settings associate a storage account and container with your IoT hub and control how your hub authenticates with Azure storage. These settings do not affect how devices authenticate with Azure storage. Devices always authenticate with the SAS token presented in the SAS URI retrieved from IoT Hub.
+The following settings associate a storage account and container with your IoT hub and control how your hub authenticates with Azure storage. These settings don't affect how devices authenticate with Azure storage. Devices always authenticate with the SAS token presented in the SAS URI retrieved from IoT Hub.
| Property | Description | Range and default | | | | |
The following settings control file uploads from the device.
| Property | Description | Range and default | | | | |
-| **storageEndpoints.$default.ttlAsIso8601** | Default TTL for SAS URIs generated by IoT Hub. | ISO_8601 interval up to 48 hours (minimum 1 minute). Default: 1 hour. |
+| **storageEndpoints.$default.ttlAsIso8601** | Default TTL for SAS URIs generated by IoT Hub. | ISO_8601 interval up to 48 hours (minimum one minute). Default: one hour. |
### File upload notification settings
The following settings control file upload notifications to backend services.
| Property | Description | Range and default | | | | | | **enableFileUploadNotifications** |Controls whether file upload notifications are written to the file notifications endpoint. |Bool. Default: False. |
-| **fileNotifications.ttlAsIso8601** |Default TTL for file upload notifications. |ISO_8601 interval up to 48 hours (minimum 1 minute). Default: 1 hour. |
+| **fileNotifications.ttlAsIso8601** |Default TTL for file upload notifications. |ISO_8601 interval up to 48 hours (minimum one minute). Default: one hour. |
| **fileNotifications.lockDuration** |Lock duration for the file upload notifications queue. |5 to 300 seconds. Default: 60 seconds. | | **fileNotifications.maxDeliveryCount** |Maximum delivery count for the file upload notification queue. |1 to 100. Default: 100. | ## File upload using an SDK
-The following how-to guides provide complete, step-by-step instructions to upload files using the Azure IoT device and service SDKs. They show you how to use the Azure portal to associate a storage account with an IoT hub, and they contain code snippets or refer to samples that guide you through an upload.
+The following how-to guides provide complete, step-by-step instructions to upload files using the Azure IoT device and service SDKs. The guides show you how to use the Azure portal to associate a storage account with an IoT hub. The guides also contain code snippets or refer to samples that guide you through an upload.
| How-to guide | Device SDK example | Service SDK example | ||--||
The following how-to guides provide complete, step-by-step instructions to uploa
| [Python](iot-hub-python-python-file-upload.md) | Yes | No (not supported) | > [!NOTE]
-> The C device SDK uses a single call on the device client to perform file uploads. For more information, see [IoTHubDeviceClient_UploadToBlobAsync()](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothub_device_client.h#L328) and [IoTHubDeviceClient_UploadMultipleBlocksToBlobAsync()](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothub_device_client.h#L350). These functions perform all aspects of the file upload in a single call -- initiating the upload, uploading the file to Azure storage, and notifying IoT Hub when it completes. This means that, in addition to whatever protocol the device is using to communicate with IoT Hub, it will also need to be able to communicate over HTTPS with Azure storage as these functions make calls to the Azure storage APIs.
+> The C device SDK uses a single call on the device client to perform file uploads. For more information, see [IoTHubDeviceClient_UploadToBlobAsync()](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothub_device_client.h#L328) and [IoTHubDeviceClient_UploadMultipleBlocksToBlobAsync()](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothub_device_client.h#L350). These functions perform all aspects of the file upload in a single call: initiating the upload, uploading the file to Azure storage, and notifying IoT Hub when it completes. This interaction means that, in addition to whatever protocol the device is using to communicate with IoT Hub, the device also needs to be able to communicate over HTTPS with Azure storage as these functions make calls to the Azure storage APIs.
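As a minimal sketch of this flow with the Python device SDK (not a definitive implementation), the example below assumes the `azure-iot-device` and `azure-storage-blob` packages, a device connection string in an environment variable, and a local file named `myfile.txt`; all of those names are placeholders.

```python
# Minimal sketch of the three-step upload flow with the Python device SDK.
# Assumes azure-iot-device and azure-storage-blob are installed; the
# environment variable, file name, and status text are placeholders.
import os
from azure.iot.device import IoTHubDeviceClient
from azure.storage.blob import BlobClient

device_client = IoTHubDeviceClient.create_from_connection_string(
    os.environ["IOTHUB_DEVICE_CONNECTION_STRING"])
device_client.connect()

blob_name = "myfile.txt"

# Step 1: initiate the upload and receive the SAS details and correlation ID.
storage_info = device_client.get_storage_info_for_blob(blob_name)

# Step 2: upload the file to the blob container with the Azure storage APIs,
# using the SAS URI assembled from the fields returned by IoT Hub.
sas_url = "https://{}/{}/{}{}".format(
    storage_info["hostName"],
    storage_info["containerName"],
    storage_info["blobName"],
    storage_info["sasToken"])
blob_client = BlobClient.from_blob_url(sas_url)
with open(blob_name, "rb") as data:
    blob_client.upload_blob(data, overwrite=True)

# Step 3: notify IoT Hub that the upload completed, using the correlation ID.
device_client.notify_blob_upload_status(
    storage_info["correlationId"], True, 200, "upload complete")

device_client.shutdown()
```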
## Device: Initialize a file upload
When it receives the response, the device:
* Uses the other properties to construct a SAS URI for the blob that it uses to authenticate with Azure storage. The SAS URI contains the resource URI for the requested blob and the SAS token. It takes following form: `https://{hostName}/{containerName}/{blobName}{sasToken}` (The `sasToken` property in the response contains a leading '?' character.) The braces aren't included.
- For example, for the values returned in the sample above, the SAS URI is, `https://contosostorageaccount.blob.core.windows.net/device-upload-container/mydevice/myfile.txt?sv=2018-03-28&sr=b&sig=mBLiODhpKXBs0y9RVzwk1S...l1X9qAfDuyg%3D&se=2021-07-30T06%3A11%3A10Z&sp=rw`
+ For example, for the values returned in the previous sample, the SAS URI is, `https://contosostorageaccount.blob.core.windows.net/device-upload-container/mydevice/myfile.txt?sv=2018-03-28&sr=b&sig=mBLiODhpKXBs0y9RVzwk1S...l1X9qAfDuyg%3D&se=2021-07-30T06%3A11%3A10Z&sp=rw`
For more information about the SAS URI and SAS token, see [Create a service SAS](/rest/api/storageservices/create-service-sas) in the Azure storage documentation. ## Device: Upload file using Azure storage APIs
-The device uses the [Azure Blob storage REST APIs](/rest/api/storageservices/blob-service-rest-api) or equivalent Azure storage SDK APIs to upload the file to the blob in Azure storage.
+The device uses the [Azure Blob Storage REST APIs](/rest/api/storageservices/blob-service-rest-api) or equivalent Azure storage SDK APIs to upload the file to the blob in Azure storage.
**Supported protocols**: HTTPS
hello world
Working with Azure storage APIs is beyond the scope of this article. In addition to the Azure Blob storage REST APIs linked previously in this section, you can explore the following documentation to help you get started:
-* To learn more about working with blobs in Azure storage, see the [Azure blob storage](../storage/blobs/index.yml) documentation.
+* To learn more about working with blobs in Azure storage, see the [Azure Blob Storage documentation](../storage/blobs/index.yml).
* For information about using Azure storage client SDKs to upload blobs, see [Azure Blob Storage API reference](../storage/blobs/reference.md).
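As a small, illustrative sketch of calling the Blob REST API directly (assuming the Python `requests` package and a SAS URI obtained as described earlier; the file name and URI are placeholders), a small file can be uploaded as a block blob with a single Put Blob call:

```python
# Illustrative sketch: upload a small file as a block blob with one Put Blob
# call against the SAS URI returned by IoT Hub. Assumes the requests package.
import requests

sas_uri = "https://contosostorageaccount.blob.core.windows.net/device-upload-container/mydevice/myfile.txt?<sasToken>"  # placeholder

with open("myfile.txt", "rb") as f:
    response = requests.put(
        sas_uri,
        data=f,
        headers={"x-ms-blob-type": "BlockBlob"},  # required header for Put Blob
    )
response.raise_for_status()  # a successful Put Blob returns 201 Created
```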
When it receives a file upload complete notification from the device, IoT Hub:
## Service: File upload notifications
-If file upload notifications are enabled on your IoT hub, it generates a notification message for backend services when it receives notification from a device that a file upload is complete. IoT Hub delivers these file upload notifications through a service-facing endpoint. The receive semantics for file upload notifications are the same as for cloud-to-device messages and have the same [message life cycle](iot-hub-devguide-messages-c2d.md#the-cloud-to-device-message-life-cycle). The service SDKs expose APIs to handle file upload notifications.
+If file upload notifications are enabled on your IoT hub, your hub generates a notification message for backend services when it receives notification from a device that a file upload is complete. IoT Hub delivers these file upload notifications through a service-facing endpoint. The receive semantics for file upload notifications are the same as for cloud-to-device messages and have the same [message life cycle](iot-hub-devguide-messages-c2d.md#the-cloud-to-device-message-life-cycle). The service SDKs expose APIs to handle file upload notifications.
**Supported protocols** AMQP, AMQP-WS <br/> **Endpoint**: {iot hub}.azure-devices.net/messages/servicebound/fileuploadnotifications <br/>
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
Previously updated : 10/12/2018 Last updated : 12/30/2022
IoT Hub enables devices to communicate with the IoT Hub device endpoints using:
-* [MQTT v3.1.1](https://mqtt.org/) on port 8883
-* MQTT v3.1.1 over WebSocket on port 443.
+* [MQTT v3.1.1](https://mqtt.org/) on TCP port 8883
+* MQTT v3.1.1 over WebSocket on TCP port 443.
IoT Hub isn't a full-featured MQTT broker and doesn't support all the behaviors specified in the MQTT v3.1.1 standard. This article describes how devices can use supported MQTT behaviors to communicate with IoT Hub. [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-All device communication with IoT Hub must be secured using TLS/SSL. Therefore, IoT Hub doesn't support non-secure connections over port 1883.
+All device communication with IoT Hub must be secured using TLS/SSL. Therefore, IoT Hub doesn't support non-secure connections over TCP port 1883.
## Connecting to IoT Hub
A device can use the MQTT protocol to connect to an IoT hub using any of the fol
* Libraries in the [Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks). * The MQTT protocol directly.
-The MQTT port (8883) is blocked in many corporate and educational networking environments. If you can't open port 8883 in your firewall, we recommend using MQTT over Web Sockets. MQTT over Web Sockets communicates over port 443, which is almost always open in networking environments. To learn how to specify the MQTT and MQTT over Web Sockets protocols when using the Azure IoT SDKs, see [Using the device SDKs](#using-the-device-sdks).
+The MQTT port (TCP port 8883) is blocked in many corporate and educational networking environments. If you can't open port 8883 in your firewall, we recommend using MQTT over WebSockets. MQTT over WebSockets communicates over port 443, which is almost always open in networking environments. To learn how to specify the MQTT and MQTT over WebSockets protocols when using the Azure IoT SDKs, see [Using the device SDKs](#using-the-device-sdks).
## Using the device SDKs
-[Device SDKs](https://github.com/Azure/azure-iot-sdks) that support the MQTT protocol are available for Java, Node.js, C, C#, and Python. The device SDKs use the chosen [authentication mechanism](iot-concepts-and-iot-hub.md#device-identity-and-authentication) to establish a connection to an IoT hub. To use the MQTT protocol, the client protocol parameter must be set to **MQTT**. You can also specify MQTT over Web Sockets in the client protocol parameter. By default, the device SDKs connect to an IoT Hub with the **CleanSession** flag set to **0** and use **QoS 1** for message exchange with the IoT hub. While it's possible to configure **QoS 0** for faster message exchange, you should note that the delivery isn't guaranteed nor acknowledged. For this reason, **QoS 0** is often referred as "fire and forget".
+[Device SDKs](https://github.com/Azure/azure-iot-sdks) that support the MQTT protocol are available for Java, Node.js, C, C#, and Python. The device SDKs use the chosen [authentication mechanism](iot-concepts-and-iot-hub.md#device-identity-and-authentication) to establish a connection to an IoT hub. To use the MQTT protocol, the client protocol parameter must be set to **MQTT**. You can also specify MQTT over WebSockets in the client protocol parameter. By default, the device SDKs connect to an IoT hub with the **CleanSession** flag set to **0** and use **QoS 1** for message exchange with the IoT hub. While it's possible to configure **QoS 0** for faster message exchange, note that delivery isn't guaranteed or acknowledged. For this reason, **QoS 0** is often referred to as "fire and forget".
When a device is connected to an IoT hub, the device SDKs provide methods that enable the device to exchange messages with an IoT hub.
-The following table contains links to code samples for each supported language and specifies the parameter to use to establish a connection to IoT Hub using the MQTT or the MQTT over Web Sockets protocol.
+The following table contains links to code samples for each supported language and specifies the parameter to use to establish a connection to IoT Hub using the MQTT or the MQTT over WebSockets protocol.
-| Language | MQTT protocol parameter | MQTT over Web Sockets protocol parameter
+| Language | MQTT protocol parameter | MQTT over WebSockets protocol parameter
| | | | | [Node.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device.js) | azure-iot-device-mqtt.Mqtt | azure-iot-device-mqtt.MqttWs | | [Java](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-samples/send-receive-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/SendReceive.java) |[IotHubClientProtocol](/java/api/com.microsoft.azure.sdk.iot.device.iothubclientprotocol).MQTT | IotHubClientProtocol.MQTT_WS | | [C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_mqtt_dm) | [MQTT_Protocol](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothubtransportmqtt.h) | [MQTT_WebSocket_Protocol](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothubtransportmqtt_websockets.h) |
-| [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype).Mqtt | TransportType.Mqtt falls back to MQTT over Web Sockets if MQTT fails. To specify MQTT over Web Sockets only, use TransportType.Mqtt_WebSocket_Only |
+| [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype).Mqtt | TransportType.Mqtt falls back to MQTT over WebSockets if MQTT fails. To specify MQTT over WebSockets only, use TransportType.Mqtt_WebSocket_Only |
| [Python](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | Supports MQTT by default | Add `websockets=True` in the call to create the client |
-The following fragment shows how to specify the MQTT over Web Sockets protocol when using the Azure IoT Node.js SDK:
+The following fragment shows how to specify the MQTT over WebSockets protocol when using the Azure IoT Node.js SDK:
```javascript var Client = require('azure-iot-device').Client;
var Protocol = require('azure-iot-device-mqtt').MqttWs;
var client = Client.fromConnectionString(deviceConnectionString, Protocol); ```
-The following fragment shows how to specify the MQTT over Web Sockets protocol when using the Azure IoT Python SDK:
+The following fragment shows how to specify the MQTT over WebSockets protocol when using the Azure IoT Python SDK:
```python from azure.iot.device.aio import IoTHubDeviceClient
device_client = IoTHubDeviceClient.create_from_connection_string(deviceConnectio
### Default keep-alive timeout
-In order to ensure a client/IoT Hub connection stays alive, both the service and the client regularly send a *keep-alive* ping to each other. The client using IoT SDK sends a keep-alive at the interval defined in this table below:
+In order to ensure a client/IoT Hub connection stays alive, both the service and the client regularly send a *keep-alive* ping to each other. A client using one of the IoT SDKs sends a keep-alive ping at the interval defined in the following table:
|Language |Default keep-alive interval |Configurable | ||||
In order to ensure a client/IoT Hub connection stays alive, both the service and
|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) | |Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
-> *The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds but in reality the SDK sends a ping request four times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
+*The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds. However, the SDK sends a ping request four times per keep-alive duration, which means it sends a keep-alive ping every 75 seconds.
-Following the [MQTT spec](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value. However, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds) because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes.
+Following the [MQTT v3.1.1 specification](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value; however, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds). This limit exists because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes.
For example, a device using the Java SDK sends the keep-alive ping, then loses network connectivity. 230 seconds later, the device misses the keep-alive ping because it's offline. However, IoT Hub doesn't close the connection immediately - it waits another `(230 * 1.5) - 230 = 115` seconds before disconnecting the device with the error [404104 DeviceConnectionClosedRemotely](iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md).
-The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds. Any traffic will reset the keep-alive. For example, a successful SAS token refresh resets the keep-alive.
+The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds. Any traffic will reset the keep-alive. For example, a successful shared access signature (SAS) token refresh resets the keep-alive.
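For example, the following sketch shows one way to override the default keep-alive when creating a client with the Python device SDK. The `keep_alive` option and the placeholder connection string are assumptions based on the SDK reference linked in the preceding table.

```python
from azure.iot.device import IoTHubDeviceClient

# Placeholder connection string
conn_str = "[Device Connection String]"

# Keep the value at or below 1177 seconds so the server-side timeout
# (1.5 x keep-alive) stays within IoT Hub's 29.45-minute limit.
device_client = IoTHubDeviceClient.create_from_connection_string(conn_str, keep_alive=300)
```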
### Migrating a device app from AMQP to MQTT
When doing so, make sure to check the following items:
## Example in C using MQTT without an Azure IoT SDK
-In the [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample), you'll find a couple of C/C++ demo projects showing how to send telemetry messages, and receive events with an IoT hub without using the Azure IoT C SDK.
+In the [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample), you'll find a couple of C/C++ demo projects showing how to send telemetry messages and receive events with an IoT hub without using the Azure IoT C SDK.
-These samples use the Eclipse Mosquitto library to send messages to the MQTT Broker implemented in the IoT hub.
+These samples use the [Eclipse Mosquitto](https://mosquitto.org) library to send messages to the MQTT broker implemented in the IoT hub.
To learn how to adapt the samples to use the [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions, see [Tutorial - Use MQTT to develop an IoT Plug and Play device client](../iot-develop/tutorial-use-mqtt.md).
-This repository contains:
+This repository contains the following examples:
**For Windows:**
-* TelemetryMQTTWin32: contains code to send a telemetry message to an Azure IoT hub, built and run on a Windows machine.
+* `mosquitto_telemetry` contains code to send a telemetry message to an Azure IoT hub, built and run on a Windows machine.
-* SubscribeMQTTWin32: contains code to subscribe to events of a given IoT hub on a Windows machine.
+* `mosquitto_subscribe` contains code to subscribe to events of a given IoT hub on a Windows machine.
-* DeviceTwinMQTTWin32: contains code to query and subscribe to the device twin events of a device in the Azure IoT hub on a Windows machine.
-
-* PnPMQTTWin32: contains code to send a telemetry message with IoT Plug and Play device capabilities to an Azure IoT hub, built and run on a Windows machine. You can read more on [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md)
+* `mosquitto_device_twin` contains code to query and subscribe to the device twin events of a device in the Azure IoT hub on a Windows machine.
**For Linux:**
-* MQTTLinux: contains code and build script to run on Linux (WSL, Ubuntu, and Raspbian have been tested so far).
+* `MQTTLinux` contains code and a build script to run on Linux (WSL, Ubuntu, and Raspbian have been tested so far).
-* LinuxConsoleVS2019: contains the same code but in a VS2019 project targeting WSL (Windows Linux sub system). This project allows you to debug the code running on Linux step by step from Visual Studio.
+* `LinuxConsoleVS2019` contains the same code but in a Visual Studio 2019 (VS2019) project targeting Windows Subsystem for Linux (WSL). This project allows you to debug the code running on Linux step by step from Visual Studio.
**For mosquitto_pub:**
-This folder contains two samples commands used with mosquitto_pub utility tool provided by Mosquitto.org.
+This folder contains two sample commands used with the mosquitto_pub utility tool provided by [Eclipse Mosquitto](https://mosquitto.org).
-* Mosquitto_sendmessage: to send a text message to an IoT hub acting as a device.
+* [Send a message](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/mosquitto_pub#send-a-message) sends a text message to an IoT hub, acting as a device.
-* Mosquitto_subscribe: to see events occurring in an IoT hub.
+* [Subscribe to events](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/mosquitto_pub#subscribe-to-events) subscribes to and displays events occurring in an IoT hub.
## Using the MQTT protocol directly (as a device)
If a device can't use the device SDKs, it can still connect to the public device
> [!NOTE] > If you use X.509 certificate authentication, SAS token passwords are not required. For more information, see [Set up X.509 security in your Azure IoT Hub](./tutorial-x509-scripts.md) and follow code instructions in the [TLS/SSL configuration section](#tlsssl-configuration).
- For more information about how to generate SAS tokens, see the device section of [Using IoT Hub security tokens](iot-hub-dev-guide-sas.md#use-sas-tokens-as-a-device).
+ For more information about how to generate SAS tokens, see the [Use SAS tokens as a device](iot-hub-dev-guide-sas.md#use-sas-tokens-as-a-device) section of [Control access to IoT Hub using Shared Access Signatures](iot-hub-dev-guide-sas.md).
- When testing, you can also use the cross-platform [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token that you can copy and paste into your own code.
+ You can also use the cross-platform Azure IoT Tools for Visual Studio Code or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token. You can then copy and paste the SAS token into your own code for testing purposes.
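 If you prefer to generate the token in code, the following is a minimal Python sketch of the documented SAS token format (`SharedAccessSignature sr={resourceURI}&sig={signature}&se={expiry}`). The hub, device, and key values are hypothetical placeholders.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, device_key, expiry_in_secs=3600):
    # Expiry expressed as a Unix timestamp
    expiry = int(time.time()) + expiry_in_secs
    # Sign the URL-encoded resource URI plus the expiry with the device key
    string_to_sign = "{}\n{}".format(urllib.parse.quote_plus(resource_uri), expiry)
    signature = base64.b64encode(
        hmac.new(base64.b64decode(device_key),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return "SharedAccessSignature sr={}&sig={}&se={}".format(
        urllib.parse.quote_plus(resource_uri),
        urllib.parse.quote_plus(signature),
        expiry)

# Hypothetical values for illustration:
# sas_token = generate_sas_token(
#     "contoso-hub.azure-devices.net/devices/mydevice",
#     "<base64-encoded device key>")
```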
### For Azure IoT Tools
The device app can specify a **Will** message in the **CONNECT** packet. The dev
## Using the MQTT protocol directly (as a module)
-Connecting to IoT Hub over MQTT using a module identity is similar to the device (described [in the section on using the MQTT protocol directly as a device](#using-the-mqtt-protocol-directly-as-a-device)) but you need to use the following:
+You can connect to IoT Hub over MQTT using a module identity, similar to connecting to IoT Hub as a device. For more information about connecting to IoT Hub over MQTT as a device, see [Using the MQTT protocol directly (as a device)](#using-the-mqtt-protocol-directly-as-a-device). However, you need to use the following values:
* Set the client ID to `{device-id}/{module-id}`.
To use the MQTT protocol directly, your client *must* connect over TLS/SSL. Atte
In order to establish a TLS connection, you may need to download and reference the DigiCert Baltimore Root Certificate. This certificate is the one that Azure uses to secure the connection. You can find this certificate in the [Azure-iot-sdk-c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) repository. More information about these certificates can be found on [Digicert's website](https://www.digicert.com/digicert-root-certificates.htm).
-An example of how to implement this using the Python version of the [Paho MQTT library](https://pypi.python.org/pypi/paho-mqtt) by the Eclipse Foundation might look like the following.
+The following example demonstrates how to implement this configuration by using the Python version of the [Paho MQTT library](https://pypi.python.org/pypi/paho-mqtt) from the Eclipse Foundation.
First, install the Paho library from your command-line environment:
pip install paho-mqtt ```
-Then, implement the client in a Python script. Replace the placeholders as follows:
+Then, implement the client in a Python script. Replace these placeholders in the following code snippet:
* `<local path to digicert.cer>` is the path to a local file that contains the DigiCert Baltimore Root certificate. You can create this file by copying the certificate information from [certs.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) in the Azure IoT SDK for C. Include the lines `--BEGIN CERTIFICATE--` and `--END CERTIFICATE--`, remove the `"` marks at the beginning and end of every line, and remove the `\r\n` characters at the end of every line.
client.publish("devices/" + device_id + "/messages/events/", '{"id":123}', qos=1
client.loop_forever() ```
-To authenticate using a device certificate, update the code snippet above with the following changes (see [How to get an X.509 CA certificate](./iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) on how to prepare for certificate-based authentication):
+To authenticate using a device certificate, update the previous code snippet with the changes specified in the following code snippet. For more information about how to prepare for certificate-based authentication, see the [Get an X.509 CA certificate](./iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) section of [Authenticate devices using X.509 CA certificates](./iot-hub-x509ca-overview.md).
```python # Create the client as before
client.connect(iot_hub_name+".azure-devices.net", port=8883)
## Sending device-to-cloud messages
-After a device connects, it can send messages to IoT Hub using `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as a **Topic Name**. The `{property-bag}` element enables the device to send messages with additional properties in a url-encoded format. For example:
+After a device connects, it can send messages to IoT Hub using `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as a **Topic Name**. The `{property-bag}` element enables the device to send messages with other properties in a url-encoded format. For example:
```text RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-encoded(<PropertyName2>)=RFC 2396-encoded(<PropertyValue2>)…
RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-en
> This `{property_bag}` element uses the same encoding as query strings in the HTTPS protocol. > [!NOTE]
-> If you are routing D2C messages to a Storage account and you want to levarage JSON encoding you need to specify the Content Type and Content Encoding
-> information including `$.ct=application%2Fjson&$.ce=utf-8` as part of the `{property_bag}` mentioned above.
+> If you're routing D2C messages to an Azure Storage account and you want to leverage JSON encoding, you must specify the Content Type and Content Encoding information, including `$.ct=application%2Fjson&$.ce=utf-8`, as part of the `{property_bag}` mentioned in the previous note.
>
-> These attributes format are protocol-specific and are translated by IoT Hub into the relative System Properties as described [here](./iot-hub-devguide-routing-query-syntax.md#system-properties)
+The format of these attributes is protocol-specific. IoT Hub translates them into their corresponding system properties. For more information, see the [System properties](./iot-hub-devguide-routing-query-syntax.md#system-properties) section of [IoT Hub message routing query syntax](./iot-hub-devguide-routing-query-syntax.md).
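As a minimal illustration, a device could build such a property bag with Python's standard `urllib.parse` module before publishing; the device ID and the Paho `client` variable are hypothetical.

```python
import urllib.parse

device_id = "mydevice"  # hypothetical device ID

# Encode only the property values; the system property names $.ct and $.ce
# appear literally in the topic, matching the example in the note above.
property_bag = "$.ct={}&$.ce={}".format(
    urllib.parse.quote("application/json", safe=""),
    urllib.parse.quote("utf-8", safe=""))

topic = "devices/{}/messages/events/{}".format(device_id, property_bag)
# Result: devices/mydevice/messages/events/$.ct=application%2Fjson&$.ce=utf-8
# client.publish(topic, '{"temperature": 21.5}', qos=1)  # with a connected Paho client
```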
-The following is a list of IoT Hub implementation-specific behaviors:
+The following list describes IoT Hub implementation-specific behaviors:
* IoT Hub doesn't support QoS 2 messages. If a device app publishes a message with **QoS 2**, IoT Hub closes the network connection.
The following is a list of IoT Hub implementation-specific behaviors:
* IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection and log the error **400027 ConnectionForcefullyClosedOnNewConnection** in the IoT Hub logs.
-* To route messages based on message body, you must first add property 'contentType' (`ct`) to the end of the MQTT topic and set its value to be `application/json;charset=utf-8`. An example is shown below. To learn more about routing messages either based on message properties or message body, please see the [IoT Hub message routing query syntax documentation](iot-hub-devguide-routing-query-syntax.md).
+* To route messages based on message body, you must first add property 'contentType' (`ct`) to the end of the MQTT topic and set its value to be `application/json;charset=utf-8` as shown in the following example. For more information about routing messages either based on message properties or message body, see the [IoT Hub message routing query syntax documentation](iot-hub-devguide-routing-query-syntax.md).
```devices/{device-id}/messages/events/$.ct=application%2Fjson%3Bcharset%3Dutf-8```
-For more information, see [Messaging developer's guide](iot-hub-devguide-messaging.md).
+For more information, see [Send device-to-cloud and cloud-to-device messages with IoT Hub](iot-hub-devguide-messaging.md).
## Receiving cloud-to-device messages
-To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive additional properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
+To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive more properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
-The device does not receive any messages from IoT Hub until it has successfully subscribed to its device-specific endpoint, represented by the `devices/{device-id}/messages/devicebound/#` topic filter. After a subscription has been established, the device receives cloud-to-device messages that were sent to it after the time of the subscription. If the device connects with **CleanSession** flag set to **0**, the subscription is persisted across different sessions. In this case, the next time the device connects with **CleanSession 0** it receives any outstanding messages sent to it while disconnected. If the device uses **CleanSession** flag set to **1** though, it does not receive any messages from IoT Hub until it subscribes to its device-endpoint.
+The device doesn't receive any messages from IoT Hub until it has successfully subscribed to its device-specific endpoint, represented by the `devices/{device-id}/messages/devicebound/#` topic filter. After a subscription has been established, the device receives cloud-to-device messages that were sent to it after the time of the subscription. If the device connects with **CleanSession** flag set to **0**, the subscription is persisted across different sessions. In this case, the next time the device connects with **CleanSession 0** it receives any outstanding messages sent to it while disconnected. If the device uses **CleanSession** flag set to **1** though, it doesn't receive any messages from IoT Hub until it subscribes to its device-endpoint.
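For illustration, the following sketch subscribes to the device-bound topic with an already connected Paho MQTT client (the `client` and `device_id` variables are assumed from the earlier Paho example):

```python
def on_message(client, userdata, message):
    # Cloud-to-device messages arrive on devices/{device-id}/messages/devicebound/...
    print("Topic: {}".format(message.topic))
    print("Payload: {}".format(message.payload.decode("utf-8")))

client.on_message = on_message
client.subscribe("devices/{}/messages/devicebound/#".format(device_id), qos=1)
```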
-IoT Hub delivers messages with the **Topic Name** `devices/{device-id}/messages/devicebound/`, or `devices/{device-id}/messages/devicebound/{property-bag}` when there are message properties. `{property-bag}` contains url-encoded key/value pairs of message properties. Only application properties and user-settable system properties (such as **messageId** or **correlationId**) are included in the property bag. System property names have the prefix **$**, application properties use the original property name with no prefix. For additional details about the format of the property bag, see [Sending device-to-cloud messages](#sending-device-to-cloud-messages).
+IoT Hub delivers messages with the **Topic Name** `devices/{device-id}/messages/devicebound/`, or `devices/{device-id}/messages/devicebound/{property-bag}` when there are message properties. `{property-bag}` contains url-encoded key/value pairs of message properties. Only application properties and user-settable system properties (such as **messageId** or **correlationId**) are included in the property bag. System property names have the prefix **$**, application properties use the original property name with no prefix. For more information about the format of the property bag, see [Sending device-to-cloud messages](#sending-device-to-cloud-messages).
In cloud-to-device messages, values in the property bag are represented as in the following table:
When a device app subscribes to a topic with **QoS 2**, IoT Hub grants maximum Q
First, a device subscribes to `$iothub/twin/res/#`, to receive the operation's responses. Then, it sends an empty message to topic `$iothub/twin/GET/?$rid={request id}`, with a populated value for **request ID**. The service then sends a response message containing the device twin data on topic `$iothub/twin/res/{status}/?$rid={request-id}`, using the same **request ID** as the request.
-Request ID can be any valid value for a message property value, as per the [IoT Hub messaging developer's guide](iot-hub-devguide-messaging.md), and status is validated as an integer.
+The request ID can be any valid value for a message property value, and status is validated as an integer. For more information, see [Send device-to-cloud and cloud-to-device messages with IoT Hub](iot-hub-devguide-messaging.md).
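For illustration, the following sketch issues a twin GET over MQTT with an already connected Paho client (the `client` variable is assumed from the earlier example):

```python
request_id = "1"  # any valid message property value

# Subscribe for operation responses, then request the twin with an empty payload
client.subscribe("$iothub/twin/res/#", qos=0)
client.publish("$iothub/twin/GET/?$rid={}".format(request_id), payload="", qos=0)

# The twin data arrives on a topic such as $iothub/twin/res/200/?$rid=1
```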
The response body contains the properties section of the device twin, as shown in the following response example:
The possible status codes are:
|Status | Description | | -- | -- | | 200 | Success |
-| 429 | Too many requests (throttled), as per [IoT Hub throttling](iot-hub-devguide-quotas-throttling.md) |
+| 429 | Too many requests (throttled). For more information, see [IoT Hub throttling](iot-hub-devguide-quotas-throttling.md) |
| 5** | Server errors |
-For more information, see the [Device twins developer's guide](iot-hub-devguide-device-twins.md).
+For more information, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
## Update device twin's reported properties
The possible status codes are:
| 429 | Too many requests (throttled), as per [IoT Hub throttling](iot-hub-devguide-quotas-throttling.md) | | 5** | Server errors |
-The Python code snippet below, demonstrates the twin reported properties update process over MQTT (using Paho MQTT client):
+The following Python code snippet demonstrates the twin reported properties update process over MQTT using the Paho MQTT client:
```python from paho.mqtt import client as mqtt
client.publish("$iothub/twin/PATCH/properties/reported/?$rid=" +
rid, twin_reported_property_patch, qos=0) ```
-Upon success of twin reported properties update operation above, the publication message from IoT Hub will have the following topic: `$iothub/twin/res/204/?$rid=1&$version=6`, where `204` is the status code indicating success, `$rid=1` corresponds to the request ID provided by the device in the code, and `$version` corresponds to the version of reported properties section of device twins after the update.
+Upon success of the twin reported properties update process in the previous code snippet, the publication message from IoT Hub will have the following topic: `$iothub/twin/res/204/?$rid=1&$version=6`, where `204` is the status code indicating success, `$rid=1` corresponds to the request ID provided by the device in the code, and `$version` corresponds to the version of reported properties section of device twins after the update.
-For more information, see the [Device twins developer's guide](iot-hub-devguide-device-twins.md).
+For more information, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
## Receiving desired properties update notifications
As for property updates, `null` values mean that the JSON object member is being
> [!IMPORTANT] > IoT Hub generates change notifications only when devices are connected. Make sure to implement the [device reconnection flow](iot-hub-devguide-device-twins.md#device-reconnection-flow) to keep the desired properties synchronized between IoT Hub and the device app.
-For more information, see the [Device twins developer's guide](iot-hub-devguide-device-twins.md).
+For more information, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
## Respond to a direct method
First, a device has to subscribe to `$iothub/methods/POST/#`. IoT Hub sends meth
To respond, the device sends a message with a valid JSON or empty body to the topic `$iothub/methods/res/{status}/?$rid={request-id}`. In this message, the **request ID** must match the one in the request message, and **status** must be an integer.
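For illustration, a hedged sketch of this request/response flow with an already connected Paho client (the `client` variable is assumed from the earlier example) might look like the following:

```python
client.subscribe("$iothub/methods/POST/#", qos=0)

def on_method_request(client, userdata, message):
    # Method requests arrive on $iothub/methods/POST/{method-name}/?$rid={request-id}
    request_id = message.topic.split("$rid=")[1]
    # Respond with status 200 and an empty JSON body; the request ID must match
    client.publish("$iothub/methods/res/200/?$rid={}".format(request_id),
                   payload="{}", qos=0)

client.on_message = on_method_request
```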
-For more information, see the [Direct method developer's guide](iot-hub-devguide-direct-methods.md).
+For more information, see [Understand and invoke direct methods from IoT Hub](iot-hub-devguide-direct-methods.md).
## Next steps
To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt
To learn more about planning your IoT Hub deployment, see:
-* [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/)
-* [Support additional protocols](../iot-edge/iot-edge-as-gateway.md)
-* [Compare with Event Hubs](iot-hub-compare-event-hubs.md)
-* [Scaling, HA, and DR](iot-hub-scaling.md)
+* [Azure Certified Device Catalog](https://devicecatalog.azure.com/)
+* [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md)
+* [Connecting IoT Devices to Azure: IoT Hub and Event Hubs](iot-hub-compare-event-hubs.md)
+* [Choose the right IoT Hub tier for your solution](iot-hub-scaling.md)
To further explore the capabilities of IoT Hub, see:
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Azure IoT Hub concepts overview](iot-hub-devguide.md)
+* [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Python Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-twin-getstarted.md
ms.devlang: python Previously updated : 03/11/2020 Last updated : 01/03/2023
In this article, you create two Python console apps:
* **ReportConnectivity.py**: a simulated device app that connects to your IoT hub and reports its connectivity condition. > [!NOTE]
-> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
+> For more information about the SDK tools available to build both device and back-end apps, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
## Prerequisites
In this article, you create two Python console apps:
[!INCLUDE [iot-hub-include-find-custom-connection-string](../../includes/iot-hub-include-find-custom-connection-string.md)]
+## Create a service app that updates desired properties and queries twins
+
+In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
+
+1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
+
+ ```cmd/sh
+ pip install azure-iot-hub
+ ```
+
+2. Using a text editor, create a new **AddTagsAndQuery.py** file.
+
+3. Add the following code to import the required modules from the service SDK:
+
+ ```python
+ import sys
+ from time import sleep
+ from azure.iot.hub import IoTHubRegistryManager
+ from azure.iot.hub.models import Twin, TwinProperties, QuerySpecification, QueryResult
+ ```
+
+4. Add the following code. Replace `[IoTHub Connection String]` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace `[Device Id]` with the device ID (the name) from your registered device in the IoT Hub.
+
+ ```python
+ IOTHUB_CONNECTION_STRING = "[IoTHub Connection String]"
+ DEVICE_ID = "[Device Id]"
+ ```
+
+5. Add the following code to the **AddTagsAndQuery.py** file:
+
+ ```python
+ def iothub_service_sample_run():
+ try:
+ iothub_registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
+
+ new_tags = {
+ 'location' : {
+ 'region' : 'US',
+ 'plant' : 'Redmond43'
+ }
+ }
+
+ twin = iothub_registry_manager.get_twin(DEVICE_ID)
+ twin_patch = Twin(tags=new_tags, properties= TwinProperties(desired={'power_level' : 1}))
+ twin = iothub_registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)
+
+ # Add a delay to account for any latency before executing the query
+ sleep(1)
+
+ query_spec = QuerySpecification(query="SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'")
+ query_result = iothub_registry_manager.query_iot_hub(query_spec, None, 100)
+ print("Devices in Redmond43 plant: {}".format(', '.join([twin.device_id for twin in query_result.items])))
+
+ print()
+
+ query_spec = QuerySpecification(query="SELECT * FROM devices WHERE tags.location.plant = 'Redmond43' AND properties.reported.connectivity = 'cellular'")
+ query_result = iothub_registry_manager.query_iot_hub(query_spec, None, 100)
+ print("Devices in Redmond43 plant using cellular network: {}".format(', '.join([twin.device_id for twin in query_result.items])))
+
+ except Exception as ex:
+ print("Unexpected error {0}".format(ex))
+ return
+ except KeyboardInterrupt:
+ print("IoT Hub Device Twin service sample stopped")
+ ```
+
+ The **IoTHubRegistryManager** object exposes all the methods required to interact with device twins from the service. The code first initializes the **IoTHubRegistryManager** object, then updates the device twin for **DEVICE_ID**, and finally runs two queries. The first selects only the device twins of devices located in the **Redmond43** plant, and the second refines the query to select only the devices that are also connected through a cellular network.
+
+6. Add the following code at the end of **AddTagsAndQuery.py** to implement the **iothub_service_sample_run** function:
+
+ ```python
+ if __name__ == '__main__':
+ print("Starting the Python IoT Hub Device Twin service sample...")
+ print()
+
+ iothub_service_sample_run()
+ ```
+
+7. Run the application with:
+
+ ```cmd/sh
+ python AddTagsAndQuery.py
+ ```
+
+ You should see one device in the results for the query asking for all devices located in **Redmond43** and none for the query that restricts the results to devices that use a cellular network. In the next section, you'll create a device app that will use a cellular network and you'll rerun this query to see how it changes.
+
+ ![Screenshot of the first query showing all devices in Redmond.](./media/iot-hub-python-twin-getstarted/service-1.png)
+ ## Create a device app that updates reported properties In this section, you create a Python console app that connects to your hub as your **{Device ID}** and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
In this section, you create a Python console app that connects to your hub as yo
![receive desired properties on device app](./media/iot-hub-python-twin-getstarted/device-2.png)
-## Create a service app that updates desired properties and queries twins
-
-In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
-
-1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
-
- ```cmd/sh
- pip install azure-iot-hub
- ```
-
-2. Using a text editor, create a new **AddTagsAndQuery.py** file.
-
-3. Add the following code to import the required modules from the service SDK:
-
- ```python
- import sys
- from time import sleep
- from azure.iot.hub import IoTHubRegistryManager
- from azure.iot.hub.models import Twin, TwinProperties, QuerySpecification, QueryResult
- ```
-
-4. Add the following code. Replace `[IoTHub Connection String]` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace `[Device Id]` with the device ID (the name) from your registered device in the IoT Hub.
-
- ```python
- IOTHUB_CONNECTION_STRING = "[IoTHub Connection String]"
- DEVICE_ID = "[Device Id]"
- ```
-
-5. Add the following code to the **AddTagsAndQuery.py** file:
-
- ```python
- def iothub_service_sample_run():
- try:
- iothub_registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
-
- new_tags = {
- 'location' : {
- 'region' : 'US',
- 'plant' : 'Redmond43'
- }
- }
-
- twin = iothub_registry_manager.get_twin(DEVICE_ID)
- twin_patch = Twin(tags=new_tags, properties= TwinProperties(desired={'power_level' : 1}))
- twin = iothub_registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)
-
- # Add a delay to account for any latency before executing the query
- sleep(1)
-
- query_spec = QuerySpecification(query="SELECT * FROM devices WHERE tags.location.plant = 'Redmond43'")
- query_result = iothub_registry_manager.query_iot_hub(query_spec, None, 100)
- print("Devices in Redmond43 plant: {}".format(', '.join([twin.device_id for twin in query_result.items])))
-
- print()
-
- query_spec = QuerySpecification(query="SELECT * FROM devices WHERE tags.location.plant = 'Redmond43' AND properties.reported.connectivity = 'cellular'")
- query_result = iothub_registry_manager.query_iot_hub(query_spec, None, 100)
- print("Devices in Redmond43 plant using cellular network: {}".format(', '.join([twin.device_id for twin in query_result.items])))
-
- except Exception as ex:
- print("Unexpected error {0}".format(ex))
- return
- except KeyboardInterrupt:
- print("IoT Hub Device Twin service sample stopped")
- ```
-
- The **IoTHubRegistryManager** object exposes all the methods required to interact with device twins from the service. The code first initializes the **IoTHubRegistryManager** object, then updates the device twin for **DEVICE_ID**, and finally runs two queries. The first selects only the device twins of devices located in the **Redmond43** plant, and the second refines the query to select only the devices that are also connected through a cellular network.
-
-6. Add the following code at the end of **AddTagsAndQuery.py** to implement the **iothub_service_sample_run** function:
-
- ```python
- if __name__ == '__main__':
- print("Starting the Python IoT Hub Device Twin service sample...")
- print()
-
- iothub_service_sample_run()
- ```
-
-7. Run the application with:
-
- ```cmd/sh
- python AddTagsAndQuery.py
- ```
-
- You should see one device in the results for the query asking for all devices located in **Redmond43** and none for the query that restricts the results to devices that use a cellular network.
-
- ![first query showing all devices in Redmond](./media/iot-hub-python-twin-getstarted/service-1.png)
- In this article, you: * Added device metadata as tags from a back-end app
iot-hub Tutorial X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-certificates.md
Previously updated : 02/26/2021 Last updated : 12/30/2022 #Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to X.509 Public Key certificates.
# Tutorial: Understanding X.509 Public Key Certificates
-X.509 certificates are digital documents that represent a user, computer, service, or device. They are issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They do not contain the subject's private key which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They are digitally signed and, in general, contain the following information:
+X.509 certificates are digital documents that represent a user, computer, service, or device. They're issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They don't contain the subject's private key, which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They're digitally signed and, in general, contain the following information:
* Information about the certificate subject * The public key that corresponds to the subject's private key
Version 2 added the following fields containing information about the certificat
Version 3 certificates added the following extensions:
-* **Authority Key Identifier**: This can be one of two values:
+* **Authority Key Identifier**: This extension can be set to one of two values:
* The subject of the CA and serial number of the CA certificate that issued this certificate * A hash of the public key of the CA that issued this certificate * **Subject Key Identifier**: Hash of the current certificate's public key
-* **Key Usage** Defines the service for which a certificate can be used. This can be one or more of the following values:
+* **Key Usage**: Defines the service for which a certificate can be used. This extension can be set to one or more of the following values:
* **Digital Signature** * **Non-Repudiation** * **Key Encipherment**
Version 3 certificates added the following extensions:
* **Subject Alternative Name**: List of alternate names for the subject * **Issuer Alternative Name**: List of alternate names for the issuing CA * **Subject Dir Attribute**: Attributes from an X.500 or LDAP directory
-* **Basic Constraints**: Allows the certificate to designate whether it is issued to a CA, or to a user, computer, device, or service. This extension also includes a path length constraint that limits the number of subordinate CAs that can exist.
+* **Basic Constraints**: Allows the certificate to designate whether it's issued to a CA, or to a user, computer, device, or service. This extension also includes a path length constraint that limits the number of subordinate CAs that can exist.
* **Name Constraints**: Designates which namespaces are allowed in a CA-issued certificate * **Policy Constraints**: Can be used to prohibit policy mappings between CAs * **Extended Key Usage**: Indicates how a certificate's public key can be used beyond the purposes identified in the **Key Usage** extension
Version 3 certificates added the following extensions:
* **Inhibit anyPolicy**: Inhibits the use of the **All Issuance Policies** OID (2.5.29.32.0) in subordinate CA certificates * **Freshest CRL**: Contains one or more URLs where the issuing CA's delta CRL is published * **Authority Information Access**: Contains one or more URLs where the issuing CA certificate is published
-* **Subject Information Access**: Contains information about how to retrieve additional details for a certificate subject
+* **Subject Information Access**: Contains information about how to retrieve more details for a certificate subject
## Certificate formats
-Certificates can be saved in a variety of formats. Azure IoT Hub authentication typically uses the PEM and PFX formats.
+Certificates can be saved in various formats. Azure IoT Hub authentication typically uses the Privacy-Enhanced Mail (PEM) and Personal Information Exchange (PFX) formats.
### Binary certificate
-This contains a raw form binary certificate using DER ASN.1 Encoding.
+A raw form binary certificate using Distinguished Encoding Rules (DER) ASN.1 encoding.
### ASCII PEM format
-A PEM certificate (.pem extension) contains a base64-encoded certificate beginning with --BEGIN CERTIFICATE-- and ending with --END CERTIFICATE--. The PEM format is very common and is required by IoT Hub when uploading certain certificates.
+A PEM certificate (.pem) file contains a Base64-encoded certificate beginning with `--BEGIN CERTIFICATE--` and ending with `--END CERTIFICATE--`. One of the most common formats for X.509 certificates, PEM format is required by IoT Hub when uploading certain certificates.
-### ASCII (PEM) key
+### ASCII PEM key
-Contains a base64-encoded DER key with possibly additional metadata about the algorithm used for password protection.
+Contains a Base64-encoded DER key, optionally with more metadata about the algorithm used for password protection.
-### PKCS#7 certificate
+### PKCS #7 certificate
-A format designed for the transport of signed or encrypted data. It is defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). It can include the entire certificate chain.
+A format designed for the transport of signed or encrypted data. It's defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). It can include the entire certificate chain.
-### PKCS#8 key
+### PKCS #8 key
The format for a private key store defined by [RFC 5208](https://tools.ietf.org/html/rfc5208).
-### PKCS#12 key and certificate
+### PKCS #12 key and certificate
-A complex format that can store and protect a key and the entire certificate chain. It is commonly used with a .pfx extension. PKCS#12 is synonymous with the PFX format.
+A complex format that can store and protect a key and the entire certificate chain. It's commonly used with a .pfx extension. PKCS #12 is synonymous with the PFX format.
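For illustration, the following Python sketch loads a PEM certificate and prints some of the fields described earlier. It assumes the third-party `cryptography` package and a hypothetical `device.pem` file.

```python
from cryptography import x509

# "device.pem" is a hypothetical PEM-encoded certificate file
with open("device.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer:", cert.issuer.rfc4514_string())
print("Serial number:", cert.serial_number)
print("Not valid after:", cert.not_valid_after)
```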
## For more information
-For more information, see the following topics:
+For more information, see the following articles:
* [The laymanΓÇÖs guide to X.509 certificate jargon](https://techcommunity.microsoft.com/t5/internet-of-things/the-layman-s-guide-to-x-509-certificate-jargon/ba-p/2203540)
-* [Conceptual understanding of X.509 CA certificates in the IoT industry](./iot-hub-x509ca-concept.md)
+* [Understand how X.509 CA certificates are used in IoT](./iot-hub-x509ca-concept.md)
## Next steps
-If you want to generate test certificates that you can use to authenticate devices to your IoT Hub, see the following topics:
+If you want to generate test certificates that you can use to authenticate devices to your IoT Hub, see the following articles:
-* [Using Microsoft-Supplied Scripts to Create Test Certificates](tutorial-x509-scripts.md)
-* [Using OpenSSL to Create Test Certificates](tutorial-x509-openssl.md)
-* [Using OpenSSL to Create Self-Signed Test Certificates](tutorial-x509-self-sign.md)
+* [Tutorial: Using Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md)
+* [Tutorial: Using OpenSSL to create test certificates](tutorial-x509-openssl.md)
+* [Tutorial: Using OpenSSL to create self-signed certificates](tutorial-x509-self-sign.md)
-If you have a certification authority (CA) certificate or subordinate CA certificate and you want to upload it to your IoT hub and prove that you own it, see [Proving Possession of a CA Certificate](tutorial-x509-prove-possession.md).
+If you have a certification authority (CA) certificate or subordinate CA certificate and you want to upload it to your IoT hub and prove that you own it, see [Tutorial: Proving possession of a CA certificate](tutorial-x509-prove-possession.md).
iot-hub Tutorial X509 Self Sign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-self-sign.md
Previously updated : 02/26/2021 Last updated : 12/30/2022 #Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to use OpenSSL to self-sign device certificates.
# Tutorial: Using OpenSSL to create self-signed certificates
-You can authenticate a device to your IoT Hub using two self-signed device certificates. This is sometimes called thumbprint authentication because the certificates contain thumbprints (hash values) that you submit to the IoT hub. The following steps tell you how to create two self-signed certificates. This type of certificate is mainly used for testing.
+You can authenticate a device to your IoT hub using two self-signed device certificates. This type of authentication is sometimes called *thumbprint authentication* because the certificates contain thumbprints (hash values) that you submit to the IoT hub. The following steps show you how to create two self-signed certificates. This type of certificate is typically used for testing.
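If you prefer to compute a certificate's thumbprint in code rather than with the `openssl x509 -fingerprint` command used later in this tutorial, the following hedged Python sketch (assuming the third-party `cryptography` package) prints the SHA-1 thumbprint as a hexadecimal string without colons:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# device1.crt is the self-signed certificate created later in this tutorial
with open("device1.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Print the SHA-1 thumbprint (hash value) of the certificate
print(cert.fingerprint(hashes.SHA1()).hex().upper())
```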
## Step 1 - Create a key for the first certificate + ```bash openssl genpkey -out device1.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048 ```
Locality Name (eg, city) [Default City]:.
Organization Name (eg, company) [Default Company Ltd]:. Organizational Unit Name (eg, section) []:. Common Name (eg, your name or your server hostname) []:{your-device-id}
-Email Address []:
+Email Address []:.
+Please enter the following 'extra' attributes
+to be sent with your certificate request
+A challenge password []:.
+An optional company name []:.
``` ## Step 3 - Check the CSR
Locality Name (eg, city) [Default City]:.
Organization Name (eg, company) [Default Company Ltd]:. Organizational Unit Name (eg, section) []:. Common Name (eg, your name or your server hostname) []:{your-device-id}
-Email Address []:
+Email Address []:.
+
+Please enter the following 'extra' attributes
+to be sent with your certificate request
+A challenge password []:.
+An optional company name []:.
``` ## Step 7 - Self-sign certificate 2
openssl x509 -in device2.crt -noout -fingerprint
## Step 10 - Create a new IoT device
-Navigate to your IoT Hub in the Azure portal and create a new IoT device identity with the following characteristics:
+Navigate to your IoT hub in the Azure portal and create a new IoT device identity with the following characteristics:
* Provide the **Device ID** that matches the subject name of your two certificates. * Select the **X.509 Self-Signed** authentication type.
Navigate to your IoT Hub in the Azure portal and create a new IoT device identit
## Next steps
-Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT Hub. The code on that page requires that you use a PFX certificate. Use the following OpenSSL command to convert your device .crt certificate to .pfx format.
+Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT hub. The code on that page requires that you use a PFX certificate. Use the following OpenSSL command to convert your device .crt certificate to .pfx format.
```bash openssl pkcs12 -export -in device.crt -inkey device.key -out device.pfx
load-balancer Update Load Balancer With Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/update-load-balancer-with-vm-scale-set.md
- Title: Update or delete an existing load balancer used by Virtual Machine Scale Sets-
-description: With this how-to article, get started with Azure Standard Load Balancer and Virtual Machine Scale Sets.
------ Previously updated : 12/06/2022---
-# Update or delete a load balancer used by Virtual Machine Scale Sets
-
-When you work with Virtual Machine Scale Sets and an instance of Azure Load Balancer, you can:
--- Add, update, and delete rules.-- Add configurations.-- Delete the load balancer.-
-## Set up a load balancer for scaling out Virtual Machine Scale Sets
-
-Make sure that the instance of Azure Load Balancer has an [inbound NAT pool](/cli/azure/network/lb/inbound-nat-pool) set up and that the Virtual Machine Scale Set is put in the backend pool of the load balancer. Load Balancer will automatically create new inbound NAT rules in the inbound NAT pool when new virtual machine instances are added to the Virtual Machine Scale Set.
-
-To check whether the inbound NAT pool is properly set up:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the left menu, select **All resources**. Then select **MyLoadBalancer** from the resource list.
-1. Under **Settings**, select **Inbound NAT rules**. In the right pane, if you see a list of rules created for each individual instance in the Virtual Machine Scale Set, you're all set to go for scaling up at any time.
-
-## Add inbound NAT rules
-
-Individual inbound NAT rules can't be added. But you can add a set of inbound NAT rules with defined front-end port range and back-end port for all instances in the Virtual Machine Scale Set.
-
-To add a whole set of inbound NAT rules for the Virtual Machine Scale Sets, first create an inbound NAT pool in the load balancer. Then reference the inbound NAT pool from the network profile of the Virtual Machine Scale Set. A full example using the CLI is shown.
-
-The new inbound NAT pool shouldn't have an overlapping front-end port range with existing inbound NAT pools. To view existing inbound NAT pools that are set up, use this [CLI command](/cli/azure/network/lb/inbound-nat-pool#az-network-lb-inbound-nat-pool-list):
-
-```azurecli-interactive
- az network lb inbound-nat-pool create
- -g MyResourceGroup
- --lb-name MyLb
- -n MyNatPool
- --protocol Tcp
- --frontend-port-range-start 80
- --frontend-port-range-end 89
- --backend-port 80
- --frontend-ip-name MyFrontendIp
- az vmss update
- -g MyResourceGroup
- -n myVMSS
- --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool'}"
-
- az vmss update-instances
- --instance-ids *
- --resource-group MyResourceGroup
- --name MyVMSS
-```
-## Update inbound NAT rules
-
-Individual inbound NAT rules can't be updated. But you can update a set of inbound NAT rules with a defined front-end port range and a back-end port for all instances in the Virtual Machine Scale Set.
-
-To update a whole set of inbound NAT rules for Virtual Machine Scale Sets, update the inbound NAT pool in the load balancer.
-
-```azurecli-interactive
-az network lb inbound-nat-pool update
- -g MyResourceGroup
- --lb-name MyLb
- -n MyNatPool
- --protocol Tcp
- --backend-port 8080
-```
-
-## Delete inbound NAT rules
-
-Individual inbound NAT rules can't be deleted, but you can delete the entire set of inbound NAT rules by deleting the inbound NAT pool.
-
-To delete the NAT pool, first remove it from the scale set. A full example using the CLI is shown here:
-
-```azurecli-interactive
- az vmss update
- --resource-group MyResourceGroup
- --name MyVMSS
- --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools
- az vmss update-instances
- --instance-ids "*"
- --resource-group MyResourceGroup
- --name MyVMSS
- az network lb inbound-nat-pool delete
- --resource-group MyResourceGroup
- --lb-name MyLoadBalancer
- --name MyNatPool
-```
-
-## Add multiple IP configurations
-
-To add multiple IP configurations:
-
-1. On the left menu, select **All resources**. Then select **MyLoadBalancer** from the resource list.
-1. Under **Settings**, select **Frontend IP configuration**. Then select **Add**.
-1. On the **Add frontend IP address** page, enter the values and select **OK**.
-1. Refer to [Manage rules for Azure Load Balancer - Azure portal](manage-rules-how-to.md) if new load-balancing rules are needed.
-1. Create a new set of inbound NAT rules by using the newly created front-end IP configurations if needed. An example is found in the previous section.
-
-## Multiple Virtual Machine Scale Sets behind a single Load Balancer
-
-Create an inbound NAT pool in the load balancer, reference the inbound NAT pool in the network profile of a Virtual Machine Scale Set, and finally update the instances for the changes to take effect. Repeat these steps for each Virtual Machine Scale Set.
-
-Make sure to create separate inbound NAT pools with non-overlapping frontend port ranges.
-
-```azurecli-interactive
- az network lb inbound-nat-pool create
- -g MyResourceGroup
- --lb-name MyLb
- -n MyNatPool
- --protocol Tcp
- --frontend-port-range-start 80
- --frontend-port-range-end 89
- --backend-port 80
- --frontend-ip-name MyFrontendIpConfig
- az vmss update
- -g MyResourceGroup
- -n myVMSS
- --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool'}"
-
- az vmss update-instances
- --instance-ids *
- --resource-group MyResourceGroup
- --name MyVMSS
-
- az network lb inbound-nat-pool create
- -g MyResourceGroup
- --lb-name MyLb
- -n MyNatPool2
- --protocol Tcp
- --frontend-port-range-start 100
- --frontend-port-range-end 109
- --backend-port 80
- --frontend-ip-name MyFrontendIpConfig2
- az vmss update
- -g MyResourceGroup
- -n myVMSS2
- --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools "{'id':'/subscriptions/mySubscriptionId/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/MyLb/inboundNatPools/MyNatPool2'}"
-
- az vmss update-instances
- --instance-ids *
- --resource-group MyResourceGroup
- --name MyVMSS2
-```
-
-## Delete the front-end IP configuration used by the Virtual Machine Scale Set
-
-To delete the front-end IP configuration in use by the scale set:
-
- 1. First delete the inbound NAT pool (the set of inbound NAT rules) that references the front-end IP configuration. Instructions on how to delete the inbound rules are found in the previous section.
- 1. Delete the load-balancing rule that references the front-end IP configuration.
- 1. Delete the front-end IP configuration.
-
-## Delete a load balancer used by a Virtual Machine Scale Set
-
-To delete a load balancer that's in use by the scale set:
-
- 1. First delete the inbound NAT pool (the set of inbound NAT rules) that references the front-end IP configuration. Instructions on how to delete the inbound rules are found in the previous section.
- 1. Delete the load-balancing rule that references the back-end pool that contains the Virtual Machine Scale Set.
- 1. Remove the `loadBalancerBackendAddressPool` reference from the network profile of the Virtual Machine Scale Set.
-
- A full example using the CLI is shown here:
-
-```azurecli-interactive
- az vmss update
- --resource-group MyResourceGroup
- --name MyVMSS
- --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools
- az vmss update-instances
- --instance-ids "*"
- --resource-group MyResourceGroup
- --name MyVMSS
-```
-Finally, delete the load balancer resource.
-
-## Next steps
-
-To learn more about Azure Load Balancer and Virtual Machine Scale Sets, read more about the concepts.
-
-> [Azure Load Balancer with virtual machine scale sets](load-balancer-standard-virtual-machine-scale-sets.md)
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
Title: Quickstart - Create automated integration workflows in the Azure portal
-description: Create your first automated integration workflow using multi-tenant Azure Logic Apps in the Azure portal.
+ Title: 'Quickstart - Create an automated workflow using the Azure portal'
+description: Create your first automated integration workflow running in multi-tenant Azure Logic Apps using the Azure portal.
ms.suite: integration Previously updated : 08/23/2022- Last updated : 01/04/2023 #Customer intent: As a developer, I want to create my first automated integration workflow that runs in Azure Logic Apps using the Azure portal.
-# Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal
+# Quickstart: Create an integration workflow in multi-tenant Azure Logic Apps using the Azure portal
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account. More specifically, you create a [Consumption plan-based](logic-apps-pricing.md#consumption-pricing) logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. This resource runs in [*multi-tenant* Azure Logic Apps](logic-apps-overview.md).
+This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account. More specifically, you'll create a [Consumption](logic-apps-pricing.md#consumption-pricing) logic app resource and workflow that runs in [global, multi-tenant Azure Logic Apps](logic-apps-overview.md#create-and-deploy-to-different-environments).
> [!NOTE]
-> To create a workflow in a Standard logic app resource that runs in *single-tenant* Azure Logic Apps, review
+>
+> To create a workflow in a Standard logic app resource that runs in single-tenant Azure Logic Apps, review
> [Create an integration workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md). > For more information about multi-tenant versus single-tenant Azure Logic Apps, review > [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments.
+The workflow that you create uses the RSS connector and the Office 365 Outlook connector. The RSS connector provides a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector provides an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments.
The following screenshot shows the high-level example workflow: ![Screenshot showing the example workflow with the RSS trigger, "When a feed item is published" and the Outlook action, "Send an email".](./media/quickstart-create-first-logic-app-workflow/quickstart-workflow-overview.png)
-As you progress through this quickstart, you'll learn these basic steps:
+As you progress through this quickstart, you'll learn the following basic steps:
* Create a Consumption logic app resource that runs in multi-tenant Azure Logic Apps. * Select the blank logic app template.
As you progress through this quickstart, you'll learn these basic steps:
* Add an action that performs a task after the trigger fires. * Run your workflow.
-To create and manage a logic app resource using other tools, review these other Azure Logic Apps quickstarts:
+To create and manage a Consumption logic app resource using other tools, review these other Azure Logic Apps quickstarts:
* [Create and manage logic apps in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md) * [Create and manage logic apps in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md)
-* [Create and manage logic apps via the Azure CLI](quickstart-logic-apps-azure-cli.md)
+* [Create and manage logic apps using the Azure CLI](quickstart-logic-apps-azure-cli.md)
<a name="prerequisites"></a>
To create and manage a logic app resource using other tools, review these other
* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, review [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors). > [!NOTE]
- > If you want to use the [Gmail connector](/connectors/gmail/), only G Suite accounts can use this connector without restriction in Azure
- > Logic Apps. If you have a consumer Gmail account, you can only use this connector with specific Google-approved services, unless you
- > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
- > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
+ >
+ > If you want to use the [Gmail connector](/connectors/gmail/), only G Suite accounts can use
+ > this connector without restriction in Azure Logic Apps. If you have a consumer Gmail account,
+ > you can only use this connector with specific Google-approved services, unless you
+ > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). For more information, see
+ > [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-* If you have a firewall that limits traffic to specific IP addresses, set up your firewall to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service in the Azure region where you create your logic app workflow.
+* If you have a firewall that limits traffic to specific IP addresses, make sure that you set up your firewall to allow access for both the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where you create your logic app workflow.
- This example uses the RSS and Office 365 Outlook connectors, which are [managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
+ This example uses the RSS and Office 365 Outlook connectors, which are [managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for all the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
<a name="create-logic-app-resource"></a>
To create and manage a logic app resource using other tools, review these other
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the Azure search box, enter `logic apps`, and select **Logic apps**.
+1. In the Azure search box, enter **logic apps**, and select **Logic apps**.
- ![Screenshot that shows Azure portal search box with "logic apps" as the search term and "Logic Apps" as the selected result.](./media/quickstart-create-first-logic-app-workflow/find-select-logic-apps.png)
+ ![Screenshot showing the Azure portal search box with "logic apps" entered and "Logic Apps" selected.](./media/quickstart-create-first-logic-app-workflow/find-select-logic-apps.png)
1. On the **Logic apps** page, select **Add**.
- ![Screenshot showing the Azure portal and Logic Apps service page and "Add" option selected.](./media/quickstart-create-first-logic-app-workflow/add-new-logic-app.png)
-
-1. On the **Create Logic App** pane, on the **Basics** tab, provide the following basic information about your logic app:
+ ![Screenshot showing the Azure Logic Apps page and "Add" selected.](./media/quickstart-create-first-logic-app-workflow/add-new-logic-app.png)
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
- | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **My-First-LA-RG**. |
- | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). <br><br>This example creates a logic app named **My-First-Logic-App**. |
- |||||
+1. On the **Create Logic App** pane, make sure that you first choose the plan type for your logic app resource.
-1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** so that you view only the settings that apply to the Consumption plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
+ Go to the **Plan** section, and then for the **Plan type**, select **Consumption**, which shows only the settings for a Consumption logic app resource. The **Plan type** property specifies the logic app resource type and billing model to use.
| Plan type | Description | |--|-|
- | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ | **Standard** | This logic app type is the default selection, which runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
| **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
- |||
-1. Now continue making the following selections:
+1. Now provide the following information for your logic app resource:
| Property | Required | Value | Description | |-|-|-|-|
- | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
- | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <p><p>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. |
- ||||
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **My-First-LA-RG**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). <br><br>This example creates a logic app named **My-First-Logic-App**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. |
+ | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <br><br>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. |
> [!NOTE] > > If you selected an Azure region that supports availability zone redundancy, the **Zone redundancy**
- > section is enabled. This preview section offers the choice to enable availability zone redundancy
- > for your logic app. However, currently supported Azure regions don't include **West US**,
+ > section is automatically enabled. This preview section offers the choice to enable availability zone
+ > redundancy for your logic app. However, currently supported Azure regions don't include **West US**,
> so you can ignore this section for this example. For more information, see > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
This example uses an RSS trigger that checks an RSS feed, based on a schedule. I
1. Under the designer search box, select **All**.
-1. In the designer search box, enter **rss**. From the **Triggers** list, select the RSS trigger, **When a feed item is published**.
+1. In the designer search box, enter **rss**. From the **Triggers** list, select the RSS trigger named **When a feed item is published**.
- ![Screenshot showing the workflow designer with "rss" in the search box and the selected RSS trigger, "When a feed item is published".](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-new-feed-item.png)
+ ![Screenshot showing the workflow designer with "rss" in the search box and the selected RSS trigger named "When a feed item is published".](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-new-feed-item.png)
-1. In the trigger details, provide the following information:
+1. For the trigger, provide the following information:
| Property | Required | Value | Description | |-|-|-|-|
- | **The RSS feed URL** | Yes | <*RSS-feed-URL*> | The RSS feed URL to monitor. <p><p>This example uses the Wall Street Journal's RSS feed at `https://feeds.a.dj.com/rss/RSSMarketsMain.xml`. However, you can use any RSS feed that doesn't require HTTP authorization. Choose an RSS feed that publishes frequently, so you can easily test your workflow. |
- | **Chosen property will be used to determine** | No | PublishDate | The property that determines which items are new. |
- | **Interval** | Yes | 1 | The number of intervals to wait between feed checks. <p><p>This example uses `1` as the interval. |
- | **Frequency** | Yes | Minute | The unit of frequency to use for every interval. <p><p>This example uses `Minute` as the frequency. |
- |||||
+ | **The RSS feed URL** | Yes | <*RSS-feed-URL*> | The RSS feed URL to monitor. <br><br>This example uses the Wall Street Journal's RSS feed at **https://feeds.a.dj.com/rss/RSSMarketsMain.xml**. However, you can use any RSS feed that doesn't require HTTP authorization. Choose an RSS feed that publishes frequently, so you can easily test your workflow. |
+ | **Chosen property will be used to determine** | No | **PublishDate** | The property that determines which items are new. |
+ | **Interval** | Yes | **1** | The number of intervals to wait between feed checks. <br><br>This example uses **1** as the interval. |
+ | **Frequency** | Yes | **Minute** | The unit of frequency to use for every interval. <br><br>This example uses **Minute** as the frequency. |
![Screenshot showing the RSS trigger settings, including RSS URL, frequency, and interval.](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-settings.png)
This example uses an RSS trigger that checks an RSS feed, based on a schedule. I
## Add an action
-Following a trigger, an [action](logic-apps-overview.md#logic-app-concepts) is a subsequent step that runs some operation in the workflow. Any action can use the outputs from the previous step, which can be the trigger or another action. You can choose from many different actions, add multiple actions up to the [limit per workflow](logic-apps-limits-and-config.md#definition-limits), and even create different action paths.
+Following a trigger, an [action](logic-apps-overview.md#logic-app-concepts) is any subsequent step that runs some operation in the workflow. Any action can use the outputs from the previous step, which can be the trigger or another action. You can choose from many different actions, add multiple actions up to the [limit per workflow](logic-apps-limits-and-config.md#definition-limits), and even create different action paths.
This example uses an Office 365 Outlook action that sends an email each time that the trigger fires for a new RSS feed item. If multiple new items exist between checks, you receive multiple emails.
This example uses an Office 365 Outlook action that sends an email each time tha
For example, if you have a Microsoft work or school account and want to use Office 365 Outlook, select **Office 365 Outlook**. Or, if you have a personal Microsoft account, select **Outlook.com**. This example continues with Office 365 Outlook. > [!NOTE]
+ >
> If you use a different supported email service in your workflow, the user interface might look > slightly different. However, the basic concepts for connecting to another email service remain the same.
This example uses an Office 365 Outlook action that sends an email each time tha
![Screenshot that shows sign-in prompt for Office 365 Outlook.](./media/quickstart-create-first-logic-app-workflow/email-service-authentication.png) > [!NOTE]
- > This example shows manual authentication for connecting to Office 365 Outlook. However, other services might
- > support or use different authentication types. Based on your scenario, you can handle connection authentication
- > in various ways.
+ >
+ > This example shows manual authentication for connecting to Office 365 Outlook. However,
+ > other services might support or use different authentication types. Based on your scenario,
+ > you can handle connection authentication in various ways.
>
- > For example, if you use use Azure Resource Manager templates for deployment, you can increase security on inputs
- > that change often by parameterizing values such as connection details. For more information, review these topics:
+ > For example, if you use Azure Resource Manager templates for deployment, you can increase
+ > security on inputs that change often by parameterizing values such as connection details.
+ > For more information, review the following documentation:
> > * [Template parameters for deployment](logic-apps-azure-resource-manager-templates-overview.md#template-parameters) > * [Authorize OAuth connections](logic-apps-deploy-azure-resource-manager-templates.md#authorize-oauth-connections)
This example uses an Office 365 Outlook action that sends an email each time tha
1. In the **To** box, enter the receiver's email address. For this example, use your email address. > [!NOTE]
+ >
> The **Add dynamic content** list appears when you click inside the **To** box and other boxes > for certain input types. This list shows any outputs from previous steps that are available for > you to select as inputs for the current action. You can ignore this list for now. The next step
This example uses an Office 365 Outlook action that sends an email each time tha
![Screenshot showing the "Send an email" action and cursor inside the "Subject" property box with the open dynamic content list and selected trigger output, "Feed title".](./media/quickstart-create-first-logic-app-workflow/send-email-subject-dynamic-content.png) > [!TIP]
+ >
> In the dynamic content list, if no outputs appear from the **When a feed item is published** trigger, > next to the action's header, select **See more**. >
This example uses an Office 365 Outlook action that sends an email each time tha
## Run your workflow
-To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow by selecting **Run Trigger** on the designer toolbar, as shown in the following screenshot.
+To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow from the designer toolbar.
+
+* Open the **Run Trigger** menu, and select **Run**.
-![Screenshot showing the workflow designer and the "Run" button selected on the designer toolbar.](./media/quickstart-create-first-logic-app-workflow/run-logic-app-test.png)
+ ![Screenshot showing the workflow designer and the "Run" button selected on the designer toolbar.](./media/quickstart-create-first-logic-app-workflow/run-logic-app-test.png)
If the RSS feed has new items, your workflow sends an email for each new item. Otherwise, your workflow waits until the next interval to check the RSS feed again.
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
You can also use MLflow to [Query & compare experiments and runs with MLflow](ho
> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench) or Jupyter Notebooks with R kernels is not supported. Model management and registration is not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or Azure ML studio for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r). > - MLflow in Java support is limited to tracking experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked using the MLflow Java SDK. As an alternative, use the `Outputs` folder in jobs along with the method `mlflow.save_model` to save models (or artifacts) you want to capture. View the following [Java example about using the MLflow tracking client with the Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
+### Example notebooks
+
+* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
+* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
+* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
+* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
+* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
+ ## Model registries with MLflow Azure Machine Learning supports MLflow for model management. This support represents a convenient way to support the entire model lifecycle for users who are familiar with the MLflow client. To learn more about how to manage models by using the MLflow API in Azure Machine Learning, view [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
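For example, when your MLflow tracking URI already points to an Azure Machine Learning workspace, a model logged by a run can be registered with the standard MLflow API. The following is a minimal sketch; the run ID, artifact path, and model name are placeholders:

```python
import mlflow

# Register the "model" artifact of a finished run as a new version in the
# workspace registry (run ID and model name are placeholders).
mlflow.register_model(model_uri="runs:/<RUN_ID>/model", name="heart-classifier")
```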
+### Example notebooks
+
+* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
+ ## Model deployments of MLflow models You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) so that you can apply the model management capabilities and no-code deployment offering in Azure Machine Learning. Azure Machine Learning supports deploying models to both real-time and batch endpoints. You can use the `azureml-mlflow` MLflow plug-in, the Azure Machine Learning CLI v2, and the user interface in Azure Machine Learning studio. Learn more at [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md).
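As an illustration, the MLflow deployments client can create an endpoint and a deployment from a registered model when the `azureml-mlflow` plug-in is installed. The following is a minimal sketch that assumes your MLflow tracking URI already points to the workspace; the endpoint, deployment, and model names are placeholders:

```python
import mlflow
from mlflow.deployments import get_deploy_client

# The deployment client targets the same workspace as the tracking URI
deployment_client = get_deploy_client(mlflow.get_tracking_uri())

# Create an online endpoint, then deploy a registered model version to it
deployment_client.create_endpoint(name="my-endpoint")
deployment_client.create_deployment(
    name="default",
    endpoint="my-endpoint",
    model_uri="models:/heart-classifier/1",
)
```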
+### Example notebooks
+
+* [Deploy MLflow to Online Endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints by using the MLflow SDK.
+* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progressive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints by using the MLflow SDK, with progressive rollout and deployment of multiple model versions to the same endpoint.
+* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) by using the MLflow SDK.
+* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also covers the case where you want to track the experiments with the MLflow instance in Azure Databricks.
+ ## Training MLflow projects (preview) You can submit training jobs to Azure Machine Learning by using [MLflow projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud via [Azure Machine Learning compute](./how-to-create-attach-compute-cluster.md). Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
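For instance, you can submit a local MLflow project to an Azure Machine Learning compute cluster with `mlflow.projects.run` by selecting the `azureml` backend. The following is a minimal sketch; the experiment name, compute cluster name, parameters, and the `backend_config` keys shown are assumptions and may differ in your setup:

```python
import mlflow

# Submit the MLproject in the current folder to an Azure ML compute cluster
# (experiment name, cluster name, and parameters are placeholders).
mlflow.set_experiment("mlflow-project-example")
submitted_run = mlflow.projects.run(
    uri=".",
    backend="azureml",
    backend_config={"COMPUTE": "cpu-cluster"},
    parameters={"alpha": 0.3},
)
```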
+### Example notebooks
+
+* [Train an MLflow project on a local compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb)
+* [Train an MLflow project on remote compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-remote/train-projects-remote.ipynb).
+ ## MLflow SDK, Azure Machine Learning v2, and Azure Machine Learning studio capabilities The following table shows which operations are supported by each of the tools available in the machine learning lifecycle.
The following table shows which operations are supported by each of the tools av
> - <sup>3</sup> Some operations may not be supported. View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details. > - <sup>4</sup> Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment. View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details.
-## Example notebooks
--
-If you're getting started with MLflow in Azure Machine Learning, we recommend that you explore the [notebook examples about how to use MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/readme.md):
-
-* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
-* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
-* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
-* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
-* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
-* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
-* [Deploying models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/deploying_with_mlflow.ipynb): Demonstrates how to deploy no-code models in MLflow format to a deployment target in Azure Machine Learning.
-* [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
-* [Migrating models with a scoring script to MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code deployment with MLflow.
-* [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with the MLflow REST API when you're connected to Azure Machine Learning.
## Next steps
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
In general, data access from studio involves the following checks:
- Storage: Does the storage allow public access, or does it restrict access through a service endpoint or a private endpoint? * What operation is being performed? - Create, read, update, and delete (CRUD) operations on a data store/dataset are handled by Azure Machine Learning.
- - The following RBAC operation must be enabled to archive data assets in the Studio: Microsoft.MachineLearningServices/workspaces/datasets/registered/delete
+ - The archive operation on data assets in the Studio requires the following RBAC operation: `Microsoft.MachineLearningServices/workspaces/datasets/registered/delete`.
- Data Access calls (such as preview or schema) go to the underlying storage and need extra permissions. * Where is this operation being run; compute resources in your Azure subscription or resources hosted in a Microsoft subscription? - All calls to dataset and datastore services (except the "Generate Profile" option) use resources hosted in a __Microsoft subscription__ to run the operations.
machine-learning How To Deploy Mlflow Model Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-model-spark-jobs.md
In this article, learn how to deploy and run your [MLflow](https://www.mlflow.or
This example shows how you can deploy an MLflow model registered in Azure Machine Learning to Spark jobs running in [managed Spark clusters (preview)](how-to-submit-spark-jobs.md), Azure Databricks, or Azure Synapse Analytics, to perform inference over large amounts of data.
-It uses an MLflow model based on the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset contains ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements obtained from n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline (regression).
+The model is based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we use a subset of 14 of them. The model tries to predict the presence of heart disease in a patient as an integer value from 0 (no presence) to 1 (presence). It has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The model has been trained using an `scikit-learn` regressor and all the required preprocessing has been packaged as a pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `sdk/python/using-mlflow/deploy`.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo, and then change directories to `sdk/python/using-mlflow/deploy`.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
cd sdk/python/using-mlflow/deploy
Before following the steps in this article, make sure you have the following prerequisites: -- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).-- You must have a MLflow model registered in your workspace. Particularly, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).-- Install the Mlflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.-
- ```bash
- pip install mlflow azureml-mlflow
- ```
-- If you aren't running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+- You must have a MLflow model registered in your workspace. Particularly, this example will register a model trained for the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html).
### Connect to your workspace
Tracking is already configured for you. Your default credentials will also be us
**Configure tracking URI**
-You need to configure MLflow to point to the Azure Machine Learning MLflow tracking URI. The tracking URI has the protocol `azureml://`. You can use MLflow to configure it.
-
-```python
-azureml_tracking_uri = "<AZUREML_TRACKING_URI>"
-mlflow.set_tracking_uri(azureml_tracking_uri)
-```
-
-There are multiple ways to get the Azure Machine Learning MLflow tracking URI. Refer to [Set up tracking environment](how-to-use-mlflow-cli-runs.md) to see all the alternatives.
-
-> [!TIP]
-> When working on shared environments, like for instance an Azure Databricks cluster, Azure Synapse Analytics cluster, or similar, it is useful to configure the environment variable `MLFLOW_TRACKING_URI` to automatically configure the MLflow tracking URI to the desired target for all the sessions running in the cluster rather than to do it on a per-session basis.
**Configure authentication**
-Once the tracking is configured, you'll also need to configure how the authentication needs to happen to the associated workspace. For interactive jobs where there's a user connected to the session, you can rely on Interactive Authentication.
-
-For those scenarios where unattended execution is required, you'll have to configure a service principal to communicate with Azure Machine Learning.
-
-```python
-import os
-
-os.environ["AZURE_TENANT_ID"] = "<AZURE_TENANT_ID>"
-os.environ["AZURE_CLIENT_ID"] = "<AZURE_CLIENT_ID>"
-os.environ["AZURE_CLIENT_SECRET"] = "<AZURE_CLIENT_SECRET>"
-```
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow performs interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
-> [!TIP]
-> When working on shared environments, it is better to configure this environment variables for the entire cluster. As a best practice, manage them as secrets in an instance of Azure Key Vault. For instance, in Azure Databricks, you can use secrets to set this variables as follows: `AZURE_CLIENT_SECRET={{secrets/<scope-name>/<secret-name>}}`. See [Reference a secret in an environment variable](https://learn.microsoft.com/azure/databricks/security/secrets/secrets#reference-a-secret-in-an-environment-variable) for how to do it in Azure Databricks or refer to similar documentation in your platform.
os.environ["AZURE_CLIENT_SECRET"] = "<AZURE_CLIENT_SECRET>"
We need a model registered in the Azure Machine Learning registry to perform inference. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered. ```python
-model_name = 'sklearn-diabetes'
-model_local_path = "sklearn-diabetes/model"
+model_name = 'heart-classifier'
+model_local_path = "model"
registered_model = mlflow_client.create_model_version( name=model_name, source=f"file://{model_local_path}"
Alternatively, if your model was logged inside of a run, you can register it dir
> To register the model, you'll need to know the location where the model has been stored. If you are using `autolog` feature of MLflow, the path will depend on the type and framework of the model being used. We recommend to check the jobs output to identify which is the name of this folder. You can look for the folder that contains a file named `MLModel`. If you are logging your models manually using `log_model`, then the path is the argument you pass to such method. As an example, if you log the model using `mlflow.sklearn.log_model(my_model, "classifier")`, then the path where the model is stored is `classifier`. ```python
-model_name = 'sklearn-diabetes'
+model_name = 'heart-classifier'
registered_model = mlflow_client.create_model_version( name=model_name, source=f"runs:/{RUN_ID}/{MODEL_PATH}"
input_data_path = "dbfs:/data"
The following section explains how to run MLflow models registered in Azure Machine Learning in Spark jobs.
+1. Ensure the following libraries are installed in the cluster:
+
+ :::code language="yaml" source="~/azureml-examples-main/sdk/python/using-mlflow/deploy/model/conda.yaml" range="7-10":::
+
+1. We'll use a notebook to demonstrate how to create a scoring routine with an MLflow model registered in Azure Machine Learning. Create a notebook and use PySpark as the default language.
+
+1. Import the required namespaces:
+
+ ```python
+ import mlflow
+ import pyspark.sql.functions as f
+ ```
+ 1. Configure the model URI. The following URI brings a model named `heart-classifier` in its latest version. ```python
The following section explains how to run MLflow models registered in Azure Mach
1. Load the model as an UDF function. A user-defined function (UDF) is a function defined by a user, allowing custom logic to be reused in the user environment. ```python
- predict_function = mlflow.pyfunc.spark_udf(spark, model_uri, env_manager="local")
+ predict_function = mlflow.pyfunc.spark_udf(spark, model_uri, result_type='double')
``` > [!TIP]
The following section explains how to run MLflow models registered in Azure Mach
> [!TIP] > The `predict_function` receives as arguments the columns required. In our case, all the columns of the data frame are expected by the model and hence `df.columns` is used. If your model requires a subset of the columns, you can introduce them manually. If you model has a signature, types need to be compatible between inputs and expected types.
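1. Apply the UDF function to the data you want to score. The following is a minimal sketch that assumes the input data frame `df` was read in an earlier step and contains the columns the model expects:

    ```python
    # Add a predictions column; every column of df is passed to the model
    scored_data = df.withColumn("predictions", predict_function(*df.columns))
    ```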
+1. You can write your predictions back to storage:
+
+ ```python
+ scored_data_path = "dbfs:/scored-data"
+ scored_data.write.option("header", "true").csv(scored_data_path)
+ ```
## Run the model in a standalone Spark job in Azure Machine Learning
The following section explains how to run MLflow models registered in Azure Mach
print(args.input_data) # Load the model as an UDF function
- predict_function = mlflow.pyfunc.spark_udf(spark, args.model, env_manager="local")
+ predict_function = mlflow.pyfunc.spark_udf(spark, args.model, env_manager="conda")
# Read the data you want to score df = spark.read.option("header", "true").option("inferSchema", "true").csv(input_data).drop("target")
The following section explains how to run MLflow models registered in Azure Mach
The above script takes three arguments `--model`, `--input_data` and `--scored_data`. The first two are inputs and represent the model we want to run and the input data, the last one is an output and it is the output folder where predictions will be placed.
+ > [!TIP]
+ > **Installation of Python packages:** The previous scoring script loads the MLflow model into an UDF function, but it indicates the parameter `env_manager="conda"`. When this parameter is set, MLflow will restore the required packages as specified in the model definition in an isolated environment where only the UDF function runs. For more details see [`mlflow.pyfunc.spark_udf`](https://mlflow.org/docs/latest/python_api/mlflow.pyfunc.html?highlight=env_manager#mlflow.pyfunc.spark_udf) documentation.
+ 1. Create a job definition: __mlflow-score-spark-job.yml__
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Additionally, you will need to:
# [Python (MLflow SDK)](#tab/mlflow) -- Install the Mlflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
+- Install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow `azureml-mlflow`.
```bash pip install mlflow azureml-mlflow ``` -- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
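  For example, the following minimal sketch configures the tracking URI by reading it from the workspace with the Azure Machine Learning SDK v2 (the subscription, resource group, and workspace names are placeholders):

  ```python
  import mlflow
  from azure.ai.ml import MLClient
  from azure.identity import DefaultAzureCredential

  # Connect to the workspace and point MLflow at its tracking URI
  ml_client = MLClient(
      credential=DefaultAzureCredential(),
      subscription_id="<SUBSCRIPTION_ID>",
      resource_group_name="<RESOURCE_GROUP>",
      workspace_name="<WORKSPACE_NAME>",
  )
  mlflow.set_tracking_uri(ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri)
  ```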
# [Studio](#tab/studio)
Use the following steps to deploy an MLflow model with a custom scoring script.
__conda.yml__
- ```yml
+ ```yaml
channels: - conda-forge dependencies:
Use the following steps to deploy an MLflow model with a custom scoring script.
# [Python (Azure ML SDK)](#tab/sdk)
- ```python
+ ```python
environment = Environment( conda_file="sklearn-diabetes/environment/conda.yml", image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
Use the following steps to deploy an MLflow model with a custom scoring script.
Create a deployment configuration file:
- ```yml
+ ```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json name: sklearn-diabetes-custom endpoint_name: my-endpoint
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
Title: Progressive rollout of MLflow models
+ Title: Progressive rollout of MLflow models to Online Endpoints
description: Learn to deploy your MLflow model progressively using MLflow SDK.
ms.devlang: azurecli
-# Progressive rollout of MLflow models
+# Progressive rollout of MLflow models to Online Endpoints
In this article, you'll learn how you can progressively update and deploy MLflow models to Online Endpoints without causing service disruption. You'll use blue-green deployment, also known as a safe rollout strategy, to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
Additionally, you will need to:
pip install mlflow azureml-mlflow ``` -- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
+- If you are not running in Azure Machine Learning compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
So far, the endpoint is empty. There are no deployments on it. Let's create the
__blue-deployment.yml__
- ```yml
+ ```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json name: default endpoint_name: heart-classifier-edp
So far, the endpoint is empty. There are no deployments on it. Let's create the
__sample.yml__
- ```yml
+ ```yaml
{ "input_data": { "columns": [
Let's imagine that there is a new version of the model created by the developmen
__green-deployment.yml__
- ```yml
+ ```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json name: xgboost-model endpoint_name: heart-classifier-edp
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
MLflow includes built-in deployment tools that model developers can use to test
### Batch vs Online endpoints
-Azure Machine Learning supports deploying models to both online and batch endpoints. Online Endpoints compare to [MLflow built-in server](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) and they provide a scalable, synchronous, and lightweight way to run models for inference. Batch Endpoints, on the other hand, provide a way to run asynchronous inference over long running inferencing processes that can scale to big amounts of data. This capability is not present by the moment in MLflow server although similar capability can be achieved using Spark jobs. The rest of this section mostly applies to online endpoints but you can learn more of batch endpoint at [What are Azure Machine Learning endpoints?](concept-endpoints.md).
+Azure Machine Learning supports deploying models to both online and batch endpoints. Online Endpoints are comparable to the [MLflow built-in server](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) and provide a scalable, synchronous, and lightweight way to run models for inference. Batch Endpoints, on the other hand, provide a way to run asynchronous inference over long-running inferencing processes that can scale to large amounts of data. This capability isn't currently present in the MLflow built-in server, although a similar capability can be achieved [using Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md).
+
+The rest of this section mostly applies to online endpoints, but you can learn more about batch endpoints and MLflow models at [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
### Input formats
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
```bash pip install mlflow azureml-mlflow ```
-* If you are doing remote tracking (tracking experiments running outside Azure Machine Learning), configure MLflow to track experiments using Azure Machine Learning. See [Setup your tracking environment](how-to-use-mlflow-cli-runs.md?#set-up-tracking-environment) for more details.
+* If you are doing remote tracking (tracking experiments running outside Azure Machine Learning), configure MLflow to track experiments using Azure Machine Learning. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
## Getting started
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLflow client. The following article describes the different capabilities and how it compares with other options.
-## Support matrix for managing models with MLflow
-
-The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure ML. It also compares it with other models management capabilities in Azure ML.
-
-| Feature | MLflow | Azure ML with MLflow | Azure ML CLIv2 | Azure ML Studio |
-| :- | :-: | :-: | :-: | :-: |
-| Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** |
-| Registering models not in MLflow format | | | **&check;** | **&check;** |
-| Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** |
-| Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | | |
-| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** |
-| Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** |
-| Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** |
-| Editing registered model's versions tags | **&check;** | **&check;** | **&check;** | **&check;** |
-| Renaming registered models | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
-| Deleting a registered model (container) | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
-| Deleting a registered model's version | **&check;** | **&check;** | **&check;** | **&check;** |
-| Manage MLflow model stages | **&check;** | **&check;** | | |
-| Search registered models by name | **&check;** | **&check;** | **&check;** | **&check;**<sup>4</sup> |
-| Search registered models using string comparators `LIKE` and `ILIKE` | **&check;** | | | **&check;**<sup>4</sup> |
-| Search registered models by tag | | | | **&check;**<sup>4</sup> |
-
-> [!NOTE]
-> - <sup>1</sup> Use URIs with format `runs:/<ruin-id>/<path>`.
-> - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>`.
-> - <sup>3</sup> Registered models are immutable objects in Azure ML.
-> - <sup>4</sup> Use search box in Azure ML Studio. Partial match supported.
- ### Prerequisites
-* Install the `azureml-mlflow` package.
-* If you are running outside an Azure ML compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. For more information about how to Set up tracking environment, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment) for more details.
## Registering new models in the registry
client.delete_model_version(model_name, version="2")
> [!NOTE]
> Azure Machine Learning doesn't support deleting the entire model container. To achieve the same thing, you will need to delete all the model versions from a given model.
+
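As a hedged sketch of that workaround (the model name is illustrative), you can enumerate the versions with the MLflow client and delete them one by one:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
model_name = "my-model"  # illustrative name

# Deleting every version is the closest equivalent to deleting the container.
for mv in client.search_model_versions(f"name='{model_name}'"):
    client.delete_model_version(name=model_name, version=mv.version)
```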
+## Support matrix for managing models with MLflow
+
+The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are currently supported in MLflow when connected to Azure ML. It also compares them with other model management capabilities in Azure ML.
+
+| Feature | MLflow | Azure ML with MLflow | Azure ML CLIv2 | Azure ML Studio |
+| :- | :-: | :-: | :-: | :-: |
+| Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** |
+| Registering models not in MLflow format | | | **&check;** | **&check;** |
+| Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** |
+| Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | | |
+| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** |
+| Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** |
+| Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** |
+| Editing registered model's versions tags | **&check;** | **&check;** | **&check;** | **&check;** |
+| Renaming registered models | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
+| Deleting a registered model (container) | **&check;** | <sup>3</sup> | <sup>3</sup> | <sup>3</sup> |
+| Deleting a registered model's version | **&check;** | **&check;** | **&check;** | **&check;** |
+| Manage MLflow model stages | **&check;** | **&check;** | | |
+| Search registered models by name | **&check;** | **&check;** | **&check;** | **&check;**<sup>4</sup> |
+| Search registered models using string comparators `LIKE` and `ILIKE` | **&check;** | | | **&check;**<sup>4</sup> |
+| Search registered models by tag | | | | **&check;**<sup>4</sup> |
+
+> [!NOTE]
+> - <sup>1</sup> Use URIs with format `runs:/<run-id>/<path>`.
+> - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>`.
+> - <sup>3</sup> Registered models are immutable objects in Azure ML.
+> - <sup>4</sup> Use the search box in Azure ML Studio. Partial match is supported.
+
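As a hedged illustration of notes 1 and 2 above (the run ID and artifact path are placeholders), registering a model from a run's output with the MLflow client could look like this:

```python
import mlflow

run_id = "<RUN_ID>"  # placeholder

# Register the "model" folder logged by the run as a new version of "my-model",
# using the runs:/ URI format described in note 1.
mlflow.register_model(f"runs:/{run_id}/model", "my-model")
```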
+## Next steps
+
+- [Logging MLflow models](how-to-log-mlflow-models.md)
+- [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md)
+- [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md)
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
Use MLflow to query and manage all the experiments in Azure Machine Learning. Th
### Prerequisites
-* Install `azureml-mlflow` plug-in.
-* If you're running in a compute not hosted in Azure ML, configure MLflow to point to the Azure ML tracking URL. You can follow the instruction at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md).
## Getting all the experiments
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
In this article, learn how to enable MLflow's tracking URI and logging API, coll
## Prerequisites
-* Install the `azureml-mlflow` package.
-* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md).
- * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
- * Configure MLflow for tracking in Azure Machine Learning, as explained in the next section.
-### Set up tracking environment
+### Connect to your workspace
-To configure MLflow for working with Azure Machine Learning, you need to point your MLflow environment to the Azure Machine Learning MLflow Tracking URI.
+First, let's connect MLflow to your Azure Machine Learning workspace.
-> [!NOTE]
-> When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.
-
-# [Using the Azure ML SDK v2](#tab/azuremlsdk)
--
-You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
-
-1. Using the workspace configuration file:
-
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
-
- ml_client = MLClient.from_config(credential=DefaultAzureCredential()
- azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
+# [Azure Machine Learning compute](#tab/aml)
- > [!TIP]
- > You can download the workspace configuration file by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> Download config file.
- > 3. Save the file `config.json` in the same directory where you are working on.
+Tracking is already configured for you. Your default credentials will also be used when working with MLflow.
-1. Using the subscription ID, resource group name and workspace name:
+# [Remote compute](#tab/remote)
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
+**Configure tracking URI**
- #Enter details of your AzureML workspace
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace_name = '<AZUREML_WORKSPACE_NAME>'
- ml_client = MLClient(credential=DefaultAzureCredential(),
- subscription_id=subscription_id,
- resource_group_name=resource_group)
+**Configure authentication**
- azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
- > [!IMPORTANT]
- > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
-
-# [Using an environment variable](#tab/environ)
--
-Another option is to set one of the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) directly in your terminal.
-
-```Azure CLI
-export MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g')
-```
-
->[!IMPORTANT]
-> Make sure you are logged in to your Azure account on your local machine, otherwise the tracking URI returns an empty string. If you are using any Azure ML compute the tracking environment and experiment name is already configured.
-
-# [Building the MLflow tracking URI](#tab/build)
-
-The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how:
-
-```python
-import mlflow
-
-region = ""
-subscription_id = ""
-resource_group = ""
-workspace_name = ""
-
-azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
-mlflow.set_tracking_uri(azureml_mlflow_uri)
-```
-
-> [!NOTE]
-> You can also get this URL by:
-> 1. Navigate to [Azure ML studio](https://ml.azure.com)
-> 2. Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
-> 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
If you prefer to manage your tracked experiments in a centralized location, you
You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as it is demonstrated in the following example:
- # [Using the Azure ML SDK v2](#tab/azuremlsdk)
-
- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-
- You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
-
- a. Using the workspace configuration file:
-
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
-
- ml_client = MLClient.from_config(credential=DefaultAzureCredential()
- azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!TIP]
- > You can download the workspace configuration file by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> Download config file.
- > 3. Save the file `config.json` in the same directory where you are working on.
-
- b. Using the subscription ID, resource group name and workspace name:
-
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
-
- #Enter details of your AzureML workspace
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace_name = '<AZUREML_WORKSPACE_NAME>'
-
- ml_client = MLClient(credential=DefaultAzureCredential(),
- subscription_id=subscription_id,
- resource_group_name=resource_group)
-
- azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!IMPORTANT]
- > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
-
- # [Using an environment variable](#tab/env)
-
- Another option is to set one of the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) directly in your cluster. This has the advantage of doing the configuration only once per compute cluster. In Azure Databricks, you can configure environment variables using the cluster configuration page.
-
- ![Configure the environment variables in an Azure Databricks cluster](./media/how-to-use-mlflow-azure-databricks/env.png)
-
- After the environment variable is configured, any experiment running in such cluster will be tracked in Azure Machine Learning.
-
- > [!NOTE]
- > You can get the tracking URL for your Azure Machine Learning workspace by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
- > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
-
- # [Building the MLflow tracking URI](#tab/build)
-
- For workspaces not deployed in a private network, the Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how:
-
- ```python
- import mlflow
-
- region = ""
- subscription_id = ""
- resource_group = ""
- workspace_name = ""
-
- azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!NOTE]
- > You can also get this URL by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
- > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
-
-
+**Configure tracking URI**
+
+1. Get the tracking URI for your workspace:
+
+ # [Azure CLI](#tab/cli)
+
+ [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
+
+ 1. Login and configure your workspace:
+
+ ```bash
+ az account set --subscription <subscription>
+ az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ ```
+
+ 1. You can get the tracking URI using the `az ml workspace` command:
+
+ ```bash
+ az ml workspace show --query mlflow_tracking_uri
+ ```
+
+ # [Python](#tab/python)
+
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the compute you are using. The following sample gets the unique MLflow tracking URI associated with your workspace.
+
+ 1. Log in to your workspace using the `MLClient`. The easiest way to do that is by using the workspace config file:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+ ```
+
+ > [!TIP]
+ > You can download the workspace configuration file by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com)
+ > 2. Click on the upper-right corner of the page -> Download config file.
+ > 3. Save the file `config.json` in the directory where you are working.
+
+ 1. Alternatively, you can use the subscription ID, resource group name and workspace name to get it:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group)
+ ```
+
+ > [!IMPORTANT]
+ > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in [`azure.identity`](https://pypi.org/project/azure-identity/) package.
+
+ 1. Get the Azure Machine Learning Tracking URI:
+
+ ```python
+ mlflow_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
+ ```
+
+ # [Studio](#tab/studio)
+
+ Use the Azure Machine Learning portal to get the tracking URI:
+
+ 1. Open the [Azure Machine Learning studio portal](https://ml.azure.com) and log in using your credentials.
+ 1. In the upper right corner, click on the name of your workspace to show the __Directory + Subscription + Workspace__ blade.
+ 1. Click on __View all properties in Azure Portal__.
+ 1. On the __Essentials__ section, you will find the property __MLflow tracking URI__.
+
+
+ # [Manually](#tab/manual)
+
+ The Azure Machine Learning Tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
+
+ > [!WARNING]
+ > If you are working in a private link-enabled workspace, the MLflow endpoint will also use a private link to communicate with Azure Machine Learning. As a consequence, the tracking URI will look different than shown here. In those cases, you need to get the tracking URI using the Azure ML SDK or CLI v2.
+
+ ```python
+ region = "<LOCATION>"
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AML_WORKSPACE_NAME>'
+
+ mlflow_tracking_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
+ ```
+
+1. Configure the tracking URI:
+
+ # [Using MLflow SDK](#tab/mlflow)
+
+ Use the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) to point MLflow to the tracking URI you obtained.
+
+ ```python
+ import mlflow
+
+ mlflow.set_tracking_uri(mlflow_tracking_uri)
+ ```
+
+ # [Using environment variables](#tab/environ)
+
+ You can set the MLflow environment variable [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) in your compute so that any interaction with MLflow in that compute points to Azure Machine Learning by default.
+
+ ```bash
+ export MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g')
+ ```
+
+
+
+ > [!TIP]
+ > When working on shared environments, like an Azure Databricks cluster, Azure Synapse Analytics cluster, or similar, it is useful to set the environment variable `MLFLOW_TRACKING_URI` at the cluster level to automatically configure the MLflow tracking URI to point to Azure Machine Learning for all the sessions running in the cluster rather than to do it on a per-session basis.
+ >
+ > ![Configure the environment variables in an Azure Databricks cluster](./media/how-to-use-mlflow-azure-databricks/env.png)
+ >
+ > Once the environment variable is configured, any experiment running in such cluster will be tracked in Azure Machine Learning.
++
+**Configure authentication**
+
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
+ #### Experiment's names in Azure Machine Learning
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
Azure Synapse Analytics can be configured to track experiments using MLflow to A
To use Azure Machine Learning as your centralized repository for experiments, you can leverage MLflow. In each notebook you work on, you have to configure the tracking URI to point to the workspace you will be using. The following example shows how it can be done:
- # [Using the Azure ML SDK v2](#tab/azuremlsdk)
-
- [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v2.md)]
+__Configure tracking URI__
- You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLFLow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
- a. Using the workspace configuration file:
+__Configure authentication__
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
- ml_client = MLClient.from_config(credential=DefaultAzureCredential()
- azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!TIP]
- > You can download the workspace configuration file by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> Download config file.
- > 3. Save the file `config.json` in the same directory where you are working on.
-
- b. Using the subscription ID, resource group name and workspace name:
-
- ```Python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- import mlflow
-
- #Enter details of your AzureML workspace
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace_name = '<AZUREML_WORKSPACE_NAME>'
-
- ml_client = MLClient(credential=DefaultAzureCredential(),
- subscription_id=subscription_id,
- resource_group_name=resource_group)
-
- azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!IMPORTANT]
- > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
-
- # [Building the MLflow tracking URI](#tab/build)
-
- The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how:
-
- ```python
- import mlflow
-
- region = ""
- subscription_id = ""
- resource_group = ""
- workspace_name = ""
-
- azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
- mlflow.set_tracking_uri(azureml_mlflow_uri)
- ```
-
- > [!NOTE]
- > You can also get this URL by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
- > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
-
-
### Experiment's names in Azure Machine Learning
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Title: Track ML experiments and models with MLflow
-description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models with MLflow
+description: Use MLflow to log metrics and artifacts from machine learning runs
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
## Prerequisites
-* Install the `mlflow` package.
- * You can use the [MLflow Skinny](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This is recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features including deployments.
-
-* Install the `azureml-mlflow` package.
-* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md).
- * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
* (Optional) Install and [set up Azure ML CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the ml extension.
* (Optional) Install and set up Azure ML SDK (v2) for Python.
-## Track runs from your local machine or remote compute
+### Connect to your workspace
-Tracking using MLflow with Azure Machine Learning lets you store the logged metrics and artifacts runs that were executed on your local machine into your Azure Machine Learning workspace.
+First, let's connect to the Azure Machine Learning workspace where your model is registered.
-### Set up tracking environment
+# [Azure Machine Learning compute](#tab/aml)
-To track a run that is not running on Azure Machine Learning compute, you need to point MLflow to the Azure Machine Learning MLflow Tracking URI.
+Tracking is already configured for you. Your default credentials will also be used when working with MLflow.
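For example, a minimal hedged sketch of what this looks like on Azure Machine Learning compute, where no tracking configuration is needed (the metric and parameter names are illustrative):

```python
import mlflow

# No set_tracking_uri() call is needed on Azure Machine Learning compute.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.91)
```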
-> [!NOTE]
-> When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.
+# [Remote compute](#tab/remote)
-1. Getting the Azure Machine Learning Tracking URI:
-
- # [Python](#tab/python)
-
- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-
- You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the compute you are using. The following sample gets the unique MLFLow tracking URI associated with your workspace.
-
- 1. Login into your workspace using the `MLClient`. The easier way to do that is by using the workspace config file:
-
- ```python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
-
- ml_client = MLClient.from_config(credential=DefaultAzureCredential())
- ```
-
- > [!TIP]
- > You can download the workspace configuration file by:
- > 1. Navigate to [Azure ML studio](https://ml.azure.com)
- > 2. Click on the uper-right corner of the page -> Download config file.
- > 3. Save the file `config.json` in the same directory where you are working on.
-
- 1. Alternatively, you can use the subscription ID, resource group name and workspace name to get it:
-
- ```python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
-
- #Enter details of your AzureML workspace
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace_name = '<AML_WORKSPACE_NAME>'
-
- ml_client = MLClient(credential=DefaultAzureCredential(),
- subscription_id=subscription_id,
- resource_group_name=resource_group)
- ```
-
- > [!IMPORTANT]
- > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
-
- 1. Get the Azure Machine Learning Tracking URI:
-
- ```python
- mlflow_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
- ```
-
- # [Azure CLI](#tab/cli)
-
- [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-
- You can use the Azure ML CLI v2 to get the MLflow tracking URI.
-
- 1. Login and configure your workspace:
-
- ```bash
- az account set --subscription <subscription>
- az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
- ```
-
- 1. You can get the tracking URI using the `az ml workspace` command:
-
- ```bash
- az ml workspace show --query mlflow_tracking_uri
- ```
+**Configure tracking URI**
- # [Studio](#tab/studio)
-
- Use the Azure Machine Learning portal to get the tracking URI:
-
- 1. Open the [Azure Machine Learning studio portal](https://ml.azure.com) and log in using your credentials.
- 1. In the upper right corner, click on the name of your workspace to show the __Directory + Subscription + Workspace__ blade.
- 1. Click on __View all properties in Azure Portal__.
- 1. On the __Essentials__ section, you will find the property __MLflow tracking URI__.
-
- # [Manually](#tab/manual)
-
- The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how:
-
- > [!WARNING]
- > If you are working in a private link-enabled workspace, the MLflow endpoint will also use a private link to communicate with Azure Machine Learning. As a consequence, the tracking URI will look different as proposed here. On those cases, you need to get the tracking URI using the Azure ML SDK or CLI v2.
-
- ```python
- region = "<LOCATION>"
- subscription_id = '<SUBSCRIPTION_ID>'
- resource_group = '<RESOURCE_GROUP>'
- workspace_name = '<AML_WORKSPACE_NAME>'
-
- mlflow_tracking_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
- ```
+**Configure authentication**
-1. Configuring the tracking URI:
+Once the tracking is configured, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for additional ways to configure authentication for MLflow in Azure Machine Learning workspaces.
- # [Using MLflow SDK](#tab/mlflow)
-
- Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
-
- ```python
- import mlflow
-
- mlflow.set_tracking_uri(mlflow_tracking_uri)
- ```
- # [Using an environment variable](#tab/environ)
-
- You can set the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) in your compute to make any interaction with MLflow in that compute to point by default to Azure Machine Learning.
-
- ```bash
- MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g')
- ```
+ ### Set experiment name
When submitting runs using jobs, Azure Machine Learning automatically configures
### Creating a training routine

First, you should create a `src` subdirectory and create a file with your training code in a `hello_world.py` file in the `src` subdirectory. All your training code will go into the `src` subdirectory, including `train.py`. The training code is taken from this [MLflow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning example repo.
machine-learning How To Use Mlflow Configure Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
+
+ Title: Configure MLflow for Azure Machine Learning
+
+description: Connect MLflow to Azure Machine Learning workspaces to log metrics, artifacts and deploy models.
++++++ Last updated : 11/04/2022++
+ms.devlang: azurecli
+++
+# Configure MLflow for Azure Machine Learning
+
+Azure Machine Learning workspaces are MLflow-compatible, which means they can act as an MLflow server without any extra configuration. Each workspace has an MLflow tracking URI that can be used by MLflow to connect to the workspace. In this article, learn how you can configure MLflow to connect to an Azure Machine Learning workspace for tracking, registries, and deployment.
+
+> [!IMPORTANT]
+> When running on Azure Compute (Azure ML Notebooks, Jupyter notebooks hosted on Azure ML Compute Instances, or jobs running on Azure ML compute clusters) you don't have to configure the tracking URI. It's automatically configured for you.
+
+## Prerequisites
+
+You will need the following prerequisites to follow this tutorial:
+++
+## Configure MLflow tracking URI
+
+To connect MLflow to an Azure Machine Learning workspace, you need the tracking URI for the workspace. Each workspace has its own tracking URI, which uses the protocol `azureml://`.
++
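As a hedged end-to-end sketch (assuming the `azure-ai-ml` and `azureml-mlflow` packages are installed and the placeholder values are replaced with your own), you can retrieve the tracking URI with the SDK v2 and hand it to MLflow:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
import mlflow

# Placeholders: replace with your own workspace details.
ml_client = MLClient(credential=DefaultAzureCredential(),
                     subscription_id="<SUBSCRIPTION_ID>",
                     resource_group_name="<RESOURCE_GROUP>",
                     workspace_name="<WORKSPACE_NAME>")

mlflow_tracking_uri = ml_client.workspaces.get("<WORKSPACE_NAME>").mlflow_tracking_uri
mlflow.set_tracking_uri(mlflow_tracking_uri)
```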
+## Configure authentication
+
+Once the tracking is set, you'll also need to configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials.
+
+The Azure Machine Learning plugin for MLflow supports several authentication mechanisms through the package `azure-identity`, which is installed as a dependency for the plugin `azureml-mlflow`. The following authentication methods are tried one by one until one of them succeeds:
+
+1. __Environment__: it will read account information specified via environment variables and use it to authenticate.
+1. __Managed Identity__: If the application is deployed to an Azure host with Managed Identity enabled, it will authenticate with it.
+1. __Azure CLI__: if a user has signed in via the Azure CLI `az login` command, it will authenticate as that user.
+1. __Azure PowerShell__: if a user has signed in via Azure PowerShell's `Connect-AzAccount` command, it will authenticate as that user.
+1. __Interactive browser__: it will interactively authenticate a user via the default browser.
++
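For example, a hedged sketch of non-interactive authentication with a service principal through the __Environment__ option (the values are placeholders; the variable names are the ones read by `azure-identity`):

```python
import os

# Placeholders: set these before any MLflow call so the environment
# credential can authenticate as your service principal.
os.environ["AZURE_TENANT_ID"] = "<TENANT_ID>"
os.environ["AZURE_CLIENT_ID"] = "<CLIENT_ID>"
os.environ["AZURE_CLIENT_SECRET"] = "<CLIENT_SECRET>"
```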
+> [!NOTE]
+> If you'd rather use a certificate instead of a secret, you can set the environment variable `AZURE_CLIENT_CERTIFICATE_PATH` to the path of a `PEM` or `PKCS12` certificate file (including the private key), and `AZURE_CLIENT_CERTIFICATE_PASSWORD` to the password of the certificate file, if any.
+
+## Set experiment name (optional)
+
+All MLflow runs are logged to the active experiment. By default, runs are logged to an experiment named `Default` that is automatically created for you. You can configure the experiment that's used for tracking.
+
+> [!TIP]
+> When submitting jobs using Azure ML CLI v2, you can set the experiment name using the property `experiment_name` in the YAML definition of the job. You don't have to configure it on your training script. See [YAML: display name, experiment name, description, and tags](reference-yaml-job-command.md#yaml-display-name-experiment-name-description-and-tags) for details.
++
+# [MLflow SDK](#tab/mlflow)
+
+To configure the experiment you want to work on, use the MLflow command [`mlflow.set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment).
+
+```Python
+import mlflow
+
+experiment_name = 'experiment_with_mlflow'
+mlflow.set_experiment(experiment_name)
+```
+
+# [Using environment variables](#tab/environ)
+
+You can also set one of the MLflow environment variables [MLFLOW_EXPERIMENT_NAME or MLFLOW_EXPERIMENT_ID](https://mlflow.org/docs/latest/cli.html#cmdoption-mlflow-run-arg-uri) with the experiment name.
+
+```bash
+export MLFLOW_EXPERIMENT_NAME="experiment_with_mlflow"
+```
+++
+## Next steps
+
+Now that your environment is connected to your workspace in Azure Machine Learning, you can start to work with it.
+
+- [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md)
+- [Manage model registries in Azure Machine Learning with MLflow]()
+- [Train with MLflow Projects (Preview)](how-to-train-mlflow-projects.md)
+- [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md)
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
# Migrate logging from SDK v1 to SDK v2
-The Azure Machine Learning Python SDK v2 does not provide native logging APIs. Instead, we recommend that you use [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). If you're migrating from SDK v1 to SDK v2, use the information in this section to understand the MLflow equivalents of SDK v1 logging APIs.
+Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiments via the Azure Machine Learning Python SDK, the Azure Machine Learning CLI, or Azure Machine Learning studio. We recommend using MLflow for tracking experiments.
-## Setup
+If you're migrating from SDK v1 to SDK v2, use the information in this section to understand the MLflow equivalents of SDK v1 logging APIs.
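As a hedged illustration of that mapping (not one of the article's own samples; the metric name and value are placeholders), logging a metric with SDK v1 and its MLflow equivalent could look like this:

```python
# SDK v1 (azureml-core): log a metric from inside a submitted run.
from azureml.core import Run

run = Run.get_context()
run.log("accuracy", 0.91)

# MLflow equivalent: log the same metric to the active run.
import mlflow

mlflow.log_metric("accuracy", 0.91)
```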
-To use MLflow tracking, import `mlflow` and optionally set the tracking URI for your workspace. If you're training on an Azure Machine Learning compute resource, such as a compute instance or compute cluster, the tracking URI is set automatically. If you're using a different compute resource, such as your laptop or desktop, you need to set the tracking URI.
+## Why MLflow?
-```python
-import mlflow
-
-# The rest of this is only needed if you are not using an Azure ML compute
-## Construct AzureML MLFLOW TRACKING URI
-def get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace):
-return "azureml://{}.api.azureml.ms/mlflow/v1.0/subscriptions/{}/resourceGroups/{}/providers/Microsoft.MachineLearningServices/workspaces/{}".format(region, subscription_id, resource_group, workspace)
+MLflow, with over 13 million monthly downloads, has become the standard platform for end-to-end MLOps, enabling teams of all sizes to track, share, package, and deploy any model for batch or real-time inference. By integrating with MLflow, your training code doesn't need to include any code specific to Azure Machine Learning, achieving true portability and seamless integration with other open-source platforms.
-region='<REGION>' ## example: westus
-subscription_id = '<SUBSCRIPTION_ID>' ## example: 11111111-1111-1111-1111-111111111111
-resource_group = '<RESOURCE_GROUP>' ## example: myresourcegroup
-workspace = '<AML_WORKSPACE_NAME>' ## example: myworkspacename
+## Prepare for migrating to MLflow
-MLFLOW_TRACKING_URI = get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace)
+To use MLflow tracking, you need to install the `mlflow` and `azureml-mlflow` Python packages. All Azure Machine Learning environments already have these packages available, but you need to include them if you create your own environment.
-## Set the MLFLOW TRACKING URI
-mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
+```bash
+pip install mlflow azureml-mlflow
```
+> [!TIP]
+> You can use the [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) package, which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It's recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features, including deployments.
+
+## Connect to your workspace
+
+Azure Machine Learning lets you track training jobs that run on your workspace or remotely (tracking experiments running outside Azure Machine Learning). If you're performing remote tracking, you need to indicate the workspace that you want MLflow to connect to.
+
+# [Azure Machine Learning compute](#tab/aml)
+
+You are already connected to your workspace when running on Azure Machine Learning compute.
+
+# [Remote compute](#tab/remote)
+
+**Configure tracking URI**
++
+**Configure authentication**
+
+Once the tracking is configured, you'll also need to configure how the authentication needs to happen to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for more ways to configure authentication for MLflow in Azure Machine Learning workspaces.
++++

## Experiments and runs

__SDK v1__
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
This section lists basic limits and throttling thresholds in Azure Machine Learn
| Limit | Value |
| | |
| Metric names per run |50|
-| Metric rows per metric name |10 million|
+| Metric rows per metric name |1 million|
| Columns per metric row |15|
| Metric column name length |255 characters |
| Metric column value length |255 characters |
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
# Quickstart: Create an Azure Managed Grafana instance using the Azure CLI
-Get started by creating an Azure Managed Grafana workspace using the Azure CLI. Creating a workspace will generate a Managed Grafana instance.
+Get started by creating an Azure Managed Grafana workspace using the Azure CLI. Creating a workspace will generate an Azure Managed Grafana instance.
-## Prerequisite
+## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Minimum permission required to create a new instance: resource group Contributor.-- Minimum permission required to access an instance: Grafana Viewer permission on the Azure Managed Grafana instance.
- > [!NOTE]
- > Permission to access Azure Managed Grafana instances can only be granted by subscription Owners or User Access Administrators. If you don't have this permission, ask someone with the right access to assist you.
+- An Azure account for work or school and an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- The [Azure CLI](/cli/azure/install-azure-cli).
+- Minimum required role to create an instance: resource group Contributor.
+- Minimum required role to access an instance: resource group Owner.
+ >[!NOTE]
+ > If you don't meet this requirement, once you've created a new Azure Managed Grafana instance, ask a User Access Administrator, subscription Owner or resource group Owner to grant you a Grafana Admin, Grafana Editor or Grafana Viewer role on the instance.
## Sign in to Azure
az login
This command will prompt your web browser to launch and load an Azure sign-in page.
-The CLI experience for Azure Managed Grafana is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run the `az grafana` command.
+The CLI experience for Azure Managed Grafana is part of the `amg` extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run the `az grafana` command.
## Create a resource group
In the preceding steps, you created an Azure Managed Grafana workspace in a new
## Next steps

> [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
# Quickstart: Create an Azure Managed Grafana instance using the Azure portal
-Get started by creating an Azure Managed Grafana workspace using the Azure portal. Creating a workspace will generate a Managed Grafana instance.
+Get started by creating an Azure Managed Grafana workspace using the Azure portal. Creating a workspace will generate an Azure Managed Grafana instance.
-## Prerequisite
+## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Minimum permission required to create a new instance: resource group Contributor.-- Minimum permission required to access an instance: Grafana Viewer permission on the Azure Managed Grafana instance.
- > [!NOTE]
- > Permission to access Azure Managed Grafana instances can only be granted by subscription Owners or User Access Administrators. If you don't have this permission, ask someone with the right access to assist you.
+- An Azure account for work or school and an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Minimum required role to create an instance: resource group Contributor.
+- Minimum required role to access an instance: resource group Owner.
+ >[!NOTE]
+ > If you don't meet this requirement, once you've created a new Azure Managed Grafana instance, ask a User Access Administrator, subscription Owner or resource group Owner to grant you a Grafana Admin, Grafana Editor or Grafana Viewer role on the instance.
## Create a Managed Grafana workspace
In the preceding steps, you created an Azure Managed Grafana workspace in a new
## Next steps

> [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Japan | Japan East or Japan West
Jio India | Jio India West Korea | Korea Central Norway | Norway East
+Sweden | Sweden Central
Switzerland | Switzerland North United Arab Emirates | UAE North United Kingdom | UK South or UK West
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
To avoid issues while setting up customer-managed data encryption during restore
> [!NOTE]
> Using the same identity and key as on the source server is not mandatory when performing a restore.
+## Limitations
+
+For Azure Database for MySQL flexible server, support for encryption of data at rest using customer managed keys (CMK) has the following limitation:
+
+* This feature is only supported for key vaults that allow public access from all networks.
+ ## Next steps - [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md)
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
The following steps prepare and configure the MySQL server hosted on-premises, i
1. Open the configuration file to edit it and locate the **mysqld** section in the file.
1. In the mysqld section, add the following line:
- ```bash
- log-bin=mysql-bin.log
- ```
+ ```bash
+ log-bin=mysql-bin.log
+ ```
1. Restart the MySQL service on the source server for the changes to take effect.
1. After the server is restarted, verify that binary logging is enabled by running the same query as before:
The results should appear similar to the following. Make sure to note the binary
:::image type="content" source="./media/how-to-data-in-replication/master-status.png" alt-text="Master Status Results":::
-#### [Azure Data Studio](#tab/azure-data-studio)
-
-<--Content here-->
- ## Dump and restore the source server
mysql How To Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-out-replication.md
Last updated 12/30/2022 -+ # How to configure Azure Database for MySQL Flexible Server data-out replication
-[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
This article describes how to set up Data-out replication in Azure Database for MySQL Flexible Server by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
The results should appear similar to the following. Make sure to note the binary
:::image type="content" source="media/how-to-data-out-replication/mysql-workbench-result.png" alt-text="Screenshot of results.":::
-#### [Azure Data Studio](#tab/azure-data-studio)
-
-<--Content here-->
- ## Dump and restore the source server.
Follow the below steps if the source server has existing data to migrate to the
You can use mysqldump to dump databases from your primary server. For more details, visit [Dump & Restore](../single-server/concepts-migrate-dump-restore.md). It's unnecessary to dump the MySQL library and test library.

1. Set the source server to read/write mode.
+
   After dumping the database, change the source MySQL server to read/write mode.

   ```sql
Restore the dump file to the server created in the Azure Database for MySQL Flex
1. Set the replica server by connecting to it and opening the MySQL shell on the replica server. From the prompt, run the following operation, which configures several MySQL replication settings at the same time:
+ ```sql
   CHANGE REPLICATION SOURCE TO SOURCE_HOST='<master_host>', SOURCE_USER='<master_user>', SOURCE_PASSWORD='<master_password>', SOURCE_LOG_FILE='<master_log_file>', SOURCE_LOG_POS=<master_log_pos>;
+ ```
- master_host: hostname of the source server (example - 'source.mysql.database.azure.com')
- master_user: username for the source server (example - 'syncuser'@'%')
Restore the dump file to the server created in the Azure Database for MySQL Flex
If the replica server is hosted in an Azure VM, set **Allow access to Azure services** to **ON** on the source to allow the source and replica servers to communicate. This setting can be changed from the connection security options. For more information, visit [Manage firewall rules using the portal](how-to-manage-firewall-portal.md). If you used mydumper/myloader to dump the database, you could get the master_log_file and master_log_pos from the /backup/metadata file.
-
+ ## Next step - Learn more about [Data-out replication](concepts-data-out-replication.md)
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
Whenever possible, a Managed Identity is the preferred authentication method bec
When scanning a source, you have a choice to scan the entire data source or choose only specific entities (folders/tables) to scan. Available options depend on the source you're scanning, and can be defined for both one-time and scheduled scans.
-For example, when [creating and running a scan for an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan), you can choose which tables to scan, or select the entire database.
+For example, when [creating and running a scan for an Azure SQL Database](register-scan-azure-sql-database.md#create-the-scan), you can choose which tables to scan, or select the entire database.
### Scan rule set
The technical metadata or classifications identified by the scanning process are
For more information, or for specific instructions for scanning sources, follow the links below. * To understand resource sets, see our [resource sets article](concept-resource-sets.md).
-* [How to govern an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan)
+* [How to govern an Azure SQL Database](register-scan-azure-sql-database.md#create-the-scan)
* [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | No | || [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | No |
-|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register) |[Yes](register-scan-azure-sql-database.md#scan)| [Yes (Preview)](register-scan-azure-sql-database.md#lineagepreview) | [Yes](register-scan-azure-sql-database.md#access-policy) (Preview) | No |
+|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register-the-data-source) |[Yes](register-scan-azure-sql-database.md#scope-and-run-the-scan)| [Yes (Preview)](register-scan-azure-sql-database.md#extract-lineage-preview) | [Yes](register-scan-azure-sql-database.md#set-up-access-policies) (Preview) | No |
|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | No | No | || [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| No | |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | No |
The table below shows the supported capabilities for each data source. Select th
\* Besides the lineage on assets within the data source, lineage is also supported if a dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).

> [!NOTE]
-> Currently, the Microsoft Purview Data Map can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
+> Currently, the Microsoft Purview Data Map can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#create-the-scan).
> [!IMPORTANT] > If you plan on using a self-hosted integration runtime, scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, JDK, Visual C++ Redistributable, or specific driver.
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
To learn how to add permissions on each resource type within a subscription or r
- [Azure Blob Storage](register-scan-azure-blob-storage-source.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan) - [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan)-- [Azure SQL Database](register-scan-azure-sql-database.md#authentication-for-a-scan)
+- [Azure SQL Database](register-scan-azure-sql-database.md#configure-authentication-for-a-scan)
- [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md#authentication-for-registration) - [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Title: 'Discover and govern Azure SQL DB'
-description: This article outlines the process to register an Azure SQL database in Microsoft Purview including instructions to authenticate and interact with the Azure SQL DB source
+ Title: 'Discover and govern Azure SQL Database'
+description: Learn how to register, authenticate with, and interact with an Azure SQL database in Microsoft Purview.
# Discover and govern Azure SQL Database in Microsoft Purview
-This article outlines the process to register an Azure SQL data source in Microsoft Purview including instructions to authenticate and interact with the Azure SQL database source
+This article outlines the process to register an Azure SQL database source in Microsoft Purview. It includes instructions to authenticate and interact with the SQL database.
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
+|Metadata extraction| Full scan |Incremental scan|Scoped scan|Classification|Access policy|Lineage|Data sharing|
|||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (Preview)](#access-policy) | [Yes](#lineagepreview)(Preview)** | No |
+| [Yes](#register-the-data-source) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan)| [Yes (preview)](#set-up-access-policies) | [Yes (preview)](#extract-lineage-preview) | No |
-\** Lineage is also supported if Azure SQL tables/views used as source/sink in [Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md)
-
-* Data lineage extraction is currently supported only for Stored procedure runs
-
-When scanning Azure SQL Database, Microsoft Purview supports:
+> [!NOTE]
+> Data lineage extraction is currently supported only for stored procedure runs. Lineage is also supported if Azure SQL tables or views are used as a source/sink in [Azure Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md).
-- Extracting technical metadata including:
+When you're scanning Azure SQL Database, Microsoft Purview supports extracting technical metadata from these sources:
- - Server
- - Database
- - Schemas
- - Tables including the columns
- - Views including the columns
- - Store procedures (with lineage extraction enabled)
- - Store procedure runs (with lineage extraction enabled)
+- Server
+- Database
+- Schemas
+- Tables, including columns
+- Views, including columns
+- Stored procedures (with lineage extraction enabled)
+- Stored procedure runs (with lineage extraction enabled)
-When setting up scan, you can further scope the scan after providing the database name by selecting tables and views as needed.
+When you're setting up a scan, you can further scope it after providing the database name by selecting tables and views as needed.
### Known limitations
-* Microsoft Purview doesn't support over 800 columns in the Schema tab and it will show "Additional-Columns-Truncated" if there are more than 800 columns.
-* Column level lineage is currently not supported in the lineage tab. However, the columnMapping attribute in properties tab of Azure SQL Stored Procedure Run captures column lineage in plain text.
-* Data lineage extraction is currently not supported for Functions, Triggers.
-* Lineage extraction scan is scheduled and defaulted to run every six hours. Frequency can't be changed.
-* If SQL views are referenced in stored procedures, they're captured as SQL tables currently.
-* Lineage extraction is currently not supported if your Azure SQL Server disables public access or doesn't allow Azure services to access it.
+* Microsoft Purview supports a maximum of 800 columns on the schema tab. If there are more than 800 columns, Microsoft Purview will show **Additional-Columns-Truncated**.
+* Column-level lineage is currently not supported on the lineage tab. However, the `columnMapping` attribute on the properties tab for SQL stored procedure runs captures column lineage in plain text.
+* Data lineage extraction is currently not supported for functions or triggers.
+* The lineage extraction scan is scheduled to run every six hours by default. The frequency can't be changed.
+* If SQL views are referenced in stored procedures, they're currently captured as SQL tables.
+* Lineage extraction is currently not supported if your logical server in Azure disables public access or doesn't allow Azure services to access it.
## Prerequisites
When setting up scan, you can further scope the scan after providing the databas
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* Data Source Administrator and Data Reader permissions, so you can register a source and manage it in the Microsoft Purview governance portal. For details, see [Access control in the Microsoft Purview governance portal](catalog-permissions.md).
+
+## Register the data source
-## Register
+Before you scan, it's important to register the data source in Microsoft Purview:
-This section will enable you to register the Azure SQL DB data source and set up authentication to scan.
+1. In the [Azure portal](https://portal.azure.com), go to the **Microsoft Purview accounts** page and select your Microsoft Purview account.
-### Steps to register
+1. Under **Open Microsoft Purview Governance Portal**, select **Open**, and then select **Data Map**.
-It's important to register the data source in Microsoft Purview before setting up a scan.
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-open-purview-studio.png" alt-text="Screenshot that shows the area for opening a Microsoft Purview governance portal.":::
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
+1. Create the [collection hierarchy](./quickstart-create-collection.md) by going to **Collections** and then selecting **Add a collection**. Assign permissions to individual subcollections as required.
-1. **Open Microsoft Purview governance portal** and navigate to the **Data Map**
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-collections.png" alt-text="Screenshot that shows selections for assigning access control permissions to the collection hierarchy.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map.":::
+1. Go to the appropriate collection under **Sources**, and then select the **Register** icon to register a new SQL database.
-1. Create the [Collection hierarchy](./quickstart-create-collection.md) using the **Collections** menu and assign permissions to individual subcollections, as required
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-data-source.png" alt-text="Screenshot that shows the collection that's used to register the data source.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-collections.png" alt-text="Screenshot that shows the collection menu to assign access control permissions to the collection hierarchy.":::
+1. Select the **Azure SQL Database** data source, and then select **Continue**.
-1. Navigate to the appropriate collection under the **Sources** menu and select the **Register** icon to register a new Azure SQL DB
+1. For **Name**, provide a suitable name for the data source. Select relevant names for **Azure subscription**, **Server name**, and **Select a collection**, and then select **Apply**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-data-source.png" alt-text="Screenshot that shows the collection used to register the data source.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-ds-details.png" alt-text="Screenshot that shows details entered to register a data source.":::
-1. Select the **Azure SQL Database** data source and select **Continue**
+1. Confirm that the SQL database appears under the selected collection.
-1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, **Server name** for the SQL server and the **collection** and select on **Apply**
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-ds-collections.png" alt-text="Screenshot that shows a data source mapped to a collection to initiate scanning.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-ds-details.png" alt-text="Screenshot that shows the details to be entered in order to register the data source.":::
+## Update firewall settings
-1. The Azure SQL Server Database will be shown under the selected Collection
+If your database server has a firewall enabled, you need to update the firewall to allow access in one of the following ways:
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-ds-collections.png" alt-text="Screenshot that shows the data source mapped to the collection to initiate scanning.":::
+- [Allow Azure connections through the firewall](#allow-azure-connections). This is a straightforward option to route traffic through Azure networking, without needing to manage virtual machines.
+- [Install a self-hosted integration runtime on a machine in your network and give it access through the firewall](#install-a-self-hosted-integration-runtime). If you have a private virtual network set up within Azure, or have any other closed network set up, using a self-hosted integration runtime on a machine within that network will allow you to fully manage traffic flow and utilize your existing network.
+- [Use a managed virtual network](catalog-managed-vnet.md). Setting up a managed virtual network with your Microsoft Purview account will allow you to connect to Azure SQL by using the Azure integration runtime in a closed network.
-## Scan
+For more information about the firewall, see the [Azure SQL Database firewall documentation](/azure/azure-sql/database/firewall-configure).
-> [!TIP]
-> To troubleshoot any issues with scanning:
-> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
-> 1. Check network by confirming [firewall](#firewall-settings), [Azure connections](#allow-azure-connections), or [integration runtime](#self-hosted-integration-runtime) settings.
-> 1. Confirm [authentication](#authentication-for-a-scan) is properly set up.
-> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+### Allow Azure connections
-### Firewall settings
+Enabling Azure connections will allow Microsoft Purview to connect to the server without requiring you to update the firewall itself.
-If your database server has a firewall enabled, you'll need to update the firewall to allow access in one of two ways:
+1. Go to your database account.
+1. On the **Overview** page, select the server name.
+1. Select **Security** > **Firewalls and virtual networks**.
+1. For **Allow Azure services and resources to access this server**, select **Yes**.
-1. [Allow Azure connections through the firewall](#allow-azure-connections) - a straightforward option to route traffic through Azure networking, without needing to manage virtual machines.
-1. [Install a Self-Hosted Integration Runtime on a machine in your network and give it access through the firewall](#self-hosted-integration-runtime) - if you have a private VNet set up within Azure, or have any other closed network set up, using a self-hosted integration runtime on a machine within that network will allow you to fully manage traffic flow and utilize your existing network.
-1. [Use a managed virtual network](catalog-managed-vnet.md) - setting up a managed virtual network with your Microsoft Purview account will allow you to connect to Azure SQL using the Azure integration runtime in a closed network.
-For more information about the Azure SQL Firewall, see the [SQL Database firewall documentation.](/azure/azure-sql/database/firewall-configure) To connect Microsoft Purview through the firewall, follow the steps below.
+For more information about allowing connections from inside Azure, see the [how-to guide](/azure/azure-sql/database/firewall-configure#connections-from-inside-azure).
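If you prefer to script this setting rather than use the portal, the equivalent server-level firewall rule can be created with T-SQL. This is a minimal sketch, assuming you connect to the `master` database of the logical server as a server administrator; the special `0.0.0.0` range is how the server represents **Allow Azure services and resources to access this server**.

```sql
-- Run in the master database of the logical server.
-- The 0.0.0.0 start/end range corresponds to
-- "Allow Azure services and resources to access this server".
EXECUTE sp_set_firewall_rule
    @name = N'AllowAllWindowsAzureIps',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '0.0.0.0';
```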
-#### Allow Azure Connections
+### Install a self-hosted integration runtime
-Enabling Azure connections will allow Microsoft Purview to reach and connect the server without updating the firewall itself. You can follow the How-to guide for [Connections from inside Azure](/azure/azure-sql/database/firewall-configure#connections-from-inside-azure).
+You can install a self-hosted integration runtime on a machine to connect with a resource in a private network:
-1. Navigate to your database account
-1. Select the server name in the **Overview** page
-1. Select **Security > Firewalls and virtual networks**
-1. Select **Yes** for **Allow Azure services and resources to access this server**
+1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or on a machine inside the same virtual network as your database server.
+1. Check your database server's networking configuration to confirm that a private endpoint is accessible to the machine that contains the self-hosted integration runtime. Add the IP address of the machine if it doesn't already have access.
+1. If your logical server is behind a private endpoint or in a virtual network, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation.
-#### Self-Hosted Integration Runtime
+## Configure authentication for a scan
-A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network.
+To scan your data source, you need to configure an authentication method in Azure SQL Database.
-1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or a machine inside the same VNet as your database server.
-1. Check your database server networking configuration to confirm that there's a private endpoint accessible to the SHIR machine. Add the IP of the machine if it doesn't already have access.
-1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation.
+>[!IMPORTANT]
+> If you're using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities won't work. You need to use service principal authentication or SQL authentication.
-### Authentication for a scan
+Microsoft Purview supports the following options:
-To scan your data source, you'll need to configure an authentication method in the Azure SQL Database.
+* **System-assigned managed identity (SAMI)** (recommended). This is an identity that's associated directly with your Microsoft Purview account. It allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set.
->[!IMPORTANT]
-> If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, **system-assigned and user-assigned managed identities will not work**. You need to use Service Principal authentication or SQL authentication.
+ The SAMI is created when your Microsoft Purview resource is created. It's managed by Azure and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL.
+
+ For more information, see the [managed identity overview](../active-directory/managed-identities-azure-resources/overview.md).
-The following options are supported:
+* **User-assigned managed identity (UAMI)** (preview). Similar to a SAMI, a UAMI is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory (Azure AD).
-* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](../active-directory/managed-identities-azure-resources/overview.md).
+ The UAMI is managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL.
+
+ For more information, see the [guide for user-assigned managed identities](manage-credentials.md#create-a-user-assigned-managed-identity).
-* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory. The **user-assigned** managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see our [guide for user-assigned managed identities.](manage-credentials.md#create-a-user-assigned-managed-identity)
+* **Service principal**. A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Authentication for service principals has an expiration date, so it can be useful for temporary projects.
-* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](../active-directory/develop/app-objects-and-service-principals.md).
+ For more information, see the [service principal documentation](../active-directory/develop/app-objects-and-service-principals.md).
-* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](/azure/azure-sql/database/connect-query-portal), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql)
- > [!NOTE]
- > Be sure to select the Azure SQL Database option on the page.
+* **SQL authentication**. Connect to the SQL database with a username and password. For more information, see the [SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication).
+
+ If you need to create a login, follow [this guide to query a SQL database](/azure/azure-sql/database/connect-query-portal). Use [this guide to create a login by using T-SQL](/sql/t-sql/statements/create-login-transact-sql).
+
+ > [!NOTE]
+ > Be sure to select the **Azure SQL Database** option on the page.
-Select your chosen method of authentication from the tabs below for steps to authenticate with your Azure SQL Database.
+For steps to authenticate with your SQL database, select your chosen method of authentication from the following tabs.
# [SQL authentication](#tab/sql-authentication) > [!Note]
-> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after granting permission, the Microsoft Purview account should have the appropriate permissions to be able to scan the resource(s).
+> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. The Microsoft Purview account should be able to scan the resources about 15 minutes after it gets permissions.
-1. You'll need a SQL login with at least `db_datareader` permissions to be able to access the information Microsoft Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign-in for Azure SQL Database. You'll need to save the **username** and **password** for the next steps.
+1. You need a SQL login with at least `db_datareader` permissions to be able to access the information that Microsoft Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign-in for Azure SQL Database. Save the username and password for the next steps.
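   A minimal T-SQL sketch of that setup follows, assuming a hypothetical login named `purview-reader`; the login name, password, and target database are placeholders you'd replace with your own values.

   ```sql
   -- In the master database: create the login that Microsoft Purview will use.
   CREATE LOGIN [purview-reader] WITH PASSWORD = '<strong password>';

   -- In the database you plan to scan: map the login to a database user.
   CREATE USER [purview-reader] FOR LOGIN [purview-reader];

   -- Grant read access so the scan can read metadata and sample data for classification.
   ALTER ROLE db_datareader ADD MEMBER [purview-reader];
   ```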
-1. Navigate to your key vault in the Azure portal.
+1. Go to your key vault in the Azure portal.
-1. Select **Settings > Secrets** and select **+ Generate/Import**
+1. Select **Settings** > **Secrets**, and then select **+ Generate/Import**.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret.":::
-1. Enter the **Name** and **Value** as the *password* from your Azure SQL Database.
+1. For **Name** and **Value**, use the username and password (respectively) from your SQL database.
-1. Select **Create** to complete
+1. Select **Create**.
-1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault isn't connected to Microsoft Purview yet, [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan.
+1. [Create a new credential](manage-credentials.md#create-a-new-credential) by using the key to set up your scan.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to set up credentials.":::
Select your chosen method of authentication from the tabs below for steps to aut
# [Managed identity](#tab/managed-identity) >[!IMPORTANT]
-> If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities will not work. You need to use SQL Authentication or Service Principal Authentication.
+> If you're using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities won't work. You need to use SQL authentication or service principal authentication.
-##### Configure Azure AD authentication in the database account
+### Configure Azure AD authentication in the database account
The managed identity needs permission to get metadata for the database, schemas, and tables. It must also be authorized to query the tables to sample for classification. -- If you haven't already, [configure Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)-- Create Azure AD user in Azure SQL Database with the exact Microsoft Purview's managed identity by following tutorial on [create the user in Azure SQL Database](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database). Assign proper permission (for example: `db_datareader`) to the identity. Example SQL syntax to create user and grant permission:
+1. If you haven't already, [configure Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure).
+1. Create an Azure AD user in Azure SQL Database with the exact managed identity from Microsoft Purview. Follow the steps in [Create the service principal user in Azure SQL Database](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database).
+1. Assign proper permission (for example: `db_datareader`) to the identity. Here's example SQL syntax to create the user and grant permission:
```sql
CREATE USER [Username] FROM EXTERNAL PROVIDER
The managed identity needs permission to get metadata for the database, schemas,
```
> [!Note]
- > The `Username` is your Microsoft Purview's managed identity name. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
+ > The `[Username]` value is your managed identity name from Microsoft Purview. You can [read more about fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
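For example, if the Microsoft Purview account were named `contoso-purview` (a hypothetical name), the complete statements might look like this sketch:

```sql
-- Create a database user mapped to the Microsoft Purview managed identity (Azure AD).
CREATE USER [contoso-purview] FROM EXTERNAL PROVIDER;
GO

-- Grant read access so the scan can read metadata and sample data for classification.
ALTER ROLE db_datareader ADD MEMBER [contoso-purview];
GO
```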
-##### Configure Portal Authentication
+### Configure portal authentication
-It's important to give your Microsoft Purview account's system-managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the Azure SQL DB. You can add the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on the breadth of the scan.
+It's important to give your Microsoft Purview account's system-assigned managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the SQL database. You can add the SAMI or UAMI at the subscription, resource group, or resource level, depending on the breadth of the scan.
> [!Note]
-> You need to be an owner of the subscription to be able to add a managed identity on an Azure resource.
+> To add a managed identity on an Azure resource, you need to be an owner of the subscription.
-1. From the [Azure portal](https://portal.azure.com), find either the subscription, resource group, or resource (for example, an Azure SQL Database) that the catalog should scan.
+1. From the [Azure portal](https://portal.azure.com), find the subscription, resource group, or resource (for example, a SQL database) that the catalog should scan.
-1. Select **Access Control (IAM)** in the left navigation and then select **+ Add** --> **Add role assignment**
+1. Select **Access control (IAM)** on the left menu, and then select **+ Add** > **Add role assignment**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-ds.png" alt-text="Screenshot that shows the Azure SQL database.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-ds.png" alt-text="Screenshot that shows selections for adding a role assignment for access control.":::
-1. Set the **Role** to **Reader** and enter your _Microsoft Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under **Select** input box. Then, select **Save** to give this role assignment to your Microsoft Purview account.
+1. Set **Role** to **Reader**. In the **Select** box, enter your Microsoft Purview account name or UAMI. Then, select **Save** to give this role assignment to your Microsoft Purview account.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-access-managed-identity.png" alt-text="Screenshot that shows the details to assign permissions for the Microsoft Purview account."::: # [Service principal](#tab/service-principal)
-##### Creating a new service principal
+### Create a new service principal
-If you don't have a service principal, you can [follow the service principal guide to create one.](./create-service-principal-azure.md)
+If you don't have a service principal, you can follow the [service principal guide](./create-service-principal-azure.md) to create one.
> [!NOTE]
-> To create a service principal, it's required to register an application in your Azure AD tenant. If you do not have access to do this, your Azure AD Global Administrator, or other roles like Application Administrator can perform this operation.
+> To create a service principal, you must register an application in your Azure AD tenant. If you don't have the required access, your Azure AD Global Administrator or Application Administrator can perform this operation.
-##### Granting the Service Principal access to your Azure SQL Database
+### Grant the service principal access to your SQL database
The service principal needs permission to get metadata for the database, schemas, and tables. It must also be authorized to query the tables to sample for classification. -- If you haven't already, [configure Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)-- Create Azure AD user in Azure SQL Database with your service principal by following tutorial on [Create the service principal user in Azure SQL Database](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database). Assign proper permission (for example: `db_datareader`) to the identity. Example SQL syntax to create user and grant permission:
+1. If you haven't already, [configure Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure).
+1. Create an Azure AD user in Azure SQL Database with your service principal. Follow the steps in [Create the service principal user in Azure SQL Database](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database).
+1. Assign proper permission (for example: `db_datareader`) to the identity. Here's example SQL syntax to create the user and grant permission:
```sql
CREATE USER [Username] FROM EXTERNAL PROVIDER
The service principal needs permission to get metadata for the database, schemas
```
> [!Note]
- > The `Username` is your own service principal's name. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
+ > The `[Username]` value is your own service principal's name. You can [read more about fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
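If you want to confirm that the user was created, one quick check (a sketch) is to list the Azure AD-based principals in the database; the service principal's user should appear in the results:

```sql
-- List external (Azure AD) users and groups in the current database.
SELECT name, type_desc, create_date
FROM sys.database_principals
WHERE type IN ('E', 'X');   -- E = external user, X = external group
```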
-##### Create the credential
+### Create the credential
-1. Navigate to your key vault in the Azure portal
+1. Go to your key vault in the Azure portal.
-1. Select **Settings > Secrets** and select **+ Generate/Import**
+1. Select **Settings** > **Secrets**, and then select **+ Generate/Import**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret for Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret for a service principal.":::
-1. Give the secret a **Name** of your choice.
+1. For **Name**, give the secret a name of your choice.
-1. The secret's **Value** will be the Service Principal's **Secret Value**. If you've already created a secret for your service principal, you can find its value in **Client credentials** on your secret's overview page.
+1. For **Value**, use the service principal's secret value. If you've already created a secret for your service principal, you can find its value in **Client credentials** on your secret's overview page.
If you need to create a secret, you can follow the steps in the [service principal guide](create-service-principal-azure.md#adding-a-secret-to-the-client-credentials).
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-client-credentials.png" alt-text="Screenshot that shows the Client credentials for the Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-client-credentials.png" alt-text="Screenshot that shows the client credentials for a service principal.":::
1. Select **Create** to create the secret.
-1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault isn't connected to Microsoft Purview yet, [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
-1. Then, [create a new credential](manage-credentials.md#create-a-new-credential).
+1. [Create a new credential](manage-credentials.md#create-a-new-credential).
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to add a credential for Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to add a credential for a service principal.":::
-1. The **Service Principal ID** will be the **Application ID** of your service principal. The **Secret name** will be the name of the secret you created in the previous steps.
+1. For **Service Principal ID**, use the application (client) ID of your service principal.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-appln-id.png" alt-text="Screenshot that shows the Application (client) ID for the Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-appln-id.png" alt-text="Screenshot that shows the application ID for a service principal.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-cred.png" alt-text="Screenshot that shows the key vault option to create a secret for Service Principal.":::
+1. For **Secret name**, use the name of the secret that you created in previous steps.
+
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-cred.png" alt-text="Screenshot that shows the key vault option to create a secret for a service principal.":::
-### Creating the scan
+## Create the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal**
-1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
-1. Select the **New Scan** icon under the **Azure SQL DB** registered earlier
+1. Open your Microsoft Purview account and select **Open Microsoft Purview governance portal**.
+1. Go to **Data map** > **Sources** to view the collection hierarchy.
+1. Select the **New Scan** icon under the SQL database that you registered earlier.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-new-scan.png" alt-text="Screenshot that shows the screen to create a new scan.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-new-scan.png" alt-text="Screenshot that shows the pane for creating a new scan.":::
-Navigate to [lineage](#lineagepreview) section to learn more about data lineage from Azure SQL DB
+To learn more about data lineage in Azure SQL Database, see the [Extract lineage (preview)](#extract-lineage-preview) section of this article.
-Select your method of authentication from the tabs below for scanning steps.
+For scanning steps, select your method of authentication from the following tabs.
# [SQL authentication](#tab/sql-authentication)
-1. Provide a **Name** for the scan, select **Database selection method** as _Enter manually_, enter the **Database name** and the **Credential** created earlier, choose the appropriate collection for the scan and select **Test connection** to validate the connection. Once the connection is successful, select **Continue**
+1. For **Name**, provide a name for the scan.
+
+1. For **Database selection method**, select **Enter manually**.
+
+1. For **Database name** and **Credential**, enter the values that you created earlier.
+
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-auth.png" alt-text="Screenshot that shows database and credential information for the SQL authentication option to run a scan.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-auth.png" alt-text="Screenshot that shows the SQL Authentication option for scanning.":::
+1. For **Select a collection**, choose the appropriate collection for the scan.
+
+1. Select **Test connection** to validate the connection. After the connection is successful, select **Continue**.
# [Managed identity](#tab/managed-identity)
-1. Provide a **Name** for the scan, select the SAMI or UAMI under **Credential**, choose the appropriate collection for the scan.
+1. For **Name**, provide a name for the scan.
+
+1. Select the SAMI or UAMI under **Credential**, and choose the appropriate collection for the scan.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-managed-id.png" alt-text="Screenshot that shows the managed identity option to run the scan.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-managed-id.png" alt-text="Screenshot that shows credential and collection information for the managed identity option to run a scan.":::
-1. Select **Test connection**. On a successful connection, select **Continue**
+1. Select **Test connection**. After the connection is successful, select **Continue**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-test.png" alt-text="Screenshot that allows the managed identity option to run the scan.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-test.png" alt-text="Screenshot that shows the message for a successful connection for the managed identity option to run a scan.":::
# [Service principal](#tab/service-principal)
-1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select the **Credential** dropdown to select the credential created earlier.
+1. For **Name**, provide a name for the scan.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp.png" alt-text="Screenshot that shows the option for service principal to enable scanning.":::
+1. Choose the appropriate collection for the scan, and select the credential that you created earlier under **Credential**.
-1. Select **Test connection**. On a successful connection, select **Continue**.
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp.png" alt-text="Screenshot that shows collection and credential information for the service principal option to enable scanning.":::
+
+1. Select **Test connection**. After the connection is successful, select **Continue**.
-### Scoping and running the scan
+## Scope and run the scan
1. You can scope your scan to specific database objects by choosing the appropriate items in the list.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scope-scan.png" alt-text="Scope your scan.":::
-
-1. Then select a scan rule set. You can choose between the system default, existing custom rule sets, or create a new rule set inline.
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scope-scan.png" alt-text="Screenshot that shows options for scoping a scan.":::
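If it helps to see what the database contains before you scope the scan, you can list the user tables and views directly. This is a sketch you could run against the database with any query tool:

```sql
-- List schemas, tables, and views so you can decide what to include in the scan scope.
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE o.type IN ('U', 'V')   -- U = user table, V = view
ORDER BY s.name, o.name;
```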
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scan-rule-set.png" alt-text="Scan rule set.":::
+1. Select a scan rule set. You can use the system default, choose from existing custom rule sets, or create a new rule set inline. Select **Continue** when you're finished.
-1. If creating a new _scan rule set_
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scan-rule-set.png" alt-text="Screenshot that shows options for selecting a scan rule set.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-new-scan-rule-set.png" alt-text="New Scan rule set.":::
+ If you select **New scan rule set**, a pane opens so that you can enter the source type, the name of the rule set, and a description. Select **Continue** when you're finished.
-1. You can select the **classification rules** to be included in the scan rule
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-new-scan-rule-set.png" alt-text="Screenshot that shows information for creating a new scan rule set.":::
+
+ For **Select classification rules**, choose the classification rules that you want to include in the scan rule set, and then select **Create**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-classification.png" alt-text="Scan rule set classification rules.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-classification.png" alt-text="Screenshot that shows a list of classification rules for a scan rule set.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sel-scan-rule.png" alt-text="Scan rule set selection.":::
+ The new scan rule set then appears in the list of available rule sets.
+
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sel-scan-rule.png" alt-text="Screenshot that shows the selection of a new scan rule set.":::
1. Choose your scan trigger. You can set up a schedule or run the scan once.
-1. Review your scan and select **Save and run**.
+1. Review your scan, and then select **Save and run**.
+
+### View a scan
+
+To check the status of a scan, go to the data source in the collection, and then select **View details**.
+
-### View Scan
+The scan details indicate the progress of the scan in **Last run status**, along with the number of assets scanned and classified.
+**Last run status** is updated to **In progress** and then **Completed** after the entire scan has run successfully.
-1. Navigate to the _data source_ in the _Collection_ and select **View Details** to check the status of the scan
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-view-scan.png" alt-text="view scan.":::
+### Manage a scan
-1. The scan details indicate the progress of the scan in the **Last run status** and the number of assets _scanned_ and _classified_
+After you run a scan, you can use the run history to manage it:
-1. The **Last run status** will be updated to **In progress** and then **Completed** once the entire scan has run successfully
+1. Under **Recent scans**, select a scan.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-scan-complete.png" alt-text="view scan completed.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-manage scan.png" alt-text="Screenshot that shows the selection of a recently completed scan.":::
-### Manage Scan
+1. In the run history, you have options for running the scan again, editing it, or deleting it.
-Scans can be managed or run again on completion
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-manage-scan-options.png" alt-text="Screenshot that shows options for running, editing, and deleting a scan.":::
-1. Select the **Scan name** to manage the scan
+ If you select **Run scan now** to rerun the scan, you can then choose either **Incremental scan** or **Full scan**.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-manage scan.png" alt-text="manage scan.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-full-inc.png" alt-text="Screenshot that shows options for full or incremental scan.":::
-1. You can _run the scan_ again, _edit the scan_, _delete the scan_
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-manage-scan-options.png" alt-text="manage scan options.":::
+### Troubleshoot scanning
-1. You can _run an incremental scan_ or a _full scan_ again
+If you have problems with scanning, try these tips:
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-full-inc.png" alt-text="full or incremental scan.":::
+- Confirm that you followed all [prerequisites](#prerequisites).
+- Check the network by confirming [firewall](#update-firewall-settings), [Azure connections](#allow-azure-connections), or [integration runtime](#install-a-self-hosted-integration-runtime) settings.
+- Confirm that [authentication](#configure-authentication-for-a-scan) is properly set up.
-## Access policy
+For more information, review [Troubleshoot your connections in Microsoft Purview](troubleshoot-connections.md).
+
+## Set up access policies
-### Supported policies
The following types of policies are supported on this data resource from Microsoft Purview: - [DevOps policies](concept-policies-devops.md) - [Data owner policies](concept-policies-data-owner.md)-- [self-service policies](concept-self-service-data-access-policy.md)
+- [Self-service policies](concept-self-service-data-access-policy.md)
+
+### Access policy prerequisites on Azure SQL Database
-### Access policy pre-requisites on Azure SQL Database
### Configure the Microsoft Purview account for policies+ [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)] ### Register the data source and enable Data use management
-The Azure SQL Database resource needs to be registered first with Microsoft Purview before you can create access policies.
-To register your resources, follow the **Prerequisites** and **Register** sections of this guide:
-[Register Azure SQL Database in Microsoft Purview](./register-scan-azure-sql-database.md#prerequisites)
-After you've registered the data source, you'll need to enable Data Use Management. This is a pre-requisite before you can create policies on the data source. Data Use Management can impact the security of your data, as it delegates to certain Microsoft Purview roles managing access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+The Azure SQL Database resource needs to be registered with Microsoft Purview before you can create access policies. To register your resources, follow the [Prerequisites](#prerequisites) and [Register the data source](#register-the-data-source) sections earlier in this article.
+
+After you register the data source, you need to enable **Data use management**. This is a prerequisite before you can create policies on the data source. **Data use management** can affect the security of your data, because it delegates to certain Microsoft Purview roles that manage access to the data sources. Go through the security practices in [Enable Data use management on your Microsoft Purview sources](./how-to-enable-data-use-management.md).
-Once your data source has the **Data Use Management** option *Enabled*, it will look like this screenshot.
-![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png)
+After your data source has the **Data use management** option set to **Enabled**, it will look like this screenshot:
+
+![Screenshot that shows the panel for registering a data source for a policy, including areas for name, server name, and data use management.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png)
### Create a policy+ To create an access policy for Azure SQL Database, follow these guides:
-* [DevOps policy on a single Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#create-a-new-devops-policy)
-* [Data owner policy on a single Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure SQL Database account in your subscription.
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The pre-requisite is that the subscription or resource group is registered with the Data use management option enabled.
-* [self-service policy for Azure SQL Database](./how-to-policies-self-service-azure-sql-db.md) - This guide will allow data consumers to request access to data assets using self-service workflow.
-## Lineage (Preview)
+* [Provision access to system metadata in Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#create-a-new-devops-policy). Use this guide to apply a DevOps policy on a single SQL database.
+* [Provision access by data owner for Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy). Use this guide to provision access on a single SQL database account in your subscription.
+* [Resource group and subscription access provisioning by data owner](./how-to-policies-data-owner-resource-group.md). Use this guide to provision access on all enabled data sources in a resource group or across an Azure subscription. The prerequisite is that the subscription or resource group must be registered with the **Data use management** option enabled.
+* [Self-service policies for Azure SQL Database](./how-to-policies-self-service-azure-sql-db.md). Use this guide to allow data consumers to request access to data assets by using a self-service workflow.
+
+## Extract lineage (preview)
<a id="lineagepreview"></a>
-Microsoft Purview supports lineage from Azure SQL Database. At the time of setting up a scan, enable lineage extraction toggle button to extract lineage.
+Microsoft Purview supports lineage from Azure SQL Database. When you're setting up a scan, you turn on the **Lineage extraction** toggle to extract lineage.
+
+### Prerequisites for setting up a scan with lineage extraction
-### Prerequisites for setting up scan with Lineage extraction
+1. Follow the steps in the [Configure authentication for a scan](#configure-authentication-for-a-scan) section of this article to authorize Microsoft Purview to scan your SQL database.
-1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL Database
+2. Sign in to Azure SQL Database with your Azure AD account, and assign `db_owner` permissions to the Microsoft Purview managed identity.
-2. Sign in to Azure SQL Database with Azure AD account and assign db_owner permissions to the Microsoft Purview Managed identity. Use below example SQL syntax to create user and grant permission by replacing 'purview-account' with your Account name:
+ Use the following example SQL syntax to create a user and grant permission. Replace `<purview-account>` with your account name.
```sql
Create user <purview-account> FROM EXTERNAL PROVIDER
Microsoft Purview supports lineage from Azure SQL Database. At the time of setti
EXEC sp_addrolemember 'db_owner', <purview-account>
GO
```
-3. Run below command on your Azure SQL Database to create master Key
+3. Run the following command on your SQL database to create a master key:
```sql
Create master key
Go
```
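To confirm that the key exists before you run the scan, you can query for it. This is a sketch; `##MS_DatabaseMasterKey##` is the fixed name Azure SQL uses for the database master key:

```sql
-- Returns a row if the database master key has already been created.
SELECT name, create_date
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##';
```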
-### Creating scan with lineage extraction toggle turned on
+### Create a scan with lineage extraction turned on
-1. Enable lineage extraction toggle in the scan screen
+1. On the pane for setting up a scan, turn on the **Enable lineage extraction** toggle.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction.png" alt-text="Screenshot that shows the screen to create a new scan with lineage extraction." lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-expanded.png":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction.png" alt-text="Screenshot that shows the pane for creating a new scan, with lineage extraction turned on." lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-expanded.png":::
-2. Select your method of authentication by following steps in the [scan section](#creating-the-scan)
-3. Once the scan is successfully set up from previous step, a new scan type called **Lineage extraction** will run incremental scans every 6 hours to extract lineage from Azure SQL Database. Lineage is extracted based on the actual stored procedure runs in the Azure SQL Database
+2. Select your method of authentication by following the steps in the [Create the scan](#create-the-scan) section of this article.
+3. After you successfully set up the scan, a new scan type called **Lineage extraction** will run incremental scans every six hours to extract lineage from Azure SQL Database. Lineage is extracted based on the stored procedure runs in the SQL database.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs.png" alt-text="Screenshot that shows the screen that runs lineage extraction every 6 hours."lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs-expanded.png":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs.png" alt-text="Screenshot that shows the screen that runs lineage extraction every six hours." lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage-extraction-runs-expanded.png":::
### Search Azure SQL Database assets and view runtime lineage
-You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. The following steps describe how-to view runtime lineage details.
+You can [browse through the data catalog](how-to-browse-catalog.md) or [search the data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. The following steps describe how to view runtime lineage details:
-1. Go to asset -> lineage tab, you can see the asset lineage when applicable. Refer to the [supported capabilities](#supported-capabilities) section on the supported Azure SQL Database lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md).
+1. Go to the **Lineage** tab for the asset. When applicable, the asset lineage appears here.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage.png" alt-text="Screenshot that shows the screen with lineage from stored procedures.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage.png" alt-text="Screenshot that shows lineage details from stored procedures.":::
-2. Go to stored procedure asset -> Properties -> Related assets to see the latest run details of stored procedures.
+ For information about supported Azure SQL Database lineage scenarios, refer to the [Supported capabilities](#supported-capabilities) section of this article. For more information about lineage in general, see [Data lineage in Microsoft Purview](concept-data-lineage.md) and [Microsoft Purview Data Catalog lineage user guide](catalog-lineage-user-guide.md).
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-properties.png" alt-text="Screenshot that shows the screen with stored procedure properties containing runs.":::
+2. Go to the stored procedure asset. On the **Properties** tab, go to **Related assets** to get the latest run details of stored procedures.
-3. Select the stored procedure hyperlink next to Runs to see Azure SQL Stored Procedure Run Overview. Go to properties tab to see enhanced run time information from stored procedure. For example: executedTime, rowcount, Client Connection, and so on.
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-properties.png" alt-text="Screenshot that shows run details for stored procedure properties.":::
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties.png" alt-text="Screenshot that shows the screen with stored procedure run properties."lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties-expanded.png":::
+3. Select the stored procedure hyperlink next to **Runs** to see the **Azure SQL Stored Procedure Run** overview. Go to the **Properties** tab to see enhanced runtime information from the stored procedure, such as **executedTime**, **rowCount**, and **Client Connection**.
-### Troubleshooting steps
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties.png" alt-text="Screenshot that shows run properties for a stored procedure." lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties-expanded.png":::
-* If no lineage is captured after a successful **Lineage extraction** run, it's possible that no stored procedures have run at least once since the scan is set up.
-* Lineage is captured for stored procedure runs that happened after a successful scan is set up. Lineage from past Stored procedure runs isn't captured.
-* If your database is processing heavy workloads with lots of stored procedure runs, lineage extraction will filter only the most recent runs. Stored procedure runs early in the 6 hour window or the run instances that create heavy query load won't be extracted. Contact support if you're missing lineage from any stored procedure runs.
+### Troubleshoot lineage extraction
+
+The following tips can help you solve problems related to lineage:
+
+* If no lineage is captured after a successful **Lineage extraction** run, it's possible that no stored procedures have run at least once since you set up the scan.
+* Lineage is captured for stored procedure runs that happen after a successful scan is set up. Lineage from past stored procedure runs isn't captured.
+* If your database is processing heavy workloads with lots of stored procedure runs, lineage extraction will filter only the most recent runs. Stored procedure runs early in the six-hour window, or the run instances that create heavy query load, won't be extracted. Contact support if you're missing lineage from any stored procedure runs.
## Next steps
-Follow the below guides to learn more about Microsoft Purview and your data.
-- [DevOps policies in Microsoft Purview](concept-policies-devops.md)-- [Data Estate Insights in Microsoft Purview](concept-insights.md)-- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)-- [Search Data Catalog](how-to-search-catalog.md)
+To learn more about Microsoft Purview and your data, use these guides:
+
+- [Concepts for Microsoft Purview DevOps policies](concept-policies-devops.md)
+- [Understand the Microsoft Purview Data Estate Insights application](concept-insights.md)
+- [Microsoft Purview Data Catalog lineage user guide](catalog-lineage-user-guide.md)
+- [Search the Microsoft Purview Data Catalog](how-to-search-catalog.md)
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
| Product/Service | Article | | | - | | **Web Application** | <ul><li>[Consider using a standard authentication mechanism to authenticate to Web Application](#standard-authn-web-app)</li><li>[Applications must handle failed authentication scenarios securely](#handle-failed-authn)</li><li>[Enable step up or adaptive authentication](#step-up-adaptive-authn)</li><li>[Ensure that administrative interfaces are appropriately locked down](#admin-interface-lockdown)</li><li>[Implement forgot password functionalities securely](#forgot-pword-fxn)</li><li>[Ensure that password and account policy are implemented](#pword-account-policy)</li><li>[Implement controls to prevent username enumeration](#controls-username-enum)</li></ul> |
-| **Database** | <ul><li>[When possible, use Windows Authentication for connecting to SQL Server](#win-authn-sql)</li><li>[When possible use Azure Active Directory Authentication for Connecting to SQL Database](#aad-authn-sql)</li><li>[When SQL authentication mode is used, ensure that account and password policy are enforced on SQL server](#authn-account-pword)</li><li>[Do not use SQL Authentication in contained databases](#autn-contained-db)</li></ul> |
+| **Database** | <ul><li>[When possible, use Windows Authentication for connecting to SQL Server](#win-authn-sql)</li><li>[When possible use Azure Active Directory Authentication for Connecting to SQL Database](#aad-authn-sql)</li><li>[When SQL authentication mode is used, ensure that account and password policy are enforced on SQL server](#authn-account-pword)</li><li>[Don't use SQL Authentication in contained databases](#autn-contained-db)</li></ul> |
| **Azure Event Hub** | <ul><li>[Use per device authentication credentials using SaS tokens](#authn-sas-tokens)</li></ul> | | **Azure Trust Boundary** | <ul><li>[Enable Azure AD Multi-Factor Authentication for Azure Administrators](#multi-factor-azure-admin)</li></ul> | | **Service Fabric Trust Boundary** | <ul><li>[Restrict anonymous access to Service Fabric Cluster](#anon-access-cluster)</li><li>[Ensure that Service Fabric client-to-node certificate is different from node-to-node certificate](#fabric-cn-nn)</li><li>[Use AAD to authenticate clients to service fabric clusters](#aad-client-fabric)</li><li>[Ensure that service fabric certificates are obtained from an approved Certificate Authority (CA)](#fabric-cert-ca)</li></ul> |
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| Details | <p>Applications that explicitly authenticate users must handle failed authentication scenarios securely.The authentication mechanism must:</p><ul><li>Deny access to privileged resources when authentication fails</li><li>Display a generic error message after failed authentication and access denied occurs</li></ul><p>Test for:</p><ul><li>Protection of privileged resources after failed logins</li><li>A generic error message is displayed on failed authentication and access denied event(s)</li><li>Accounts are disabled after an excessive number of failed attempts</li><ul>|
+| Details | <p>Applications that explicitly authenticate users must handle failed authentication scenarios securely. The authentication mechanism must:</p><ul><li>Deny access to privileged resources when authentication fails</li><li>Display a generic error message after failed authentication and access denied occurs</li></ul><p>Test for:</p><ul><li>Protection of privileged resources after failed logins</li><li>A generic error message is displayed on failed authentication and access denied event(s)</li><li>Accounts are disabled after an excessive number of failed attempts</li><ul>|
## <a id="step-up-adaptive-authn"></a>Enable step up or adaptive authentication
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| Details | The first solution is to grant access only from a certain source IP range to the administrative interface. If that solution would not be possible than it is always recommended to enforce a step-up or adaptive authentication for logging in into the administrative interface |
+| Details | The first solution is to grant access to the administrative interface only from a certain source IP range. If that isn't possible, it's recommended to enforce step-up or adaptive authentication for signing in to the administrative interface. |
## <a id="forgot-pword-fxn"></a>Implement forgot password functionalities securely
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| Details | <p>The first thing is to verify that forgot password and other recovery paths send a link including a time-limited activation token rather than the password itself. Additional authentication based on soft-tokens (e.g. SMS token, native mobile applications, etc.) can be required as well before the link is sent over. Second, you should not lock out the users account whilst the process of getting a new password is in progress.</p><p>This could lead to a Denial of service attack whenever an attacker decides to intentionally lock out the users with an automated attack. Third, whenever the new password request was set in progress, the message you display should be generalized in order to prevent username enumeration. Fourth, always disallow the use of old passwords and implement a strong password policy.</p> |
+| Details | <p>First, verify that forgot-password and other recovery paths send a link that includes a time-limited activation token rather than the password itself. Additional authentication based on soft tokens (for example, an SMS token or a native mobile application) can also be required before the link is sent. Second, you shouldn't lock out the user's account while the process of getting a new password is in progress.</p><p>Locking the account could lead to a denial-of-service attack if an attacker intentionally locks out users with an automated attack. Third, while a new password request is in progress, the message you display should be generalized to prevent username enumeration. Fourth, always disallow the reuse of old passwords and implement a strong password policy.</p> |
## <a id="pword-account-policy"></a>Ensure that password and account policy are implemented
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| **Steps** | All error messages should be generalized in order to prevent username enumeration. Also sometimes you cannot avoid information leaking in functionalities such as a registration page. Here you need to use rate-limiting methods like CAPTCHA to prevent an automated attack by an attacker. |
+| **Steps** | All error messages should be generalized to prevent username enumeration. Sometimes you can't avoid information leakage in functionality such as a registration page; in those cases, use rate-limiting methods like CAPTCHA to prevent automated attacks. |
## <a id="win-authn-sql"></a>When possible, use Windows Authentication for connecting to SQL Server
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | [SQL Server password policy](/previous-versions/sql/sql-server-2012/ms161959(v=sql.110)) |
-| **Steps** | When using SQL Server Authentication, logins are created in SQL Server that are not based on Windows user accounts. Both the user name and the password are created by using SQL Server and stored in SQL Server. SQL Server can use Windows password policy mechanisms. It can apply the same complexity and expiration policies used in Windows to passwords used inside SQL Server. |
+| **Steps** | When using SQL Server Authentication, logins are created in SQL Server that aren't based on Windows user accounts. Both the user name and the password are created by using SQL Server and stored in SQL Server. SQL Server can use Windows password policy mechanisms. It can apply the same complexity and expiration policies used in Windows to passwords used inside SQL Server. |
## <a id="autn-contained-db"></a>Do not use SQL Authentication in contained databases
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | [What is Azure AD Multi-Factor Authentication?](../../active-directory/authentication/concept-mfa-howitworks.md) |
-| **Steps** | <p>Multi-factor authentication (MFA) is a method of authentication that requires more than one verification method and adds a critical second layer of security to user sign-ins and transactions. It works by requiring any two or more of the following verification methods:</p><ul><li>Something you know (typically a password)</li><li>Something you have (a trusted device that is not easily duplicated, like a phone)</li><li>Something you are (biometrics)</li><ul>|
+| **Steps** | <p>Multi-factor authentication (MFA) is a method of authentication that requires more than one verification method and adds a critical second layer of security to user sign-ins and transactions. It works by requiring any two or more of the following verification methods:</p><ul><li>Something you know (typically a password)</li><li>Something you have (a trusted device that isn't easily duplicated, like a phone)</li><li>Something you are (biometrics)</li><ul>|
## <a id="anon-access-cluster"></a>Restrict anonymous access to Service Fabric Cluster
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | [Token cache serialization in MSAL.NET](../../active-directory/develop/msal-net-token-cache-serialization.md) |
-| **Steps** | <p>The default cache that MSAL (Microsoft Authentication Library) uses is an in-memory cache, and is scalable. However there are different options available that you can use as an alternative, such as a distributed token cache. These have L1/L2 mechanisms, where L1 is in memory and L2 is the distributed cache implementation. These can be accordingly configured to limit L1 memory, encrypt or set eviction policies. Other alternatives include Redis, SQL Server or Azure Comsos DB caches. An implementation of a distributed token cache can be found in the following [Tutorial: Get started with ASP.NET Core MVC](https://learn.microsoft.com/aspnet/core/tutorials/first-mvc-app/start-mvc.md).</p>|
+| **Steps** | <p>The default cache that MSAL (Microsoft Authentication Library) uses is an in-memory cache, and is scalable. However, there are different options available that you can use as an alternative, such as a distributed token cache. Distributed caches have L1/L2 mechanisms, where L1 is in memory and L2 is the distributed cache implementation. These can be configured to limit L1 memory, encrypt the cache, or set eviction policies. Other alternatives include Redis, SQL Server, or Azure Cosmos DB caches. An implementation of a distributed token cache can be found in the following [Tutorial: Get started with ASP.NET Core MVC](/aspnet/core/tutorials/first-mvc-app/start-mvc).</p>|
## <a id="tokenreplaycache-msal"></a>Ensure that TokenReplayCache is used to prevent the replay of MSAL authentication tokens
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
Microsoft Defender for Cloud helps protect your hybrid cloud environment. By per
Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) - the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud centric security.
-Enabling Defender for Cloud's enhanced security features brings advanced, intelligent, protection of your Azure, hybrid and multi-cloud resources and workloads. Learn more in [Microsoft Defender for Cloud's enhanced security features](../../defender-for-cloud/enhanced-security-features-overview.md).
+Enabling Defender for Cloud's enhanced security features brings advanced, intelligent, protection of your Azure, hybrid and multicloud resources and workloads. Learn more in [Microsoft Defender for Cloud's enhanced security features](../../defender-for-cloud/enhanced-security-features-overview.md).
The workload protection dashboard in Defender for Cloud provides visibility and control of the integrated cloud workload protection features provided by a range of **Microsoft Defender** plans:
Researchers also receive threat intelligence information that is shared among ma
### Behavioral analytics
-Behavioral analytics is a technique that analyzes and compares data to a collection of known patterns. However, these patterns are not simple signatures. They are determined through complex machine learning algorithms that are applied to massive datasets.
+Behavioral analytics is a technique that analyzes and compares data to a collection of known patterns. However, these patterns aren't simple signatures. They're determined through complex machine learning algorithms that are applied to massive datasets.
![Behavioral analytics findings](./media/threat-detection/azure-threat-detection-fig11.jpg)
Some examples include:
Microsoft Defender for Cloud also uses anomaly detection to identify threats. In contrast to behavioral analytics (which depends on known patterns derived from large data sets), anomaly detection is more ΓÇ£personalizedΓÇ¥ and focuses on baselines that are specific to your deployments. Machine learning is applied to determine normal activity for your deployments, and then rules are generated to define outlier conditions that could represent a security event. HereΓÇÖs an example: -- **Inbound RDP/SSH brute force attacks**: Your deployments might have busy virtual machines with many logins each day and other virtual machines that have few, if any, logins. Microsoft Defender for Cloud can determine baseline login activity for these virtual machines and use machine learning to define around the normal login activities. If there is any discrepancy with the baseline defined for login related characteristics, an alert might be generated. Again, machine learning determines what is significant.
+- **Inbound RDP/SSH brute force attacks**: Your deployments might have busy virtual machines with many logins each day and other virtual machines that have few, if any, logins. Microsoft Defender for Cloud can determine baseline login activity for these virtual machines and use machine learning to define the normal login activities. If there's any discrepancy with the baseline defined for login-related characteristics, an alert might be generated. Again, machine learning determines what is significant.
### Continuous threat intelligence monitoring
SQL Database threat detectors use one of the following detection methodologies:
- **Deterministic detection**: Detects suspicious patterns (rules based) in the SQL client queries that match known attacks. This methodology has high detection and low false positive, but limited coverage because it falls within the category of ΓÇ£atomic detections.ΓÇ¥ -- **Behavioral detection**: Detects anomalous activity, which is abnormal behavior in the database that was not seen during the most recent 30 days. Examples of SQL client anomalous activity can be a spike of failed logins or queries, a high volume of data being extracted, unusual canonical queries, or unfamiliar IP addresses used to access the database.
+- **Behavioral detection**: Detects anomalous activity, which is abnormal behavior in the database that wasn't seen during the most recent 30 days. Examples of SQL client anomalous activity can be a spike of failed logins or queries, a high volume of data being extracted, unusual canonical queries, or unfamiliar IP addresses used to access the database.
### Application Gateway Web Application Firewall
sentinel Configure Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-transformation.md
Before you start configuring DCRs for data transformation:
| If you are ingesting | Ingestion-time transformation is... | Use this DCR type | | -- | - | -- |
-| **Custom data** through <br>the **DCR-based API** | <li>Required<li>Included in the DCR that defines the data model | Standard DCR |
+| **Custom data** through <br>the [**Log Ingestion API**](../azure-monitor/logs/logs-ingestion-api-overview.md) | <li>Required<li>Included in the DCR that defines the data model | Standard DCR |
| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the legacy **Log Analytics Agent (MMA)** | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR | | **Built-in data types** <br>from most other sources | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR |
Before you start configuring DCRs for data transformation:
Use the following procedures from the Log Analytics and Azure Monitor documentation to configure your data transformation DCRs:
-[Direct ingestion through the DCR-based Custom Logs API](../azure-monitor/logs/logs-ingestion-api-overview.md):
+[Direct ingestion through the Log Ingestion API](../azure-monitor/logs/logs-ingestion-api-overview.md):
- Walk through a tutorial for [ingesting logs using the Azure portal](../azure-monitor/logs/tutorial-logs-ingestion-portal.md). - Walk through a tutorial for [ingesting logs using Azure Resource Manager (ARM) templates and REST API](../azure-monitor/logs/tutorial-logs-ingestion-api.md).
Use the following procedures from the Log Analytics and Azure Monitor documentat
- [Data collection transformations in Azure Monitor (preview)](../azure-monitor/essentials/data-collection-transformations.md)
-When you're done, come back to Microsoft Sentinel to verify that your data is being ingested based on your newly-configured transformation. It make take up to 60 minutes for the data transformation configurations to apply.
+When you're done, come back to Microsoft Sentinel to verify that your data is being ingested based on your newly configured transformation. It may take up to 60 minutes for the data transformation configurations to apply.
## Migrate to ingestion-time data transformation
sentinel Connect Dns Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-dns-ama.md
With the connector, you can:
This connector is fully normalized using [Advanced Security Information Model (ASIM) parsers](normalization.md). The connector streams events originated from the analytical logs into the normalized table named `ASimDnsActivityLogs`. This table acts as a translator, using one unified language, shared across all DNS connectors to come.
-For a source-agnostic parser that unifies all DNS data and ensures that your analysis runs across all configured sources, use the [ASIM DNS unifying parser](normalization-schema-dns.md#unifying-parsers) `_Im_Dns`.
+For a source-agnostic parser that unifies all DNS data and ensures that your analysis runs across all configured sources, use the [ASIM DNS unifying parser](normalization-schema-dns.md#out-of-the-box-parsers) `_Im_Dns`.
The ASIM unifying parser complements the native `ASimDnsActivityLogs` table. While the native table is ASIM compliant, the parser is needed to add capabilities, such as aliases, available only at query time, and to combine `ASimDnsActivityLogs`  with other DNS data sources.
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
You can filter at the record (row) level, by specifying criteria for which recor
- Help to reduce costs, as you reduce storage requirements - Improve performance, as fewer query-time adjustments are needed
-Ingestion-time data transformation supports [multiple-workspace scenarios](extend-sentinel-across-workspaces-tenants.md). You would create separate DCRs for each workspace.
+Ingestion-time data transformation supports [multiple-workspace scenarios](extend-sentinel-across-workspaces-tenants.md).
+
+### Normalization
+
+Ingest-time transformation also lets you normalize logs as they're ingested into built-in or custom ASIM normalized tables. Using ingest-time normalization improves the performance of normalized queries.
+
+For more information on ingest-time normalization using transformations, refer to [Ingest-time normalization](normalization-ingest-time.md).
### Enrichment and tagging
Only the following tables are currently supported for custom log ingestion:
- [**SecurityEvent**](/azure/azure-monitor/reference/tables/securityevent) - [**CommonSecurityLog**](/azure/azure-monitor/reference/tables/commonsecuritylog) - [**Syslog**](/azure/azure-monitor/reference/tables/syslog)-- **ASIMDnsActivityLog**
+- [**ASimDnsActivityLog**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs)
+- [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs)
## Known issues
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following filtering parameters are available:
| **dstipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.| | **ipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) or [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.<br><br>The field [ASimMatchingIpAddr](normalization-common-fields.md#asimmatchingipaddr) is set with the one of the values `SrcIpAddr`, `DstIpAddr`, or `Both` to reflect the matching fields or fields. | | **dstportnum** | Int | Filter only network sessions with the specified destination port number. |
-| **hostname_has_any** | dynamic | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. The length of the list is limited to 10,000 items.<br><br> The field [ASimMatchingHostname](normalization-common-fields.md#asimmatchinghostname) is set with the one of the values `SrcHostname`, `DstHostname`, or `Both` to reflect the matching fields or fields. |
-| **dvcaction** | dynamic | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. |
+| **hostname_has_any** | dynamic/string | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. The length of the list is limited to 10,000 items.<br><br> The field [ASimMatchingHostname](normalization-common-fields.md#asimmatchinghostname) is set with one of the values `SrcHostname`, `DstHostname`, or `Both` to reflect the matching field or fields. |
+| **dvcaction** | dynamic/string | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. |
| **eventresult** | String | Filter only network sessions with a specific **EventResult** value. |
+Some parameters can accept either a list of values of type `dynamic` or a single string value. To pass a literal list to a parameter that expects a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals). For example: `dynamic(['192.168.','10.'])`
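
For illustration, here's a minimal sketch (not from the source article) that invokes the Network Session unifying parser with a few of the parameters listed above. It assumes the unifying parser name `_Im_NetworkSession`; the filter values are illustrative.

```KQL
// A sketch: invoke the Network Session unifying parser with filtering parameters.
// Dynamic literals pass the prefix list; the values shown are examples only.
_Im_NetworkSession(
    starttime = ago(1d),
    ipaddr_has_any_prefix = dynamic(['192.168.', '10.']),
    dstportnum = 443
)
| summarize SessionCount = count() by DstIpAddr
```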
For example, to filter only network sessions for a specified list of domain names, use:
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
The following table lists the available unifying parsers:
| Schema | Unifying parser | | | - |
+| Audit Event | imAuditEvent |
| Authentication | imAuthentication | | Dns | _Im_Dns | | File Event | imFileEvent |
Using parsers may impact your query performance, primarily from filtering the re
When invoking the parser, always use available filtering parameters by adding one or more named parameters to ensure optimal performance of the ASIM parsers. Each schema has a standard set of filtering parameters documented in the relevant schema documentation. Filtering parameters are entirely optional. The following schemas support filtering parameters:
+- [Audit Event](normalization-schema-audit.md)
- [Authentication](authentication-normalization-schema.md) - [DNS](normalization-schema-dns.md#filtering-parser-parameters) - [Network Session](network-normalization-schema.md#filtering-parser-parameters)
Every schema that supports filtering parameters supports at least the `starttime
For an example of using filtering parsers see [Unifying parsers](#unifying-parsers) above.
+## The pack parameter
+
+To ensure efficiency, parsers maintain only normalized fields. Fields that aren't normalized have less value when combined with other sources. Some parsers support the *pack* parameter. When the *pack* parameter is set to `true`, the parser packs the additional source data into the *AdditionalFields* dynamic field.
+
+The [parsers list](normalization-parsers-list.md) article notes parsers which support the *pack* parameter.
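
As an illustration, and assuming a source-specific parser that supports *pack* (such as the Vectra AI Network Session parser noted in that list; replace `Vxx` with the deployed version, for example `V01`), a hedged sketch of using the parameter:

```KQL
// A sketch: keep non-normalized source fields by setting pack=true.
// The packed fields land in the AdditionalFields dynamic column.
_Im_NetworkSession_VectraIAVxx(starttime = ago(1h), pack = true)
| project TimeGenerated, SrcIpAddr, DstIpAddr, AdditionalFields
```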
+ ## Next steps Learn more about ASIM parsers:
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Schema references outline the fields that comprise each schema. ASIM currently d
| [Authentication Event](authentication-normalization-schema.md) | 0.1.2 | Preview | | [DNS Activity](normalization-schema-dns.md) | 0.1.6 | Preview | | [DHCP Activity](dhcp-normalization-schema.md) | 0.1 | Preview |
-| [File Activity](file-event-normalization-schema.md) | 0.2 | Preview |
+| [File Activity](normalization-schema-file-event.md) | 0.2 | Preview |
| [Network Session](normalization-schema.md) | 0.2.5 | Preview | | [Process Event](process-events-normalization-schema.md) | 0.1.4 | Preview | | [Registry Event](registry-event-normalization-schema.md) | 0.1.2 | Preview |
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
The following built-in authentication content is supported for ASIM normalizatio
The following built-in DNS query content is supported for ASIM normalization.
+### Solutions
+
+- [Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview)
+- [Legacy IOC Based Threat Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)
+ ### Analytics rules - [(Preview) TI map Domain entity to DNS Events (ASIM DNS Schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_DomainEntity_DnsEvents.yaml)
The following built-in DNS query content is supported for ASIM normalization.
The following built-in file activity content is supported for ASIM normalization.
-### Analytic Rules
+- [Legacy IOC Based Threat Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)
+
+### Analytics rules
- [SUNBURST and SUPERNOVA backdoor hashes (Normalized File Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimFileEvent/imFileESolarWindsSunburstSupernova.yaml) - [Exchange Server Vulnerabilities Disclosed March 2021 IoC Match](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ExchangeServerVulnerabilitiesMarch2021IoCs.yaml)
The following built-in file activity content is supported for ASIM normalization
The following built-in network session related content is supported for ASIM normalization.
+### Solutions
+
+- [Network Threat Protection Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-networkthreatdetection?tab=Overview)
+- [Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview)
+- [Legacy IOC Based Threat Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)
+ ### Analytics rules - [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Apache%20Log4j%20Vulnerability%20Detection/Analytic%20Rules/Log4J_IPIOC_Dec112021.yaml)
The following built-in network session related content is supported for ASIM nor
- [Known STRONTIUM group domains - July 2019](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/STRONTIUMJuly2019IOCs.yaml) - ### Hunting queries - [Connection from external IP to OMI related Ports](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/NetworkConnectiontoOMIPorts.yaml)
-## Workbooks
--- Threat Intelligence Workbook-- ## Process activity security content The following built-in process activity content is supported for ASIM normalization.
+### Solutions
+
+- [Endpoint Threat Protection Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-endpointthreat?tab=Overview)
+- [Legacy IOC Based Threat Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)
+ ### Analytics rules - [Probable AdFind Recon Tool Usage (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_AdFind_Usage.yaml)
The following built-in process activity content is supported for ASIM normalizat
The following built-in registry activity content is supported for ASIM normalization.
-### Analytic rules
+### Analytics rules
- [Potential Fodhelper UAC Bypass (ASIM Version)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PotentialFodhelperUACBypass(ASIMVersion).yaml)
The following built-in registry activity content is supported for ASIM normaliza
The following built-in web session related content is supported for ASIM normalization.
+### Solutions
+
+- [Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview)
+- [Threat Intelligence](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-threatintelligence-taxii?tab=Overview)
+ ### Analytics rules - [(Preview) TI map Domain entity to Web Session Events (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ThreatIntelligenceIndicator/DomainEntity_imWebSession.yaml)
sentinel Normalization Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-functions.md
# Advanced Security Information Model (ASIM) helper functions (Public preview)
-Advanced Security Information Model (ASIM) helper functions extend the KQL language providing functionality that helps interact with normalized data and in writing parsers. The following is a list of ASIM help functions:
+Advanced Security Information Model (ASIM) helper functions extend the KQL language with functionality that helps you interact with normalized data and write parsers.
-## Scalar functions
+## Enrichment lookup functions
-Scalar functions are used in expressions are typically invoked as part of an `extend` statement.
+Enrichment lookup functions provide an easy method of looking up known values based on their numeric representation. Such functions are useful because events often use the short numeric code, while users prefer the textual form. Most of the functions have two forms:
-| Function | Input parameters | Output | Description |
-| -- | - | | -- |
-| _ASIM_GetSourceBySourceType | SourceType (String) | List of sources (dynamic) | Retrieve the list of sources associated with the input source type from the `SourceBySourceType` Watchlist. This function is intended for use by parsers writers. |
-| _ASIM_LookupDnsQueryType | QueryType (Integer) | Query Type Name | Translate a numeric DNS resource record (RR) type to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4) |
-| _ASIM_LookupDnsResponseCode | ResponseCode (Integer) | Response Code Name | Translate a numeric DNS response code (RCODE) to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6) |
+The **lookup** version is a scalar function that accepts as input the numeric code and returns the textual form. Use the following KQL snippet with the **lookup** version:
+
+```KQL
+| extend ProtocolName = _ASIM_LookupNetworkProtocol (ProtocolNumber)
+```
+
+The **resolve** version is a tabular function that:
+
+- Is used as a KQL pipeline operator.
+- Accepts as input the name of the field holding the value to look up.
+- Sets the ASIM fields typically holding both the input value and the resulting lookup value.
+Use the following KQL snippet with the **resolve** version:
-## Tabular functions
+```KQL
+| invoke _ASIM_ResolveNetworkProtocol('ProtocolNumber')
+```
-Tabular functions are invoked using the `invoke` operator and return value by adding fields to the data set, as if they perform `extend`.
+This automatically populates the `NetworkProtocol` field with the result of the lookup.
-| Function | Input parameters | Extended fields | Description |
+The **resolve** version is preferable for use in ASIM parsers, while the lookup version is useful in general purpose queries. When an enrichment lookup function has to return more than one value, it will always use the **resolve** format.
+
+### Lookup type functions
+
+| Function | Input* | Output | Description |
| -- | - | | -- |
-| _ASIM_ResolveDnsQueryType | field (String) | `DnsQueryTypeName` | Translate a numeric DNS resource record (RR) type stored in the field specified to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4), and assigns the result to the field `DnsQueryTypeName` |
-| _ASIM_LookupDnsResponseCode | field (String) | `DnsResponseCodeName` | Translate a numeric DNS response code (RCODE) stored in the field specified to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6), and assigns the result to the field `DnsResponseCodeName` |
-| _ASIM_ResolveFQDN | field (String) | - `ExtractedHostname`<br> - `Domain`<br> - `DomainType` <br> - `FQDN` | Analyzes the value in the field specified and set the output fields accordingly. For more information, see [example](normalization-develop-parsers.md#resolvefqnd) in the article about developing parsers. |
-| _ASIM_ResolveSrcFQDN | field (String) | - `SrcHostname`<br> - `SrcDomain`<br> - `SrcDomainType`<br> - `SrcFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Src` fields |
-| _ASIM_ResolveDstFQDN | field (String) | - `DstHostname`<br> - `DstDomain`<br> - `DstDomainType`<br> - `SrcFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Dst` fields |
-| _ASIM_ResolveDvcFQDN | field (String) | - `DvcHostname`<br> - `DvcDomain`<br> - `DvcDomainType`<br> - `DvcFQDN` | Similar to _ASIM_ResolveFQDN, but sets the `Dvc` fields |
+| **_ASIM_LookupDnsQueryType** | Numeric DNS query type code | Query type name | Translate a numeric DNS resource record (RR) type to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4) |
+| **_ASIM_LookupDnsResponseCode** | Numeric DNS response code | Response code name | Translate a numeric DNS response code (RCODE) to its name, as defined by [IANA](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6) |
+| **_ASIM_LookupICMPType** | Numeric ICMP type | ICMP type name | Translate a numeric ICMP type to its name, as defined by [IANA](https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml#icmp-parameters-types) |
+| **_ASIM_LookupNetworkProtocol** | IP protocol number | IP protocol name | Translate a numeric IP protocol code to its name, as defined by [IANA](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) |
++
+### Resolve type functions
+
+The resolve format functions perform the same action as their lookup counterparts, but they accept a field name, provided as a string constant, as input, and they set predefined fields as output. The input value is also assigned to a predefined field.
+
+| Function | Extended fields |
+| -- | - |
+| **_ASIM_ResolveDnsQueryType** | - `DnsQueryType` for the input value<br> - `DnsQueryTypeName` for the output value |
+| **_ASIM_ResolveDnsResponseCode** | - `DnsResponseCode` for the input value<br> - `DnsResponseCodeName` for the output value |
+| **_ASIM_ResolveICMPType** | - `NetworkIcmpCode` for the input value<br> - `NetworkIcmpType` for the lookup value |
+| **_ASIM_ResolveNetworkProtocol** | - `NetworkProtocolNumber` for the input value<br>- `NetworkProtocol` for the lookup value |
+
+## Parser helper functions
+
+The following functions perform tasks which are common in parsers and useful to accelerate parser development.
+
+### Device resolution functions
+
+The device resolution functions analyze a hostname and determine whether it has domain information and the type of domain notation. The functions then populate the relevant ASIM fields representing a device. All the functions are resolve type functions and accept the name of the field containing the hostname, represented as a string, as input.
+
+| Function | Extended fields | Description |
+| -- | - | -- |
+| **_ASIM_ResolveFQDN** | - `ExtractedHostname`<br> - `Domain`<br> - `DomainType` <br> - `FQDN` | Analyzes the value in the field specified and sets the output fields accordingly. For more information, see the [example](normalization-develop-parsers.md#resolvefqnd) in the article about developing parsers. |
+| **_ASIM_ResolveSrcFQDN** | - `SrcHostname`<br> - `SrcDomain`<br> - `SrcDomainType`<br> - `SrcFQDN` | Similar to `_ASIM_ResolveFQDN`, but sets the `Src` fields |
+| **_ASIM_ResolveDstFQDN** | - `DstHostname`<br> - `DstDomain`<br> - `DstDomainType`<br> - `DstFQDN` | Similar to `_ASIM_ResolveFQDN`, but sets the `Dst` fields |
+| **_ASIM_ResolveDvcFQDN** | - `DvcHostname`<br> - `DvcDomain`<br> - `DvcDomainType`<br> - `DvcFQDN` | Similar to `_ASIM_ResolveFQDN`, but sets the `Dvc` fields |
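
For illustration, a minimal sketch of calling one of these functions; the table `MyAppliance_CL` and the column `ReportingHost` are hypothetical names, not part of the source article.

```KQL
// A sketch: populate the Dvc* fields from a hostname column.
// 'MyAppliance_CL' and 'ReportingHost' are hypothetical names.
MyAppliance_CL
| invoke _ASIM_ResolveDvcFQDN('ReportingHost')
| project TimeGenerated, DvcHostname, DvcDomain, DvcDomainType, DvcFQDN
```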
+### Source identification functions
+The **_ASIM_GetSourceBySourceType** function retrieves, from the `SourceBySourceType` Watchlist, the list of sources associated with the source type provided as input. The function is intended for use by parser writers. For more information, see [Filtering by source type using a Watchlist](normalization-develop-parsers.md#filtering-by-source-type-using-a-watchlist).
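
For example, a parser might restrict its input to registered sources along the following lines; the source type value `MY_DNS_APPLIANCE` is hypothetical, and the sketch assumes that sources are registered by computer name in the watchlist.

```KQL
// A sketch: filter a parser's input to sources registered for a source type.
// 'MY_DNS_APPLIANCE' is a hypothetical source type value in the watchlist.
let DnsSources = _ASIM_GetSourceBySourceType('MY_DNS_APPLIANCE');
Syslog
| where Computer in (DnsSources)
```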
## <a name="next-steps"></a>Next steps
sentinel Normalization Ingest Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-ingest-time.md
+
+ Title: Ingest time normalization | Microsoft Docs
+description: This article explains how Microsoft Sentinel normalizes data at ingest
++ Last updated : 12/28/2022+++
+# Ingest time normalization
+
+## Query time parsing
+
+As discussed in the [ASIM overview](normalization.md), Microsoft Sentinel uses both query time and ingest time normalization to take advantage of the benefits of each.
+
+To use query time normalization, use the [query time unifying parsers](normalization-about-parsers.md#unifying-parsers), such as `_Im_Dns`, in your queries; a minimal example follows the list below. Normalizing using query time parsing has several advantages:
+
+- **Preserving the original format**: Query time normalization doesn't require the data to be modified, thus preserving the original data format sent by the source.
+- **Avoiding potential duplicate storage**: Since the normalized data is only a view of the original data, there is no need to store both original and normalized data.
+- **Easier development**: Since query time parsers present a view of the data and don't modify the data, they are easy to develop. Developing, testing and fixing a parser can all be done on existing data. Moreover, parsers can be fixed when an issue is discovered and the fix will apply to existing data.
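
A minimal sketch of query time normalization, using only the standard `starttime` filtering parameter; the aggregation shown is illustrative.

```KQL
// Query time normalization: the unifying parser normalizes each source as you query it.
_Im_Dns(starttime = ago(1h))
| summarize QueryCount = count() by DnsResponseCodeName
```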
+
+## Ingest time parsing
+
+While ASIM query time parsers are optimized, query time parsing can slow down queries, especially on large data sets.
+
+Ingest time parsing enables transforming events to a normalized schema as they're ingested into Microsoft Sentinel, and storing them in a normalized format. Ingest time parsing is less flexible, and parsers are harder to develop, but because the data is stored in a normalized format, it offers better performance.
+
+Normalized data can be stored in Microsoft Sentinel's native normalized tables, or in a custom table that uses an ASIM schema. A custom table that has a schema close to, but not identical to, an ASIM schema also provides the performance benefits of ingest time normalization.
+
+Currently, ASIM supports the following native normalized tables as a destination for ingest time normalization:
+- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the [DNS](normalization-schema-dns.md) schema.
+- [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the [Network Session](network-normalization-schema.md) schema
+
+The advantage of native normalized tables is that they are included by default in the ASIM unifying parsers. Custom normalized tables can be included in the unifying parsers, as discussed in [Manage Parsers](normalization-manage-parsers.md).
+
+## Combining ingest time and query time normalization
+
+Queries should always use the [query time unifying parsers](normalization-about-parsers.md#unifying-parsers), such as `_Im_Dns`, to take advantage of both query time and ingest time normalization. Native normalized tables are included in the queried data by using a stub parser.
+
+The stub parser is a query time parser that uses as input the normalized table. Since the normalized table doesn't require parsing, the stub parser is efficient.
+
+The stub parser presents to queries a view that adds the following to the ASIM native table (a simplified sketch follows this list):
+
+- **Aliases** - To avoid wasting storage on repeating values, aliases are not stored in ASIM native tables and are added at query time by the stub parsers.
+- **Constant values** - Like aliases, and for the same reason, ASIM normalized tables also don't store constant values such as [EventSchema](normalization-common-fields.md#eventschema). The stub parser adds those fields. An ASIM normalized table is shared by many sources, and ingest time parsers can change their output version. Therefore, fields such as [EventProduct](normalization-common-fields.md#eventproduct), [EventVendor](normalization-common-fields.md#eventvendor), and [EventSchemaVersion](normalization-common-fields.md#eventschemaversion) are not constant and are not added by the stub parser.
+- **Filtering** - The stub parser also implements filtering. While ASIM native tables don't need filtering parsers to achieve better performance, filtering is needed to support inclusion in the unifying parser.
+- **Updates and fixes** - Using a stub parser enables fixing issues faster. For example, if data was ingested incorrectly, an IP address might not have been extracted from the message field during ingestion. The IP address can be extracted by the stub parser at query time.
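
The following simplified sketch illustrates the idea over the DNS native table; the specific constant, aliases, and column names are illustrative rather than the actual stub parser logic.

```KQL
// A simplified stub-parser sketch over the DNS native table (illustrative only).
ASimDnsActivityLogs
| extend
    EventSchema = 'Dns',   // constant value added at query time, not stored in the table
    Dvc = DvcHostname,     // alias added at query time
    Src = SrcIpAddr        // alias added at query time
```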
+
+When using custom normalized tables, create your own stub parser to implement this functionality, and add it to the unifying parsers as discussed in [Manage Parsers](normalization-manage-parsers.md). Use the stub parser for the native table, such as the [DNS native table stub parser](https://github.com/Azure/Azure-Sentinel/blob/master/Parsers/ASimDns/Parsers/ASimDnsNative.yaml) and its [filtering counterpart](https://github.com/Azure/Azure-Sentinel/blob/master/Parsers/ASimDns/Parsers/vimDnsNative.yaml), as a starting point. If your table is semi-normalized, use the stub parser to perform the additional parsing and normalization needed.
+
+Learn more about writing parsers in [Developing ASIM parsers](normalization-develop-parsers.md).
+
+## Implementing ingest time normalization
+
+To normalize data at ingest, you need to use a [data collection rule (DCR)](../azure-monitor/essentials/data-collection-rule-overview.md). The procedure for implementing the DCR depends on the method used to ingest the data. For more information, refer to [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md).
+
+A [KQL](kusto-overview.md) transformation query is the core of a DCR. The KQL version used in DCRs is slightly different from the version used elsewhere in Microsoft Sentinel, to accommodate the requirements of pipeline event processing. Therefore, you need to modify any query-time parser before you can use it in a DCR. For more information on the differences, and on how to convert a query-time parser to an ingest-time parser, read about the [DCR KQL limitations](../azure-monitor/essentials/data-collection-transformations-structure.md#kql-limitations).
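
To give a sense of the shape of such a transformation, here's a rough sketch; in a DCR the incoming stream is referenced as `source`, while the column `RawData` and the extraction logic below are hypothetical and not a complete ASIM parser.

```KQL
// A sketch of a DCR transformation (not a complete ASIM parser).
// 'RawData' and the parsed fields are hypothetical examples of mapping
// source data to the destination table's normalized columns.
source
| extend DnsQuery = extract(@'query=(\S+)', 1, RawData)
| extend EventType = 'Query'
| project-away RawData
```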
++
+## <a name="next-steps"></a>Next steps
+
+For more information, see:
+
+- [Normalization and the Advanced Security Information Model (ASIM)](normalization.md)
+- [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Transform or customize data at ingestion time in Microsoft Sentinel](configure-data-transformation.md)
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
ASIM Network Session parsers are available in every workspace. Microsoft Sentine
| **Microsoft Defender for IoT sensor** | | `_Im_NetworkSession_MD4IoTSensorVxx` | | **Palo Alto PanOS traffic logs** | Collected using CEF. | `_Im_NetworkSession_PaloAltoCEFVxx` | | **Sysmon for Linux** (event 3) | Collected using the Log Analytics Agent<br> or the Azure Monitor Agent. |`_Im_NetworkSession_LinuxSysmonVxx` |
-| **Vectra AI** | | `_Im_NetworkSession_VectraIAVxx` |
+| **Vectra AI** | Supports the [pack](normalization-about-parsers.md#the-pack-parameter) parameter. | `_Im_NetworkSession_VectraIAVxx` |
| **Windows Firewall logs** | Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. | `_Im_NetworkSession_MicrosoftWindowsEventFirewallVxx`| | **Watchguard FirewareOW** | Collected using Syslog. | `_Im_NetworkSession_WatchGuardFirewareOSVxx` | | **Zscaler ZIA firewall logs** | Collected using CEF. | `_Im_NetworkSessionZscalerZIAVxx` |
ASIM Web Session parsers are available in every workspace. Microsoft Sentinel pr
| **Source** | **Notes** | **Parser** | | | | | | **Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` |
-| **Vectra AI Streams** | | `_Im_WebSession_VectraAIVxx` |
+| **Vectra AI Streams** | Supports the [pack](normalization-about-parsers.md#the-pack-parameter) parameter. | `_Im_WebSession_VectraAIVxx` |
| **Zscaler ZIA** | Collected using CEF | `_Im_WebSessionZscalerZIAVxx` | Deploy the workspace deployed parsers version from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
sentinel Normalization Schema Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-audit.md
The Microsoft Sentinel Audit events normalization schema represents events associated with the audit trail of information systems. The audit trail logs system configuration activities and policy changes. Such changes are often performed by system administrators, but can also be performed by users when configuring the settings of their own applications.
-Every system logs audit events alongside its core activity logs. For example, a Firewall will log events about the network sessions is processes, as well as audit events about configuration changes applied to the Firewall itself.
+Every system logs audit events alongside its core activity logs. For example, a firewall logs events about the network sessions it processes, and audit events about configuration changes applied to the firewall itself.
For more information about normalization in Microsoft Sentinel, see [Normalization and the Advanced Security Information Model (ASIM)](normalization.md).
For more information about normalization in Microsoft Sentinel, see [Normalizati
## Schema overview The main fields of an audit event are:-- The object, which may be, for example a managed resource or policy rule, that the event focuses on, represented by the field [Object](#object). The field [ObjectType](#objecttype) specifies the type of the object.
+- The object that the event focuses on, for example, a managed resource or a policy rule, represented by the field [Object](#object). The field [ObjectType](#objecttype) specifies the type of the object.
- The application context of the object, represented by the field [TargetAppName](#targetappname), which is aliased by [Application](#application).-- The operation performed on the object, represented by the fields [EventType](#eventtype) and [Operation](#operation). While [Operation](#operation) is the value the source reported, [EventType](#eventtype) is a normalized version, that is more consistent across sources.
+- The operation performed on the object, represented by the fields [EventType](#eventtype) and [Operation](#operation). While [Operation](#operation) is the value the source reported, [EventType](#eventtype) is a normalized version that is more consistent across sources.
- The old and new values for the object, if applicable, represented by [OldValue](#oldvalue) and [NewValue](#newvalue) respectively.
-Audit events also reference the following entities which are involved in the configuration operation:
+Audit events also reference the following entities, which are involved in the configuration operation:
- **Actor** - The user performing the configuration operation. - **TargetApp** - The application or system to which the configuration operation applies.
Audit events also reference the following entities which are involved in the con
The descriptor `Dvc` is used for the reporting device, which is the local system for sessions reported by an endpoint, and the intermediary or security device in other cases. +
+## Parsers
+
+### Deploying and using audit events parsers
+
+Deploy the ASIM audit events parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). To query across all audit event sources, use the unifying parser `imAuditEvent` as the table name in your query.
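+For example, the following query is a minimal sketch that runs across all normalized audit event sources; the fields it uses are documented later in this article:
+
+```kql
+// Count successful configuration changes per actor and normalized operation type
+imAuditEvent
+| where EventResult == 'Success'
+| summarize ChangeCount = count() by ActorUsername, EventType
+```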
+
+For more information about using ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
+For the list of the audit event parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#audit-event-parsers).
+
+### Add your own normalized parsers
+
+When implementing custom parsers for the Audit Event information model, name your KQL functions using the following syntax: `imAuditEvent<vendor><Product>`. Refer to the article [Managing ASIM parsers](normalization-manage-parsers.md) to learn how to add your custom parsers to the audit event unifying parser.
+
+### Filtering parser parameters
+
+The audit events parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
+
+The following filtering parameters are available:
+
+| Name | Type | Description |
+|-|--|-|
+| **starttime** | datetime | Filter only events that ran at or after this time. This parameter uses the `TimeGenerated` field as the time designator of the event. |
+| **endtime** | datetime | Filter only events that finished running at or before this time. This parameter uses the `TimeGenerated` field as the time designator of the event. |
+| **srcipaddr_has_any_prefix** | dynamic | Filter only events for which the source IP address, as represented in the [SrcIpAddr](#srcipaddr) field, matches any of the listed IP address prefixes. |
+| **eventtype_in**| string | Filter only events in which the event type, as represented in the [EventType](#eventtype) field, is any of the values provided. |
+| **eventresult**| string | Filter only events in which the event result, as represented in the [EventResult](normalization-common-fields.md#eventresult) field, is equal to the parameter value. |
+| **actorusername_has_any** | dynamic/string | Filter only events in which the [ActorUsername](#actorusername) field includes any of the terms provided. |
+| **operation_has_any** | dynamic/string | Filter only events in which the [Operation](#operation) field includes any of the terms provided. |
+| **object_has_any** | dynamic/string | Filter only events in which the [Object](#object) field includes any of the terms provided. |
+| **newvalue_has_any** | dynamic/string | Filter only events in which the [NewValue](#newvalue) field includes any of the terms provided. |
+
+Some parameters can accept either a list of values of type `dynamic` or a single string value. To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals.md). For example: `dynamic(['192.168.','10.'])`.
+
+For example, to filter only audit events with the terms `install` or `update` in their [Operation](#operation) field, from the last day, use:
+
+```kql
+imAuditEvent (operation_has_any=dynamic(['install','update']), starttime = ago(1d), endtime=now())
+```
+ ## Schema details ### Common ASIM fields
The following list mentions fields that have specific guidelines for Audit Event
| Field | Class | Type | Description | ||-||--|
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation audited by the event using a normalized value. Use [EventSubType](#eventsubtype) to provide further details which the normalized value does not convey, and [Operation](#operation). to store the operation as reported by the reported device.<br><br> For Audit Event records, the allowed values are:<br> - `Set`<br>- `Read`<br>- `Create`<br>- `Delete`<br>- `Execute`<br>- `Install`<br>- `Clear`<br>- `Enable`<br>- `Disable`<br>- `Other`. <br><br>Audit events represent a large variety of operations, and the `Other` value enables mapping operations that have no corresponding `EventType`. However, the use of `Other` limit the usability of the event and should be avoided if possible. |
-| <a name="eventsubtype"></a> **EventSubType** | Optional | String | Provides further details which the normalized value in [EventType](#eventtype) does not convey. |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation audited by the event using a normalized value. Use [EventSubType](#eventsubtype) to provide further details, which the normalized value does not convey, and [Operation](#operation). to store the operation as reported by the reporting device.<br><br> For Audit Event records, the allowed values are:<br> - `Set`<br>- `Read`<br>- `Create`<br>- `Delete`<br>- `Execute`<br>- `Install`<br>- `Clear`<br>- `Enable`<br>- `Disable`<br>- `Other`. <br><br>Audit events represent a large variety of operations, and the `Other` value enables mapping operations that have no corresponding `EventType`. However, the use of `Other` limits the usability of the event and should be avoided if possible. |
+| <a name="eventsubtype"></a> **EventSubType** | Optional | String | Provides further details, which the normalized value in [EventType](#eventtype) does not convey. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `AuditEvent`. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1`. | #### All common fields
-Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For more information on each field, refer to the [ASIM Common Fields](normalization-common-fields.md) article.
+Fields that appear in the table are common to all ASIM schemas. Any guideline specified in this document overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For more information on each field, see the [ASIM Common Fields](normalization-common-fields.md) article.
| **Class** | **Fields** | | | - |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | ||--||--|
-| <a name="actoruserid"></a>**ActorUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the Actor. For more information, and for alternative fields for additional IDs, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12-1-4141952679-1282074057-627758481-2916039507` |
+| <a name="actoruserid"></a>**ActorUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the Actor. For more information, and for alternative fields for other IDs, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12-1-4141952679-1282074057-627758481-2916039507` |
| **ActorScope** | Optional | String | The scope, such as Azure AD Domain Name, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. For more information and list of allowed values, see [UserScope](normalization-about-schemas.md#userscope) in the [Schema Overview article](normalization-about-schemas.md).| | **ActorScopeId** | Optional | String | The scope ID, such as Azure AD Directory ID, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. For more information and list of allowed values, see [UserScopeId](normalization-about-schemas.md#userscopeid) in the [Schema Overview article](normalization-about-schemas.md).| | **ActorUserIdType**| Optional | UserIdType | The type of the ID stored in the [ActorUserId](#actoruserid) field. For more information and list of allowed values, see [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md).|
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dst"></a>**Dst** | Recommended | String | A unique identifier of the authentication target. <br><br>This field may alias the [TargerDvcId](#targetdvcid), [TargetHostname](#targethostname), [TargetIpAddr](#targetipaddr), [TargetAppId](#targetappid), or [TargetAppName](#targetappname) fields. <br><br>Example: `192.168.12.1` | | <a name="targethostname"></a>**TargetHostname** | Recommended | Hostname | The target device hostname, excluding domain information.<br><br>Example: `DESKTOP-1282V4D` | | <a name="targetdomain"></a>**TargetDomain** | Recommended | String | The domain of the target device.<br><br>Example: `Contoso` |
-| <a name="targetdomaintype"></a>**TargetDomainType** | Recommended | Enumerated | The type of [TargetDomain](#targetdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [TargetDomain](#targetdomain) is used. |
+| <a name="targetdomaintype"></a>**TargetDomainType** | Recommended | Enumerated | The type of [TargetDomain](#targetdomain). For a list of allowed values and further information, refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [TargetDomain](#targetdomain) is used. |
| **TargetFQDN** | Optional | String | The target device hostname, including domain information when available. <br><br>Example: `Contoso\DESKTOP-1282V4D` <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [TargetDomainType](#targetdomaintype) reflects the format used. | | <a name = "targetdescription"></a>**TargetDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. | | <a name="targetdvcid"></a>**TargetDvcId** | Optional | String | The ID of the target device. If multiple IDs are available, use the most important one, and store the others in the fields `TargetDvc<DvcIdType>`. <br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` | | <a name="targetdvcscopeid"></a>**TargetDvcScopeId** | Optional | String | The cloud platform scope ID the device belongs to. **TargetDvcScopeId** maps to a subscription ID on Azure and to an account ID on AWS. | | <a name="targetdvcscope"></a>**TargetDvcScope** | Optional | String | The cloud platform scope the device belongs to. **TargetDvcScope** maps to a subscription ID on Azure and to an account ID on AWS. |
-| **TargetDvcIdType** | Optional | Enumerated | The type of [TargetDvcId](#targetdvcid). For a list of allowed values and further information refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Required if **TargetDeviceId** is used.|
-| **TargetDeviceType** | Optional | Enumerated | The type of the target device. For a list of allowed values and further information refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
+| **TargetDvcIdType** | Optional | Enumerated | The type of [TargetDvcId](#targetdvcid). For a list of allowed values and further information, refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Required if **TargetDeviceId** is used.|
+| **TargetDeviceType** | Optional | Enumerated | The type of the target device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
|<a name="targetipaddr"></a>**TargetIpAddr** |Optional | IP Address|The IP address of the target device. <br><br>Example: `2.2.2.2` | | **TargetDvcOs**| Optional| String| The OS of the target device. <br><br>Example: `Windows 10`| | **TargetPortNumber** |Optional |Integer |The port of the target device.|
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcdvcscope"></a>**SrcDvcScope** | Optional | String | The cloud platform scope the device belongs to. **SrcDvcScope** map to a subscription ID on Azure and to an account ID on AWS. | | **SrcDvcIdType** | Optional | DvcIdType | The type of [SrcDvcId](#srcdvcid). For a list of allowed values and further information, refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. | | **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
-| <a name="srcsubscription"></a>**SrcSubscriptionId** | Optional | String | The cloud platform subscription ID the source device belongs to. **SrcSubscriptionId** map to a subscription ID on Azure and to an account ID on AWS. |
+| <a name="srcsubscriptionid"></a>**SrcSubscriptionId** | Optional | String | The cloud platform subscription ID the source device belongs to. **SrcSubscriptionId** map to a subscription ID on Azure and to an account ID on AWS. |
| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` | | **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` | | **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
The following fields are used to represent that inspection performed by a securi
| **ThreatIsActive** | Optional | Boolean | True if the identified threat is considered an active threat. | | **ThreatFirstReportedTime** | Optional | datetime | The first time the IP address or domain were identified as a threat. | | **ThreatLastReportedTime** | Optional | datetime | The last time the IP address or domain were identified as a threat.|
+| **ThreatIpAddr** | Optional | IP Address | An IP address for which a threat was identified. The field [ThreatField](#threatfield) contains the name of the field **ThreatIpAddr** represents. |
+| <a name="threatfield"></a>**ThreatField** | Optional | Enumerated | The field for which a threat was identified. The value is either `SrcIpAddr` or `TargetIpAddr`. |
+ ## Next steps
sentinel Normalization Schema Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-dns.md
_Im_Dns | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
-### Unifying parsers
+### Out-of-the-box parsers
-To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `_Im_Dns` filtering parser or the `_ASim_Dns` parameter-less parser. You can also use workspace deployed `ImDns` and `ASimDns` parsers.
+To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the unifying parser `_Im_Dns` as the table name in your query.
-### Out-of-the-box, source-specific parsers
-
-For the list of the DNS parsers Microsoft Sentinel provides out-of-the-box refer to the [ASIM parsers list](normalization-parsers-list.md#dns-parsers)
+For the list of the DNS parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#dns-parsers).
### Add your own normalized parsers
-When implementing custom parsers for the Dns information model, name your KQL functions in the following format:
--- `vimDns<vendor><Product>` for parametrized parsers-- `ASimDns<vendor><Product>` for regular parsers
+When implementing custom parsers for the DNS information model, name your KQL functions using the format `vimDns<vendor><Product>`. Refer to the article [Managing ASIM parsers](normalization-manage-parsers.md) to learn how to add your custom parsers to the DNS unifying parser.
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parsers are optional, they can improve your query performance.
+The DNS parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
The following filtering parameters are available:
The following filtering parameters are available:
| **starttime** | datetime | Filter only DNS queries that ran at or after this time. | | **endtime** | datetime | Filter only DNS queries that finished running at or before this time. | | **srcipaddr** | string | Filter only DNS queries from this source IP address. |
-| **domain_has_any**| dynamic | Filter only DNS queries where the `domain` (or `query`) has any of the listed domain names, including as part of the event domain. The length of the list is limited to 10,000 items.
+| **domain_has_any**| dynamic/string | Filter only DNS queries where the `domain` (or `query`) has any of the listed domain names, including as part of the event domain. The length of the list is limited to 10,000 items.
| **responsecodename** | string | Filter only DNS queries for which the response code name matches the provided value. <br>For example: `NXDOMAIN` | | **response_has_ipv4** | string | Filter only DNS queries in which the response field includes the provided IP address or IP address prefix. Use this parameter when you want to filter on a single IP address or prefix. <br><br>Results aren't returned for sources that don't provide a response.| | **response_has_any_prefix** | dynamic| Filter only DNS queries in which the response field includes any of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. <br><br>Use this parameter when you want to filter on a list of IP addresses or prefixes. <br><br>Results aren't returned for sources that don't provide a response. The length of the list is limited to 10,000 items. |
To filter only DNS queries for a specified list of domain names, use:
let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co"]); _Im_Dns (domain_has_any = torProxies) ```
-> [!TIP]
-> To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals.md). For example: `dynamic(['192.168.','10.'])`.
->
+
+Some parameters can accept either a list of values of type `dynamic` or a single string value. To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals.md). For example: `dynamic(['192.168.','10.'])`.
## Normalized content
sentinel Normalization Schema File Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-file-event.md
+
+ Title: The Advanced Security Information Model (ASIM) File Event normalization schema reference (Public preview)| Microsoft Docs
+description: This article describes the Microsoft Sentinel File Event normalization schema.
++ Last updated : 11/09/2021+++
+# The Advanced Security Information Model (ASIM) File Event normalization schema reference (Public preview)
+
+The File Event normalization schema is used to describe file activity such as creating, modifying, or deleting files or documents. Such events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
+
+For more information about normalization in Microsoft Sentinel, see [Normalization and the Advanced Security Information Model (ASIM)](normalization.md).
+
+> [!IMPORTANT]
+> The File Event normalization schema is currently in PREVIEW. This feature is provided without a service level agreement, and is not recommended for production workloads.
+>
+> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Parsers
+
+### Deploying and using file activity parsers
+
+Deploy the ASIM File Activity parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). To query across all File Activity sources, use the unifying parser `imFileEvent` as the table name in your query.
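+For example, a minimal sketch of a cross-source query using the unifying parser, with fields documented later in this article, might look like this:
+
+```kql
+// List recent file deletions reported by any normalized file activity source
+imFileEvent
+| where EventType == 'FileDeleted'
+| project TimeGenerated, ActorUsername, TargetFilePath, SrcIpAddr
+```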
+
+For more information about using ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
+For the list of the file activity parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#file-activity-parsers).
+
+### Add your own normalized parsers
+
+When implementing custom parsers for the File Event information model, name your KQL functions using the following syntax: `imFileEvent<vendor><Product>`.
+
+Refer to the article [Managing ASIM parsers](normalization-manage-parsers.md) to learn how to add your custom parsers to the file activity unifying parser.
++
+## Normalized content
+
+For a full list of analytics rules that use normalized File Activity events, see [File Activity security content](normalization-content.md#file-activity-security-content).
+
+## Schema overview
+
+The File Event information model is aligned with the [OSSEM file entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/file.md).
+
+The File Event schema references the following entities, which are central to file activities:
+
+- **Actor**. The user that initiated the file activity
+- **ActingProcess**. The process used by the Actor to initiate the file activity
+- **TargetFile**. The file on which the operation was performed
+- **Source File (SrcFile)**. Stores file information prior to the operation.
+
+The relationship between these entities is best demonstrated as follows: An **Actor** performs a file operation using an **Acting Process**, which modifies the **Source File** to the **Target File**.
+
+For example: `JohnDoe` (**Actor**) uses `Windows File Explorer` (**Acting process**) to rename `new.doc` (**Source File**) to `old.doc` (**Target File**).
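+As a sketch, a query that surfaces such rename operations and the entities involved could look like the following; the field names are taken from the tables later in this article:
+
+```kql
+// Show who renamed which file, and with which process
+imFileEvent
+| where EventType == 'FileRenamed'
+| project TimeGenerated, ActorUsername, ActingProcessName, SrcFilePath, TargetFilePath
+```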
++
+## Schema details
+
+### Common fields
+
+> [!IMPORTANT]
+> Fields common to all schemas are described in detail in the [ASIM Common Fields](normalization-common-fields.md) article.
+>
+
+#### Fields with specific guidelines for the File Event schema
+
+The following list mentions fields that have specific guidelines for File activity events:
+
+| **Field** | **Class** | **Type** | **Description** |
+| | | | |
+| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For File records, supported values include: <br><br>- `FileAccessed`<br>- `FileCreated`<br>- `FileModified`<br>- `FileDeleted`<br>- `FileRenamed`<br>- `FileCopied`<br>- `FileMoved`<br>- `FolderCreated`<br>- `FolderDeleted` |
+| **EventSchema** | Mandatory | String | The name of the schema documented here is **FileEvent**. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.1` |
+| **Dvc** fields| - | - | For File activity events, device fields refer to the system on which the file activity occurred. |
++
+> [!IMPORTANT]
+> The `EventSchema` field is currently optional but will become Mandatory on September 1st 2022.
+>
+
+#### All common fields
+
+Fields that appear in the table are common to all ASIM schemas. Any schema-specific guideline in this document overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For more information on each field, see the [ASIM Common Fields](normalization-common-fields.md) article.
+
+| **Class** | **Fields** |
+| | - |
+| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>|
+| Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br>- [EventUid](normalization-common-fields.md#eventuid)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br> - [EventOwner](normalization-common-fields.md#eventowner)<br>- [DvcZone](normalization-common-fields.md#dvczone)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)<br>- [DvcScopeId](normalization-common-fields.md#dvcscopeid)<br>- [DvcScope](normalization-common-fields.md#dvcscope)|
+
+### Target file fields
+
+The following fields represent information about the target file in a file operation. If the operation involves a single file, `FileCreated` for example, it is represented by the target file fields.
+
+| Field | Class | Type | Description |
+||--||--|
+|**TargetFileCreationTime** | Optional|Date/Time |The time at which the target file was created. |
+|**TargetFileDirectory** | Optional|String |The target file folder or location. This field should be similar to the [TargetFilePath](#targetfilepath) field, without the final element. <br><br>**Note**: A parser can provide this value if the value is available in the log source, and does not need to be extracted from the full path.|
+|**TargetFileExtension** |Optional |String | The target file extension.<br><br>**Note**: A parser can provide this value if the value is available in the log source, and does not need to be extracted from the full path.|
+| **TargetFileMimeType**|Optional | Enumerated| The Mime, or Media, type of the target file. Allowed values are listed in the [IANA Media Types](https://www.iana.org/assignments/media-types/media-types.xhtml) repository.|
+| <a name='targetfilename'></a>**TargetFileName**|Recommended |String |The name of the target file, without a path or a location, but with an extension if relevant. This field should be similar to the final element in the [TargetFilePath](#targetfilepath) field.|
+|**FileName** |Alias | | Alias to the [TargetFileName](#targetfilename) field.|
+|<a name="targetfilepath"></a>**TargetFilePath** | Mandatory| String| The full, normalized path of the target file, including the folder or location, the file name, and the extension. For more information, see [Path structure](#path-structure). <br><br>**Note**: If the record does not include folder or location information, store the filename only here. <br><br>Example: `C:\Windows\System32\notepad.exe`|
+| **TargetFilePathType** | Mandatory|Enumerated | The type of [TargetFilePath](#targetfilepath). For more information, see [Path structure](#path-structure). |
+|**FilePath** |Alias | | Alias to the [TargetFilePath](#targetfilepath) field.|
+| **TargetFileMD5**| Optional| MD5|The MD5 hash of the target file. <br><br>Example: `75a599802f1fa166cdadb360960b1dd0` |
+| **TargetFileSHA1** |Optional |SHA1 |The SHA-1 hash of the target file. <br><br>Example:<br> `d55c5a4df19b46db8c54`<br>`c801c4665d3338acdab0`|
+| **TargetFileSHA256** | Optional|SHA256 |The SHA-256 hash of the target file. <br><br>Example:<br> `e81bb824c4a09a811af17deae22f22dd`<br>`2e1ec8cbb00b22629d2899f7c68da274` |
+| **TargetFileSHA512**| Optional| SHA512|The SHA-512 hash of the target file. |
+| **Hash** | Alias | |Alias to the best available Target File hash. |
+| **HashType** | Recommended | String | The type of hash stored in the Hash alias field. Allowed values are `MD5`, `SHA`, `SHA256`, `SHA512`, and `IMPHASH`. Mandatory if `Hash` is populated. |
+| **TargetFileSize** |Optional | Integer|The size of the target file in bytes. |
+
+### Source file fields
+
+The following fields represent information about the source file in a file operation that has both a source and a destination, such as copy. If the operation involves a single file, it is represented by the target file fields.
+
+| Field | Class | Type | Description |
+||--||--|
+| **SrcFileCreationTime**|Optional |Date/Time |The time at which the source file was created. |
+|**SrcFileDirectory** | Optional| String| The source file folder or location. This field should be similar to the [SrcFilePath](#srcfilepath) field, without the final element. <br><br>**Note**: A parser can provide this value if the value is available in the log source, and does not need to be extracted from the full path.|
+| **SrcFileExtension**|Optional | String|The source file extension. <br><br>**Note**: A parser can provide this value if the value is available in the log source, and does not need to be extracted from the full path.|
+|**SrcFileMimeType** |Optional |Enumerated | The Mime or Media type of the source file. Supported values are listed in the [IANA Media Types](https://www.iana.org/assignments/media-types/media-types.xhtml) repository. |
+|**SrcFileName** |Recommended |String | The name of the source file, without a path or a location, but with an extension if relevant. This field should be similar to the last element in the [SrcFilePath](#srcfilepath) field. |
+| <a name="srcfilepath"></a>**SrcFilePath**| Recommended |String |The full, normalized path of the source file, including the folder or location, the file name, and the extension. <br><br>For more information, see [Path structure](#path-structure).<br><br>Example: `/etc/init.d/networking` |
+|**SrcFilePathType** | Recommended | Enumerated| The type of [SrcFilePath](#srcfilepath). For more information, see [Path structure](#path-structure).|
+|**SrcFileMD5**|Optional |MD5 | The MD5 hash of the source file. <br><br>Example: `75a599802f1fa166cdadb360960b1dd0` |
+|**SrcFileSHA1**|Optional |SHA1 |The SHA-1 hash of the source file.<br><br>Example:<br>`d55c5a4df19b46db8c54`<br>`c801c4665d3338acdab0` |
+|**SrcFileSHA256** | Optional|SHA256 |The SHA-256 hash of the source file. <br><br>Example:<br> `e81bb824c4a09a811af17deae22f22dd`<br>`2e1ec8cbb00b22629d2899f7c68da274`|
+|**SrcFileSHA512** |Optional | SHA512|The SHA-512 hash of the source file. |
+|**SrcFileSize**| Optional|Integer | The size of the source file in bytes.|
++
+### Actor fields
+
+| Field | Class | Type | Description |
+||--||--|
+| <a name="actoruserid"></a>**ActorUserId** | Recommended | String | A machine-readable, alphanumeric, unique representation of the Actor. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12` |
+| **ActorScope** | Optional | String | The scope, such as Azure AD tenant, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. For more information and list of allowed values, see [UserScope](normalization-about-schemas.md#userscope) in the [Schema Overview article](normalization-about-schemas.md).|
+| **ActorScopeId** | Optional | String | The scope ID, such as Azure AD Directory ID, in which [ActorUserId](#actoruserid) and [ActorUsername](#actorusername) are defined. For more information and list of allowed values, see [UserScopeId](normalization-about-schemas.md#userscopeid) in the [Schema Overview article](normalization-about-schemas.md).|
+| **ActorUserIdType**| Recommended | String | The type of the ID stored in the [ActorUserId](#actoruserid) field. For a list of allowed values and further information, refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
+| <a name="actorusername"></a>**ActorUsername** | Mandatory | String | The Actor username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [ActorUsernameType](#actorusernametype) field. If other username formats are available, store them in the fields `ActorUsername<UsernameType>`.<br><br>Example: `AlbertE` |
+|**User** | Alias| | Alias to the [ActorUsername](#actorusername) field. <br><br>Example: `CONTOSO\dadmin`|
+| <a name="actorusernametype"></a>**ActorUsernameType** | Mandatory | Enumerated | Specifies the type of the user name stored in the [ActorUsername](#actorusername) field. For a list of allowed values and further information, refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
+| **ActorSessionId** | Optional | String | The unique ID of the login session of the Actor. <br><br>Example: `999`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows this value must be numeric. <br><br>If you are using a Windows machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **ActorUserType** | Optional | UserType | The type of Actor. For a list of allowed values and further information, refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [ActorOriginalUserType](#actororiginalusertype) field. |
+| <a name="actororiginalusertype"></a>**ActorOriginalUserType** | Optional | String | The original destination user type, if provided by the reporting device. |
++
+### Acting process fields
+
+| Field | Class | Type | Description |
+||--||--|
+| **ActingProcessCommandLine** | Optional | String | The command line used to run the acting process. <br><br>Example: `"choco.exe" -v` |
+| <a name='actingprocessname'></a>**ActingProcessName** | Optional | string | The name of the acting process. This name is commonly derived from the image or executable file that's used to define the initial code and data that's mapped into the process' virtual address space.<br><br>Example: `C:\Windows\explorer.exe` |
+|**Process**| Alias| | Alias to [ActingProcessName](#actingprocessname)|
+| **ActingProcessId**| Optional | String | The process ID (PID) of the acting process.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **ActingProcessGuid** | Optional | string | A generated unique identifier (GUID) of the acting process. Enables identifying the process across systems. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
+
+### Source system related fields
+
+The following fields represent information about the system initiating the file activity, typically when carried over the network.
+
+| Field | Class | Type | Description |
+||--||--|
+| <a name='srcipaddr'></a>**SrcIpAddr** |Recommended |IP Address | When the operation is initiated by a remote system, the IP address of this system.<br><br>Example: `185.175.35.214`|
+| **IpAddr** | Alias | | Alias to [SrcIpAddr](#srcipaddr) |
+| **Src** | Alias | | Alias to [SrcIpAddr](#srcipaddr) |
+| **SrcPortNumber** | Optional | Integer | When the operation is initiated by a remote system, the port number from which the connection was initiated.<br><br>Example: `2335` |
+| <a name="srchostname"></a> **SrcHostname** | Recommended | Hostname | The source device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D` |
+|<a name="srcdomain"></a> **SrcDomain** | Recommended | String | The domain of the source device.<br><br>Example: `Contoso` |
+| <a name="srcdomaintype"></a>**SrcDomainType** | Recommended | DomainType | The type of [SrcDomain](#srcdomain). For a list of allowed values and further information, refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [SrcDomain](#srcdomain) is used. |
+| **SrcFQDN** | Optional | String | The source device hostname, including domain information when available. <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [SrcDomainType](#srcdomaintype) field reflects the format used. <br><br>Example: `Contoso\DESKTOP-1282V4D` |
+| <a name = "srcdescription"></a>**SrcDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
+| <a name="srcdvcid"></a>**SrcDvcId** | Optional | String | The ID of the source device. If multiple IDs are available, use the most important one, and store the others in the fields `SrcDvc<DvcIdType>`.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` |
+| <a name="srcdvcscopeid"></a>**SrcDvcScopeId** | Optional | String | The cloud platform scope ID the device belongs to. **SrcDvcScopeId** map to a subscription ID on Azure and to an account ID on AWS. |
+| <a name="srcdvcscope"></a>**SrcDvcScope** | Optional | String | The cloud platform scope the device belongs to. **SrcDvcScope** map to a subscription ID on Azure and to an account ID on AWS. |
+| **SrcDvcIdType** | Optional | DvcIdType | The type of [SrcDvcId](#srcdvcid). For a list of allowed values and further information, refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. |
+| **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
+| <a name="srcsubscriptionid"></a>**SrcSubscriptionId** | Optional | String | The cloud platform subscription ID the source device belongs to. **SrcSubscriptionId** map to a subscription ID on Azure and to an account ID on AWS. |
+| **SrcGeoCountry** | Optional | Country | The country associated with the source IP address.<br><br>Example: `USA` |
+| **SrcGeoRegion** | Optional | Region | The region within a country associated with the source IP address.<br><br>Example: `Vermont` |
+| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` |
+| **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` |
+| **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
+
+### Network related fields
+
+The following fields represent information about the network session when the file activity was carried over the network.
+
+| Field | Class | Type | Description |
+||--||--|
+|**HttpUserAgent** |Optional | String |When the operation is initiated by a remote system using HTTP or HTTPS, the user agent used.<br><br>For example:<br>`Mozilla/5.0 (Windows NT 10.0; Win64; x64)`<br>`AppleWebKit/537.36 (KHTML, like Gecko)`<br>` Chrome/42.0.2311.135`<br>`Safari/537.36 Edge/12.246`|
+| **NetworkApplicationProtocol**| Optional|String | When the operation is initiated by a remote system, this value is the application layer protocol used in the OSI model. <br><br>While this field is not enumerated, and any value is accepted, preferable values include: `HTTP`, `HTTPS`, `SMB`,`FTP`, and `SSH`<br><br>Example: `SMB`|
+++
+### Target application fields
+
+The following fields represent information about the destination application performing the file activity on behalf of the user. A destination application is usually related to over-the-network file activity, for example using SaaS (software as a service) applications.
+
+| Field | Class | Type | Description |
+|-|-||-|
+| <a name="targetappname"></a>**TargetAppName** | Optional | String | The name of the destination application.<br><br>Example: `Facebook` |
+| <a name="application"></a>**Application** | Alias | | Alias to [TargetAppName](#targetappname). |
+| <a name="targetappid"></a>**TargetAppId** | Optional | String | The ID of the destination application, as reported by the reporting device. |
+| <a name="targetapptype"></a>**TargetAppType** | Optional | AppType | The type of the destination application. For a list of allowed values and further information, refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [TargetAppName](#targetappname) or [TargetAppId](#targetappid) are used. |
+| <a name="targeturl"></a>**TargetUrl**| Optional | String| When the operation is initiated using HTTP or HTTPS, the URL used. <br><br>Example: `https://onedrive.live.com/?authkey=...` |
+| **Url** | Alias | | Alias to [TargetUrl](#targeturl) |
++
+### <a name="inspection-fields"></a>Inspection fields
+
+The following fields are used to represent the inspection performed by a security system, such as an anti-virus system. The threat identified is usually associated with the file on which the activity was performed, rather than with the activity itself.
+
+| Field | Class | Type | Description |
+| | | | |
+| <a name="rulename"></a>**RuleName** | Optional | String | The name or ID of the rule by associated with the inspection results. |
+| <a name="rulenumber"></a>**RuleNumber** | Optional | Integer | The number of the rule associated with the inspection results. |
+| **Rule** | Mandatory | String | Either the value of [RuleName](#rulename) or the value of [RuleNumber](#rulenumber). If the value of [RuleNumber](#rulenumber) is used, the type should be converted to string. |
+| **ThreatId** | Optional | String | The ID of the threat or malware identified in the file activity. |
+| **ThreatName** | Optional | String | The name of the threat or malware identified in the file activity.<br><br>Example: `EICAR Test File` |
+| **ThreatCategory** | Optional | String | The category of the threat or malware identified in the file activity.<br><br>Example: `Trojan` |
+| **ThreatRiskLevel** | Optional | Integer | The risk level associated with the identified threat. The level should be a number between **0** and **100**.<br><br>**Note**: The value might be provided in the source record by using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatOriginalRiskLevel](#threatoriginalriskleveloriginal). |
+| <a name="threatoriginalriskleveloriginal"></a>**ThreatOriginalRiskLevel** | Optional | String | The risk level as reported by the reporting device. |
+| **ThreatFilePath** | Optional | String | A file path for which a threat was identified. The field [ThreatField](#threatfield) contains the name of the field **ThreatFilePath** represents. |
+| <a name="threatfield"></a>**ThreatField** | Optional | Enumerated | The field for which a threat was identified. The value is either `SrcFilePath` or `DstFilePath`. |
+| **ThreatConfidence** | Optional | Integer | The confidence level of the threat identified, normalized to a value between 0 and 100.|
+| **ThreatOriginalConfidence** | Optional | String | The original confidence level of the threat identified, as reported by the reporting device.|
+| **ThreatIsActive** | Optional | Boolean | True if the identified threat is considered an active threat. |
+| **ThreatFirstReportedTime** | Optional | datetime | The first time the IP address or domain were identified as a threat. |
+| **ThreatLastReportedTime** | Optional | datetime | The last time the IP address or domain were identified as a threat.|
++
+### Path structure
+
+The path should be normalized to match one of the following formats. The format the value is normalized to will be reflected in the respective **FilePathType** field.
+
+|Type |Example |Notes |
+||||
+|**Windows Local** | `C:\Windows\System32\notepad.exe` | Since Windows path names are case insensitive, this type implies that the value is case insensitive. |
+|**Windows Share** | `\\Documents\My Shapes\Favorites.vssx` | Since Windows path names are case insensitive, this type implies that the value is case insensitive. |
+|**Unix** | `/etc/init.d/networking` | Since Unix path names are case-sensitive, this type implies that the value is case-sensitive. <br><br>- Use this type for AWS S3. Concatenate the bucket and key names to create the path. <br><br>- Use this type for Azure Blob storage object keys. |
+|**URL** | `https://1drv.ms/p/s!Av04S_*********we` | Use when the file path is available as a URL. URLs are not limited to *http* or *https*, and any value, including an FTP value, is valid. |
++
+## Schema updates
+
+These are the changes in version 0.1.1 of the schema:
+- Added the field `EventSchema`.
+
+These are the changes in version 0.2 of the schema:
+- Added [inspection fields](#inspection-fields).
+- Added the fields `ActorScope`, `TargetUserScope`, `HashType`, `TargetAppName`, `TargetAppId`, `TargetAppType`, `SrcGeoCountry`, `SrcGeoRegion`, `SrcGeoLongitude`, `SrcGeoLatitude`, `ActorSessionId`, `DvcScopeId`, and `DvcScope`.
+- Added the aliases `Url`, `IpAddr`, `FileName`, and `Src`.
+
+These are the changes in version 0.2.1 of the schema:
+- Added `Application` as an alias to `TargetAppName`.
+- Added the field `ActorScopeId`.
+- Added source device related fields.
++
+## Next steps
+
+For more information, see:
+
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced Security Information Model (ASIM) overview](normalization.md)
+- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-v1.md
For more information, see:
- [Normalization in Microsoft Sentinel](normalization.md) - [Microsoft Sentinel authentication normalization schema reference (Public preview)](authentication-normalization-schema.md)-- [Microsoft Sentinel file event normalization schema reference (Public preview)](file-event-normalization-schema.md)
+- [Microsoft Sentinel file event normalization schema reference (Public preview)](normalization-schema-file-event.md)
- [Microsoft Sentinel DNS normalization schema reference](normalization-schema-dns.md) - [Microsoft Sentinel process event normalization schema reference](process-events-normalization-schema.md) - [Microsoft Sentinel registry event normalization schema reference (Public preview)](registry-event-normalization-schema.md)
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
The following image shows how non-normalized data can be translated into normali
ASIM includes the following components:
-|Component |Description |
-|||
-|**Normalized schemas** | Cover standard sets of predictable event types that you can use when building unified capabilities. <br><br>Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values. <br><br> ASIM currently defines the following schemas:<br> - [Audit Event](normalization-schema-audit.md)<br> - [Authentication Event](authentication-normalization-schema.md)<br> - [DHCP Activity](dhcp-normalization-schema.md)<br> - [DNS Activity](normalization-schema-dns.md)<br> - [File Activity](file-event-normalization-schema.md) <br> - [Network Session](./network-normalization-schema.md)<br> - [Process Event](process-events-normalization-schema.md)<br> - [Registry Event](registry-event-normalization-schema.md)<br>- [User Management](user-management-normalization-schema.md)<br> - [Web Session](web-normalization-schema.md)<br><br>For more information, see [ASIM schemas](normalization-about-schemas.md). |
-|**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and versions of the built-in parsers that can be modified can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim). <br><br>For more information, see [ASIM parsers](normalization-parsers-overview.md). |
-|**Content for each normalized schema** | Includes analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content. <br><br>For more information, see [ASIM content](normalization-content.md). |
+### Normalized schemas
-### ASIM terminology
+Normalized schemas cover standard sets of predictable event types that you can use when building unified capabilities. Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values.
-ASIM uses the following terms:
+ASIM currently defines the following schemas:
-|Term |Description |
-|||
-|**Reporting device** | The system that sends the records to Microsoft Sentinel. This system may not be the subject system for the record that's being sent. |
-|**Record** |A unit of data sent from the reporting device. A record is often referred to as `log`, `event`, or `alert`, but can also be other types of data. |
-|**Content**, or **Content Item** |The different, customizable, or user-created artifacts than can be used with Microsoft Sentinel. Those artifacts include, for example, Analytics rules, Hunting queries and workbooks. A content item is one such artifact.|
+- [Audit Event](normalization-schema-audit.md)
+- [Authentication Event](authentication-normalization-schema.md)
+- [DHCP Activity](dhcp-normalization-schema.md)
+- [DNS Activity](normalization-schema-dns.md)
+- [File Activity](normalization-schema-file-event.md)
+- [Network Session](network-normalization-schema.md)
+- [Process Event](process-events-normalization-schema.md)
+- [Registry Event](registry-event-normalization-schema.md)
+- [User Management](user-management-normalization-schema.md)
+- [Web Session](web-normalization-schema.md)
+For more information, see [ASIM schemas](normalization-about-schemas.md).
-<br>
+### Query time parsers
+
+ASIM uses query time parsers to map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and modifiable versions of the built-in parsers, can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim).
+
+For more information, see [ASIM parsers](normalization-parsers-overview.md).
+
+### Ingest time normalization
+
+Query time parsers have many advantages:
+
+- They do not require the data to be modified, thus preserving the source format.
+- Since they do not modify the data, but rather present a view of the data, they are easy to develop. Developing, testing, and fixing a parser can all be done on existing data. Moreover, parsers can be fixed when an issue is discovered, and the fix applies to existing data.
+
+On the other hand, while ASIM parsers are optimized, query time parsing can slow down queries, especially on large data sets. To resolve this, Microsoft Sentinel complements query time parsing with ingest time parsing. Using ingest transformations, events are normalized into a normalized table, accelerating queries that use normalized data.
+
+Currently, ASIM supports the following normalized tables as a destination for ingest time normalization:
+- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the [DNS](normalization-schema-dns.md) schema.
+- [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the [Network Session](network-normalization-schema.md) schema.
+
+For more information, see [Ingest Time Normalization](normalization-ingest-time.md).
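+For example, a minimal sketch that reads ingest-time normalized DNS events directly from the native table, with no query time parser involved, could look like this (assuming the table exposes the normalized `EventResultDetails` and `SrcIpAddr` columns described in the DNS schema):
+
+```kql
+// Count DNS queries that returned NXDOMAIN, per source IP address
+ASimDnsActivityLogs
+| where EventResultDetails == 'NXDOMAIN'
+| summarize count() by SrcIpAddr
+```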
+
+### Content for each normalized schema
+
+Content that uses ASIM includes solutions, analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content.
+
+For more information, see [ASIM content](normalization-content.md).
## Getting started with ASIM To start using ASIM:
+- Deploy an ASIM-based domain solution, such as the [Network Threat Protection Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-networkthreatdetection?tab=Overview) solution.
+ - Activate analytics rule templates that use ASIM. For more information, see the [ASIM content list](normalization-content.md#builtin). - Use the ASIM hunting queries from the Microsoft Sentinel GitHub repository, when querying logs in KQL in the Microsoft Sentinel **Logs** page. For more information, see the [ASIM content list](normalization-content.md#builtin).
sentinel Registry Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/registry-event-normalization-schema.md
For more information, see:
- [Normalization in Microsoft Sentinel](normalization.md) - [Microsoft Sentinel authentication normalization schema reference (Public preview)](authentication-normalization-schema.md) - [Microsoft Sentinel DNS normalization schema reference](normalization-schema-dns.md)-- [Microsoft Sentinel file event normalization schema reference (Public preview)](file-event-normalization-schema.md)
+- [Microsoft Sentinel file event normalization schema reference (Public preview)](normalization-schema-file-event.md)
- [Microsoft Sentinel network normalization schema reference](./network-normalization-schema.md)
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
The most important fields in a Web Session schema are:
- [Url](#url), which reports the url that the client requested from the server. - The [SrcIpAddr](network-normalization-schema.md#srcipaddr) (aliased to [IpAddr](network-normalization-schema.md#ipaddr)), which represents the IP address from which the request was generated. -- [EventResultDetails](#eventresultdetails), which reports the HTTP Status Code.
+- [EventResultDetails](#eventresultdetails) field, which reports the HTTP Status Code.
Web Session events may also include [User](network-normalization-schema.md#user) and [Process](process-events-normalization-schema.md) information for the user and process initiating the request.
The following filtering parameters are available:
| **eventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. | | **eventresult** | string | Filter only network sessions with a specific **EventResult** value. |
+Some parameters can accept either a list of values of type `dynamic` or a single string value. To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals). For example: `dynamic(['192.168.','10.'])`.
For example, to filter only Web sessions for a specified list of domain names, use:
let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co"]);
_Im_WebSession (url_has_any = torProxies) ```
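A minimal sketch of passing a dynamic literal directly to a filtering parameter, assuming the `srcipaddr_has_any_prefix` parameter is available on the `_Im_WebSession` parser:

```kusto
// Filter at the parser level for web sessions originating from
// private address ranges, passing the prefix list as a dynamic literal.
_Im_WebSession(srcipaddr_has_any_prefix=dynamic(['192.168.', '10.']))
| summarize Sessions=count() by SrcIpAddr, Url
```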
-> [!TIP]
-> To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals.md). For example: `dynamic(['192.168.','10.'])`.
->
- ## Schema details The Web Session information model is aligned with the [OSSEM Network entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/network.md) and the [OSSEM HTTP entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/http.md).
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
async function downloadBlobAsStream(containerClient, blobName, writableStream) {
## Download to a string
-The following example downloads a blob to a string with [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method.
+The following Node.js example downloads a blob to a string with the [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) method. In Node.js, blob data is returned in a `readableStreamBody`.
```javascript
async function streamToBuffer(readableStream) {
} ```
+If you're working with JavaScript in the browser, blob data returns in a promise [blobBody](/javascript/api/@azure/storage-blob/blobdownloadresponseparsed#@azure-storage-blob-blobdownloadresponseparsed-blobbody). To learn more, see the example usage for browsers at [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download).
+ ## See also - [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md)
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Previously updated : 05/26/2022 Last updated : 01/03/2023 -+ # Create a storage account
To create an Azure storage account with the Azure portal, follow these steps:
1. From the left portal menu, select **Storage accounts** to display a list of your storage accounts. If the portal menu isn't visible, click the menu button to toggle it on.
- :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure Portal homepage showing the location of the Menu button near the top left corner of the browser." lightbox="media/storage-account-create/menu-expand-lrg.png":::
+ :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure portal homepage showing the location of the Menu button near the top left corner of the browser." lightbox="media/storage-account-create/menu-expand-lrg.png":::
1. On the **Storage accounts** page, select **Create**.
- :::image type="content" source="media/storage-account-create/create-button-sml.png" alt-text="Image showing the location of the create button within the Azure Portal Storage Accounts page." lightbox="media/storage-account-create/create-button-lrg.png":::
+ :::image type="content" source="media/storage-account-create/create-button-sml.png" alt-text="Image showing the location of the create button within the Azure portal Storage Accounts page." lightbox="media/storage-account-create/create-button-lrg.png":::
Options for your new storage account are organized into tabs in the **Create a storage account** page. The following sections describe each of the tabs and their options.
The following table describes the fields on the **Advanced** tab.
| Section | Field | Required or optional | Description | |--|--|--|--| | Security | Require secure transfer for REST API operations | Optional | Require secure transfer to ensure that incoming requests to this storage account are made only via HTTPS (default). Recommended for optimal security. For more information, see [Require secure transfer to ensure secure connections](storage-require-secure-transfer.md). |
-| Security | Enable blob public access | Optional | When enabled, this setting allows a user with the appropriate permissions to enable anonymous public access to a container in the storage account (default). Disabling this setting prevents all anonymous public access to the storage account. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).<br> <br> Enabling blob public access does not make blob data available for public access unless the user takes the additional step to explicitly configure the container's public access setting. |
+| Security | Allow enabling public access on containers | Optional | When enabled, this setting allows a user with the appropriate permissions to enable anonymous public access to a container in the storage account (default). Disabling this setting prevents all anonymous public access to the storage account. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).<br> <br> Enabling blob public access does not make blob data available for public access unless the user takes the additional step to explicitly configure the container's public access setting. |
| Security | Enable storage account key access | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). | | Security | Default to Azure Active Directory authorization in the Azure portal | Optional | When enabled, the Azure portal authorizes data operations with the user's Azure AD credentials by default. If the user does not have the appropriate permissions assigned via Azure role-based access control (Azure RBAC) to perform data operations, then the portal will use the account access keys for data access instead. The user can also choose to switch to using the account access keys. For more information, see [Default to Azure AD authorization in the Azure portal](../blobs/authorize-data-operations-portal.md#default-to-azure-ad-authorization-in-the-azure-portal). | | Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). |
The following table describes the fields on the **Advanced** tab.
The following image shows a standard configuration of the advanced properties for a new storage account. ### Networking tab
The following table describes the fields on the **Networking** tab.
| Section | Field | Required or optional | Description | |--|--|--|--|
-| Network connectivity | Connectivity method | Required | By default, incoming network traffic is routed to the public endpoint for your storage account. You can specify that traffic must be routed to the public endpoint through an Azure virtual network. You can also configure private endpoints for your storage account. For more information, see [Use private endpoints for Azure Storage](storage-private-endpoints.md). |
+| Network connectivity | Network access | Required | By default, incoming network traffic is routed to the public endpoint for your storage account. You can specify that traffic must be routed to the public endpoint through an Azure virtual network. You can also configure private endpoints for your storage account. For more information, see [Use private endpoints for Azure Storage](storage-private-endpoints.md). |
| Network connectivity | Endpoint type | Required | Azure Storage supports two types of endpoints: standard endpoints (the default) and Azure DNS zone endpoints (preview). Within a given subscription, you can create up to 250 accounts with standard endpoints per region, and up to 5000 accounts with Azure DNS zone endpoints per region. To learn how to view the service endpoints for an existing storage account, see [Get service endpoints for the storage account](storage-account-get-info.md#get-service-endpoints-for-the-storage-account). | | Network routing | Routing preference | Required | The network routing preference specifies how network traffic is routed to the public endpoint of your storage account from clients over the internet. By default, a new storage account uses Microsoft network routing. You can also choose to route network traffic through the POP closest to the storage account, which may lower networking costs. For more information, see [Network routing preference for Azure Storage](network-routing-preference.md). |
The following table describes the fields on the **Data protection** tab.
|--|--|--|--| | Recovery | Enable point-in-time restore for containers | Optional | Point-in-time restore provides protection against accidental deletion or corruption by enabling you to restore block blob data to an earlier state. For more information, see [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).<br /><br />Enabling point-in-time restore also enables blob versioning, blob soft delete, and blob change feed. These prerequisite features may have a cost impact. For more information, see [Pricing and billing](../blobs/point-in-time-restore-overview.md#pricing-and-billing) for point-in-time restore. | | Recovery | Enable soft delete for blobs | Optional | Blob soft delete protects an individual blob, snapshot, or version from accidental deletes or overwrites by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted object to its state at the time it was deleted. For more information, see [Soft delete for blobs](../blobs/soft-delete-blob-overview.md).<br /><br />Microsoft recommends enabling blob soft delete for your storage accounts and setting a minimum retention period of seven days. |
-| Recovery | Enable soft delete for containers | Optional | Container soft delete protects a container and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted container to its state at the time it was deleted. For more information, see [Soft delete for containers (preview)](../blobs/soft-delete-container-overview.md).<br /><br />Microsoft recommends enabling container soft delete for your storage accounts and setting a minimum retention period of seven days. |
+| Recovery | Enable soft delete for containers | Optional | Container soft delete protects a container and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted container to its state at the time it was deleted. For more information, see [Soft delete for containers](../blobs/soft-delete-container-overview.md).<br /><br />Microsoft recommends enabling container soft delete for your storage accounts and setting a minimum retention period of seven days. |
| Recovery | Enable soft delete for file shares | Optional | Soft delete for file shares protects a file share and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted file share to its state at the time it was deleted. For more information, see [Prevent accidental deletion of Azure file shares](../files/storage-files-prevent-file-share-deletion.md).<br /><br />Microsoft recommends enabling soft delete for file shares for Azure Files workloads and setting a minimum retention period of seven days. | | Tracking | Enable versioning for blobs | Optional | Blob versioning automatically saves the state of a blob in a previous version when the blob is overwritten. For more information, see [Blob versioning](../blobs/versioning-overview.md).<br /><br />Microsoft recommends enabling blob versioning for optimal data protection for the storage account. | | Tracking | Enable blob change feed | Optional | The blob change feed provides transaction logs of all changes to all blobs in your storage account, as well as to their metadata. For more information, see [Change feed support in Azure Blob Storage](../blobs/storage-blob-change-feed.md). |
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
When deploying Azure file shares into storage accounts, we recommend:
## Identity To access an Azure file share, the user of the file share must be authenticated and authorized to access the share. This is done based on the identity of the user accessing the file share. Azure Files integrates with four main identity providers: - **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS)**: Azure storage accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a storage account is domain-joined, the end user can mount a file share with the user account they signed into their PC with. AD-based authentication uses the Kerberos authentication protocol.-- **Azure Active Directory Domain Services (Azure AD DS)**: Azure AD DS provides a Microsoft-managed domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD DS provides similar benefits to domain joining it to a customer-owned Active Directory. This deployment option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
+- **Azure Active Directory Domain Services (Azure AD DS)**: Azure AD DS provides a Microsoft-managed domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD DS provides similar benefits to domain joining it to a customer-owned AD DS. This deployment option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
- **Azure Active Directory (Azure AD) Kerberos for hybrid identities**: Azure AD Kerberos allows you to use Azure AD to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Azure AD to issue Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. - **Azure storage account key**: Azure file shares may also be mounted with an Azure storage account key. To mount a file share this way, the storage account name is used as the username and the storage account key is used as a password. Using the storage account key to mount the Azure file share is effectively an administrator operation, because the mounted file share will have full permissions to all of the files and folders on the share, even if they have ACLs. When using the storage account key to mount over SMB, the NTLMv2 authentication protocol is used.
-For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to behave like Windows file servers or NAS appliances, domain joining your storage account to **Customer-owned Active Directory** is the recommended option. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](storage-files-active-directory-overview.md).
+For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to behave like Windows file servers or NAS appliances, domain joining your storage account to **Customer-owned AD DS** is the recommended option. To learn more about domain joining your storage account to a customer-owned AD DS, see [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
If you intend to use the storage account key to access your Azure file shares, we recommend using private endpoints or service endpoints as described in the [Networking](#networking) section.
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
description: Learn how to create and use Azure file shares with the Azure portal
Previously updated : 10/24/2022 Last updated : 01/03/2023
az storage account create \
To create an Azure file share: 1. Select the storage account from your dashboard.
-1. On the storage account page, in the **Services** section, select **Files**.
+1. On the storage account page, in the **Data storage** section, select **File shares**.
![A screenshot of the data storage section of the storage account; select file shares.](media/storage-how-to-use-files-portal/create-file-share-1.png)
-1. On the menu at the top of the **File service** page, select **+ File share**. The **New file share** page drops down.
-1. In **Name** type *myshare*. Leave **Transaction optimized** selected for **Tier**.
+1. On the menu at the top of the **File shares** page, select **+ File share**. The **New file share** page drops down.
+1. In **Name**, type *myshare*. Leave **Transaction optimized** selected for **Tier**.
1. Select **Create** to create the Azure file share.
-Share names must be all lower case letters, numbers, and single hyphens but cannot start with a hyphen. For complete details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).
+File share names must be all lower-case letters, numbers, and single hyphens, and must begin and end with a lower-case letter or number. The name can't contain two consecutive hyphens. For details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).
# [PowerShell](#tab/azure-powershell)
First, you need to create or select a file to upload. Do this by whatever means
1. Select the **myDirectory** directory. The **myDirectory** panel opens. 1. In the menu at the top, select **Upload**. The **Upload files** panel opens.
- ![A screenshot of the upload files panel](media/storage-how-to-use-files-portal/upload-file-1.png)
+ :::image type="content" source="media/storage-how-to-use-files-portal/upload-file.png" alt-text="Screenshot showing the upload files panel in the Azure portal." border="true":::
1. Select the folder icon to open a window to browse your local files. 1. Select a file and then select **Open**.
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
You can use Azure File Sync as a workaround to access Azure Files from clients t
By setting up a VPN or ExpressRoute from on-premises to your Azure storage account, with Azure Files exposed on your internal network using private endpoints, the traffic will go through a secure tunnel as opposed to over the internet. Follow the [instructions to set up a VPN](storage-files-configure-p2s-vpn-windows.md) to access Azure Files from Windows. #### Solution 3 - Unblock port 445 with help of your ISP/IT Admin
-Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).
+Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
#### Solution 4 - Use REST API-based tools like Storage Explorer/PowerShell Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard TCP). There are various tools written using the REST API that enable a rich UI experience. [Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows) is one of them. [Download and Install Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to your file share backed by Azure Files. You can also use [PowerShell](./storage-how-to-use-files-portal.md), which also uses the REST API.
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
You can create host pools in the following Azure regions:
- Australia East - Canada Central - Canada East
+- Central India
- Central US - East US - East US 2
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Follow the directions in [Tutorial: Filter network traffic with a network securi
When you set up your NSG, you must configure it to allow both the URLs in the [required URL list](safe-url-list.md) and your private endpoints. Make sure to include the URLs for Azure Monitor.
+>[!NOTE]
+>If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. If you restrict ports to the endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
+ ## Validate your Private Link deployment To validate your Private Link for Azure Virtual Desktop and make sure it's working:
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
description: How to set up screen capture protection for Azure Virtual Desktop. Previously updated : 09/14/2022 Last updated : 01/03/2023 # Screen capture protection
-The screen capture protection feature prevents sensitive information from being captured on the client endpoints. When you enable this feature, remote content will be automatically blocked or hidden in screenshots and screen shares. Also, the Remote Desktop client will hide content from malicious software that may be capturing the screen.
+Screen capture protection prevents sensitive information from being captured on the client endpoints. When you enable this feature, remote content will be automatically blocked or hidden in screenshots and screen shares. Also, the Remote Desktop client will hide content from malicious software that may be capturing the screen.
## Prerequisites
-The screen capture protection feature is configured on the session host level and enforced on the client. Only clients that support this feature can connect to the remote session.
+Screen capture protection is configured on the session host level and enforced on the client. Only clients that support this feature can connect to the remote session.
-The following clients currently support screen capture protection:
+You must connect to Azure Virtual Desktop with one of the following clients to use screen capture protection:
- The Windows Desktop client supports screen capture protection for full desktops only. - The macOS client (version 10.7.0 or later) supports screen capture protection for both RemoteApps and full desktops.
-If a user tries to connect to a capture-protected session host with an unsupported client, the connection won't work and will instead show an error message labeled "0x1151."
- ## Configure screen capture protection To configure screen capture protection:
To configure screen capture protection:
## Limitations and known issues
+- If a user tries to connect to a capture-protected session host with an unsupported client, the connection won't work and will instead show an error message with the code `0x1151`.
- This feature protects the Remote Desktop window from being captured through a specific set of public operating system features and Application Programming Interfaces (APIs). However, there's no guarantee that this feature will strictly protect content in scenarios where a user were to take a photo of their screen with a physical camera. - For maximum security, customers should use this feature while also disabling clipboard, drive, and printer redirection. Disabling redirection prevents users from copying any captured screen content from the remote session. - Users can't share their Remote Desktop window using local collaboration software, such as Microsoft Teams, while this feature is enabled. When they use Microsoft Teams, neither the local Teams app nor Teams with media optimization can share protected content.
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The Key Vault VM extension is also supported on custom local VM that is uploaded
## Updates in Version 3.0 -- Adding ACL permission to downloaded certificates-- Store configuration per certificate
+- Ability to add ACL permission to downloaded certificates
+- Certificate Store configuration per certificate
- Exportable private keys ## Prerequisites
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
customize: [
type: 'WindowsUpdate' searchCriteria: 'IsInstalled=0' filters: [
- exclude:$_.Title -like '*Preview*''
- include:$true'
+ 'exclude:$_.Title -like \'*Preview*\''
+ 'include:$true'
] updateLimit: 20 }
virtual-machines Client Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/client-images.md
Certain Windows client images are available from the Azure Marketplace. Visual S
The following table details the offer IDs that are eligible to deploy Windows client images through the Azure Marketplace. The Windows client images are only visible to the following offers. > [!NOTE]
-> Image offers are under **Windows Client** in the Azure Marketplace. Use **Windows Client** when searching for client images available to Visual Studio subscribers. If you need to purchase a Visual Stuido subscription, see the various options at [Buy Visual Studio](https://visualstudio.microsoft.com/vs/pricing/?tab=business)
+> Image offers are under **Windows Client** in the Azure Marketplace. Use **Windows Client** when searching for client images available to Visual Studio subscribers. If you need to purchase a Visual Studio subscription, see the various options at [Buy Visual Studio](https://visualstudio.microsoft.com/vs/pricing/?tab=business)
| Offer Name | Offer Number | Available client images | |: |::|::|
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
The Remote User-to-branch path lets remote users who are using a point-to-site c
The VNet-to-VNet transit enables VNets to connect to each other in order to interconnect multi-tier applications that are implemented across multiple VNets. Optionally, you can connect VNets to each other through VNet Peering and this may be suitable for some scenarios where transit via the VWAN hub isn't necessary.
-## <a name="DefaultRoute"></a>Force tunneling and default route
+## <a name="DefaultRoute"></a>Forced tunneling and default route
-Force Tunneling can be enabled by configuring the enable default route on a VPN, ExpressRoute, or Virtual Network connection in Virtual WAN.
+Forced Tunneling can be enabled by configuring the enable default route on a VPN, ExpressRoute, or Virtual Network connection in Virtual WAN.
A virtual hub propagates a learned default route to a virtual network/site-to-site VPN/ExpressRoute connection if enable default flag is 'Enabled' on the connection.
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
When you configure your WAF, you can decide how the WAF handles requests that ex
For example, if the anomaly score is 5 or greater on a request, and the WAF is in Prevention mode with the anomaly score action set to block, the request is blocked. If the anomaly score is 5 or greater on a request, and the WAF is in Detection mode, the request is logged but not blocked.
-A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with anomaly score action set to block, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered it will show a "matched" action in the logs. If the anomly score is 5 or greater, there will be a separate rule triggered with the "blocked" action in the logs assuming the anomaly score action is set to block.
+A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with anomaly score action set to block, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered, it will show a "matched" action in the logs. If the anomaly score is 5 or greater, a separate rule will be triggered with the anomaly score action configured for the rule set. The default anomaly score action is block, which results in a log entry with the action "blocked".
When your WAF uses older version of the default rule set (before DRS 2.0), your WAF runs in the traditional mode. Traffic that matches any rule is considered independently of any other rule matches. In traditional mode, you don't have visibility into the complete set of rules that a specific request matched.
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
For example, suppose your requests include this header:
My-Header: 1=1 ```
-The value of the header (`1=1`) might be detected as an attack by the WAF. But if you know this is a legitimate value for your scenario, you can configure an exclusion for the *value* of the header. To do so, you use the **RequestHeaderValues** request attribute, and select the header name (`My-Header`) with the value that should be ignored.
+The value of the header (`1=1`) might be detected as an attack by the WAF. But if you know this is a legitimate value for your scenario, you can configure an exclusion for the *value* of the header. To do so, you use the **RequestHeaderValues** match variable, the operator **contains**, and the selector (`My-Header`).
> [!NOTE] > Request attributes by key and values are only available in CRS 3.2 or newer and Bot Manager 1.0 or newer.